Url: ftp://90.188.39.6/ | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.mkv | 7,422 | 3.48 TiB |
.avi | 2,607 | 626.78 GiB |
.mp4 | 1,672 | 562.61 GiB |
(no extension) | 779 | 150.72 GiB |
.mp3 | 17,948 | 108.34 GiB |
Dirs: 5,409 Ext: 97 | Total: 54,956 | Total: 5.04 TiB |
Date: 2020-01-05 14:51:45 | Time: 00:13:20 |
Fancy telling me where you found it, if you run it, how you know it's 10TB and then removing the post so I can mirror it properly without it being bogged down by random people hitting it.
If so, I'll serve it back to everyone without bandwidth limits, no data lost.
/u/iamjoshybear looks like you got us locked out, well done.
Got access from the owner, /u/oVPN_to thanks dude, mirror will be up on
~~Mirror, leech at will.~~
DMCA hit this directory, unfairly I should add, as it was a generic request to our host from PRS and not an official DMCA request.
You can set up IFTTT to do this for you. (It's free; I use it to automate quite a few things for myself.) Here is a link to the recipe you would need for new post notifications: https://ifttt.com/applets/235137p-reddit-new-posts-in-subreddit-notification
I've said it before and I'll say it again, as someone who runs the largest on purpose open directory for this community, if you have to ask you're not qualified. All you'll do is waste your time, your money and ultimately fail in what you wish to achieve.
If you want to share your files with this community send them to me and I'll host them on the-eye.
These are all in pdf? If you mean the entire Playboy collection, The Eye hosts an impressive library. Plenty of other issues can also be found on the web, both digital and scanned.
All bitdownload.ir directories on FP
https://filepursuit.com/discover.php?startrow=300
Edit: WTF guys, stop running automated scripts on my website, you'll force me to enable rate limiting on it.
Edit2: Thanks http://imgur.com/a/Zg3Js
But also...
>I have a server, but it costs 120$ per year.
I have a server, but it costs $300 per month!!
Why can't you afford $120 a year, and what even is this project?
I mirrored freemozaweb.eu/k%C3%B6nyvek/
and got 16.8 GiB (53,359 items)
and that's here.
And this companion book is HILARIOUS. Our copy arrived yesterday. It’s a “children’s” book that is definitely not written for kids.
lol, i was confused because I didn't recognize the text of your title, but the link is purple as fuck Bleddyn_
I'm glad to see your enthusiasm though. here's a tip: https://filepursuit.com
search for anything and you'll find a lot of these domains yo
Actually, RAID 5 is generally not a good idea anymore.
There's a very good chance that in the event of a drive failure you will lose the entire array, either through a second drive failure, or an unrecoverable read error while rebuilding the array.
RAID 6 (two parity drives) gives more protection against drive failure, but still doesn't protect against user error, malicious software, fire, etc.
So if data is important, an offline backup is always a good idea.
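If you do go the array route, here's a minimal sketch with Linux mdadm, assuming four disks whose device names are placeholders - and note the last line is the offline backup step that RAID itself never replaces:

```bash
# Hypothetical 4-disk RAID 6 array (two parity disks); adjust device names.
sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0 && sudo mount /dev/md0 /mnt/array

# The offline backup: copy to a disk you then physically disconnect.
sudo rsync -a /mnt/array/ /mnt/offline-backup/
```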
Guys, be careful. Here is a comment from /r/piracy about this guy:
It's you again... I don't think some of us forgot about your cryptic (i.e. shady) usage policy, let alone the comments you made and deleted here on /r/piracy during the previous month when trying to advertise your website.
Not only that, but you've been spamming /r/opendirectories as well: a lot of your submissions try to claim that you found this great site, and/or carry misrepresentative titles about your website, and then you comment in to talk more about your website as if you were a regular visitor of the said site.
You've done a masterful job at hiding and deleting a lot of your comments in the past, which included blanket statements of how you collect analytics just like every other site does... this is not transparency - this is misleading at best.
From your usage policy at the current time of posting this comment:
https://filepursuit.com/usage.html
> We also reserve the rights to publish any information regarding violations. Info hashes, IP addresses and all other information that is supplied to the site will be considered our right to publish.
> This policy may change at any time, please check in before using the site.
> Connecting to our site means that you accept this policy agreement.
Previous reference comment of mine about this (notice all of the deleted comments, which were all by /u/azharxes):
Thanks, I just checked that out. Looks like open source stuff. I'm hoping to find things like iPlanet, Oracle, Peoplesoft, legal software, factory software, firmware, commercial ISOs, etc. Maybe if I start a subreddit and kick things off by uploading the few I have access to, it will catch on? Haven't found many good torrents for this stuff either.
EDIT: https://the-eye.eu/public/MSDN is the best I found so far. Thanks to this sub for helping me find that.
Do not use this mirror, it is slow. This content is mirrored from The Eye. If you want a faster experience, try downloading from us directly.
https://the-eye.eu/public/Albums/1001%20Albums%20You%20Must%20Hear%20Before%20You%20Die/
Thanks!
Holy sh1t, technology is great. I even found a mix I've been looking for for 10 years, since I lost the CD. Couldn't find it anywhere.
No, you absolutely cannot be sure if you do not mask your identity (VPN, open wireless network, etc.). I guarantee you there are copyright-friendly entities watching this subreddit, waiting for the option to pounce on ODs that are sharing their info. And you can sure as hell bet they are going to be monitoring this tracker at some point too. Big corporate has plenty of money and little sense. Instead of embracing tech, they fight it.
Check out Private Internet Access for a VPN. They support torrenting and have a tested "no logs" policy. I have my subscription via BTC.
I made several comments here and elsewhere discussing Google search operators. I'll update this with the actual comment soon.
As promised.
I posted this in the piracy sub not too long ago
I'll very briefly caveat in case you're unfamiliar. As with any operators, commands, scripts, etc., every character holds value (spaces will kill your attempt). That being said, don't expect magical results in your first few attempts. You can search for "google search commands" or "operators" to read up on it, but you'll have to filter through the BS sites that are listed up front. I ran across a decent one here
Below is what I've used for books. copy and paste & change as needed (recommend doing so after reviewing this)
-inurl:htm -inurl:html intitle:"index of" winnie the pooh pdf

Swap index of and/or winnie the pooh pdf with your desired term(s), combo, even file extension. The - tells the search to omit; in this case, omit htm and html. See the operators below for an example of how I wanted "html" in the url.

Here's an easy one I did for a friend who wanted to search craigslist nationwide for a Chevelle between the years of 1965-1972 for $6000 or less without having to use horrendous websites/apps. (I always use the $2 as minimum to eliminate the ads shown as $1 or without a price, though you'll still get results showing an ad for $35 and the body will say something like "$35,000" - just takes refinement to get rid of those results)
inurl:"html" intitle:"chevelle" intitle:"1965..1972" intitle:"$2..$6000" site:"craigslist.org"
Edit: corrected what I meant to provide.
Url: https://johnsoncn.com/toolbox/ | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.exe | 58 | 1.48 GiB |
.zip | 4 | 20.31 MiB |
.msi | 4 | 16.41 MiB |
.jar | 1 | 57 kiB |
.bat | 6 | 8.04 kiB |
Dirs: 19 Ext: 8 | Total: 78 | Total: 1.51 GiB |
Date: 2020-01-06 20:40:15 | Time: 00:00:03 | Speed: 30.6+ MB/s (245+ mbit) |
https://the-eye.eu/public/Radio/loveline.phil21.net/
This is actually worth a read; this guy put in some work!! And I can't see this site going anywhere either - the server (and cache?) peaked at 501 MB/s when I was mirroring it!! Like, I saw this comment just now - downloaded in under 3 minutes...
See mod_status. It is usually IP address restricted.
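For reference, a minimal sketch of what that usually looks like on Apache 2.4 under Debian/Ubuntu; the allowed IP is a placeholder:

```bash
# Enable mod_status and restrict the status page to one admin IP.
sudo a2enmod status
sudo tee /etc/apache2/conf-available/server-status.conf >/dev/null <<'EOF'
<Location "/server-status">
    SetHandler server-status
    Require ip 192.0.2.10
</Location>
EOF
sudo a2enconf server-status
sudo systemctl reload apache2
```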
I've written a Python script to get all the download URLs in one place so we can use wget or a similar tool. I've captured around 4500 URLs so far. It's still running.
EDIT: It's done. Updated list of links - you can now use wget.
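Once the list exists, the wget side is a one-liner; a sketch assuming the links were saved to a file called urls.txt (hypothetical name):

```bash
# -c resumes partial downloads, -i reads URLs from the file, -P sets the target dir.
wget --continue --input-file=urls.txt --directory-prefix=downloads/
```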
This is awesome. Now I just separate all the text files into a pile, and use Notepad++ to find keywords for what I want.
Since I am interested in photography, I just typed "photo" into the Find feature, and bam... all the URLs I am interested in pop up. Use them however you want.
Yes, it performs really well too. Wii, Wii U and Gamecube have really good emulators and run well.
Dolphin Emulator for Wii & Gamecube
Cemu for Wii U
Start a web server and don't include an index.html in the main directory...
Here, go to https://www.000webhost.com/ and sign up for a free web server. Don't include an index.html for the site. Everyone will then be able to see the contents of the other files in the directory - as if it were ----- open or something....
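A quick local demo of the effect, assuming Python 3 is installed - serve a folder that has no index.html and the server auto-generates a listing, i.e. an open directory:

```bash
mkdir -p share && cp -r ~/stuff/* share/   # hypothetical content
cd share
python3 -m http.server 8080                # browse http://localhost:8080/ for the listing
```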
Holy shit that thing's huge. I'm gonna need a day. Or ten. No bamboozle though.
EDIT: I'll probably start around Monday. I'll be editing this post with current status.
EDIT2: Done. Only took 23 days. Download here.
EDIT3: Bayfiles mirror, just in case. ^^And ^^thanks ^^for ^^the ^^gold
As another has said, running this from home would kill your bandwidth.
If you insist, I would first set up a virtual machine to run everything on. Then make sure it's completely isolated from the other devices on your network. Routers usually have a setting for this, like a VLAN.
Next you'll probably want to run it from behind a VPN at least. I always recommend AirVPN. Install a VPN on it and only forward the ports through your VPN, and make sure to bind everything to the VPN's TAP adapter instead of your actual physical ethernet one.
For the server itself, basic is better. The more basic your software setup is, the less chance it will be hackable. If you want a website or something to go with it, make sure everything is static. Things like Jekyll and Hugo are good for this.
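A sketch of that static route, assuming Hugo is installed (a brand-new site with no theme builds to a mostly empty shell, but it shows the workflow):

```bash
hugo new site mysite && cd mysite
hugo                                            # writes static HTML into ./public
python3 -m http.server --directory public 8080  # serve only the generated files
```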
Honestly though, I wouldn't do this myself. The bandwidth hit is just too much. If you really want to share everything, consider buying a large seedbox and hosting the files there. FeralHosting is a good one.
Url: https://www.surf-communication.com/ | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.mp4 | 10,826 | 1.17 TiB |
Dirs: 42 Ext: 1 | Total: 10,826 | Total: 1.17 TiB |
Date: 2019-11-24 19:57:13 | Time: 00:00:31 | Speed: 4.5 MB/s (36 mbit) |
. . . . .well fuck - thank you!
Now I just gotta scan all of these and make a database of all words - which you can do with DocFetcher (http://docfetcher.sourceforge.net/en/index.html) - it's an offline database creator, sorta like Google but for your own documents.
For a Linux client, I am using eiskaltdcpp from the Ubuntu repos. Click favorite hubs -> add new. It took me a while to figure that out, so I hope this helps someone.
edit: much better version (eiskaltdcpp-gtk) available through ppa:tehnick/tehnick
Audible has some very good deals outside of the regular subscription.
Search for bargains on Amazon Kindle books. Sometimes you can get them free -- sometimes for a dollar. Once you've bought the kindle version (even if it was free) you're eligible for a huge discount on the Audible version.
Choosing one at random, this Kindle book is currently free. "Buy" it for free, and add the Audible version for only $1.99. The audio version costs 14 bucks if you don't buy the Kindle version. It's nutty, but that's how it works.
Best way to find these is by selecting the "Whispersync for Voice" option on the side menu. Not all books have this good of a deal, but many do.
Also, never pass up the chance to get a free Kindle book, even if it doesn't have an Audible version. You can always go back later and get the same deal on the audiobook when it's released.
Use the Amazon Matchmaker to find Audible deals on Kindle books you already own.
Always remember that your Audible subscription means you're paying $14 per book if all you get is the one book a month. Game the system, and get a lot more for cheap.
Cnet says:
Although it is said to be for overall women's health, this application is geared towards conception. 4WomenOnly records your menstrual cycle, calculates future cycles, determines fertile days of the month, and ideal conception dates. Thankfully, the calendar wizard helps you set up since the application's interface is quite cluttered. The extensive help menu explains some issues regarding women's health, including modern medical procedures for infertility, and it uses color-coding to indicate menstrual cycles, ovulation days, and increased probabilities of conceiving a specific gender. You also can track temperatures, cervical fluids, and estimated due dates to establish monthly patterns to best gauge fertility and conception. You can password-protect 4WomenOnly and save and print your data in XML, HTML, and TXT formats. Overall, this 30-day trial program is ideal for women who wish to learn more about their fertility cycles to plan or prevent pregnancies.
Url: http://www.la-star.com/p/pics/ | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.jpg | 48,665 | 7.97 GiB |
.jpeg | 120 | 4.23 MiB |
.jpg-v-0 | 46 | 2.92 MiB |
(no extension) | 19 | 2.03 MiB |
.jpg-__squarespace_cacheversion-1221499861617 | 2 | 1.72 MiB |
Dirs: 2,475 Ext: 94 | Total: 356,259 | Total: 7.99 GiB |
Date: 2019-12-01 20:00:08 | Time: 00:01:35 | Speed: 7.4 MB/s (59 mbit) |
After looking around a bit, I couldn't find a way to get the whole largest image from a single URL. On the site itself, the images are served in pieces, and loaded and tiled with a map lib called Leaflet.
But if you are feeling adventurous, getting the pieces and reassembling them is not impossible. Here are some hints on that:
Each tile is located at a URL of the form http://quod.lib.umich.edu/cgi/i/image/api/tile/{c}/{entryid}/{viewid}/{z}-{x}-{y}.jpg

- {c} seems to always be "lbc2ic".
- {entryid} is the id of the image.
- {viewid} doesn't seem to be interpreted by the server. The site sets it to values like "SCLP_0001" - similar to the entry ID.
- {z} denotes the zoom level. Use 0 for the maximum one (original image size).
- {x} and {y} are integers, representing an offset of 256 px per unit (with (0, 0) being the top-left corner).

The API serves empty files for x, y, and z values that are too large to make sense.
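A hedged sketch of the fetch-and-stitch idea with wget plus ImageMagick's montage; ENTRY, VIEW, COLS and ROWS are placeholders you would determine by probing (remember the API returns empty files past the edges):

```bash
BASE="http://quod.lib.umich.edu/cgi/i/image/api/tile/lbc2ic"
ENTRY="some-entry-id"; VIEW="SCLP_0001"; COLS=8; ROWS=6
files=()
for ((y=0; y<ROWS; y++)); do
  for ((x=0; x<COLS; x++)); do
    wget -q -O "tile_${y}_${x}.jpg" "$BASE/$ENTRY/$VIEW/0-$x-$y.jpg"
    files+=("tile_${y}_${x}.jpg")            # collect in row-major order
  done
done
# montage lays the tiles out left-to-right, top-to-bottom; +0+0 keeps them flush.
montage "${files[@]}" -tile "${COLS}x${ROWS}" -geometry +0+0 full_image.jpg
```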
Well it obviously died then didn't it.
It's not a good idea to think of "Mirrors" being available for these. If anything, they are the mirrors. Albeit unintentionally.
Which is also why this one has gone away.
[09:05:08 fb@edge01:~ ] # host home.innet.pro
home.innet.pro has address 95.73.137.183
The host actually still exists and resolves.
[09:01:03 fb@edge01:~ ] # nmap -Pn home.innet.pro -p 21,80,443
Starting Nmap 6.40 ( http://nmap.org ) at 2018-08-28 09:05 AEST
Nmap scan report for home.innet.pro (95.73.137.183)
Host is up.
PORT    STATE    SERVICE
21/tcp  filtered ftp
80/tcp  filtered http
443/tcp filtered https
But it no longer has any open ports. Either the IP was changed or, the more likely answer, the hosting webserver was taken down or told not to listen publicly.
The parent folder contains multiple folders with all kinds of other music. The “fire” folder for example contains a lot of rap music.
I think these songs belong to this app. http://download.cnet.com/Clue-Radio/3000-31709_4-77290230.html
We support this. However, /u/Xosrov_ I'd prefer you link these on the GitHub rather than the JSON on Google Drive, for preservation reasons. Also denote the differences between piracy/no piracy, because even I was presumptive, heh. Edit:
> https://the-eye.eu/dbP.json.7z
> https://the-eye.eu/dbNoP.json.7z
Also are you the dude from discord I forget the name of that was hashing the site? If so shout at me there.
You need to alter the options you're using a bit: turn on recursive downloading (-r) for a start. Also, most of the pictures on the subreddit aren't actually hosted on reddit.com, so you'll need to use options for Spanning Hosts. I'm sure other people can fine-tune your command usage more precisely...
Also keep in mind that wget is mostly designed for sites with relatively simple, static designs - sites like Reddit can pose problems whether or not your wget usage is correct.
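As a hedged starting point rather than a tuned command (the subreddit URL and host list are just examples):

```bash
# -r/-l 2: recurse two levels; -H/-D: span only to the listed image hosts;
# -A: keep images only. Not guaranteed against Reddit's dynamic pages.
wget -r -l 2 -H -D old.reddit.com,i.redd.it,i.imgur.com -A jpg,jpeg,png,gif \
     -e robots=off https://old.reddit.com/r/pics/
```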
>go to the release page
>download and run the .jar file under the latest release
You should be able to double click it in windows once java's installed, if not the command line is
java -jar /path/to/where/you/saved/the/jar/file/ripme.jar
Once it's open it's fairly straightforward - paste the url (it can even catch urls from the clipboard), set the destination directory and click "Rip". There are bells and whistles; take some time to have a look at the settings.
And enjoy your tame no-nipples tumblr pr0n!
Hey, I'm pretty new to this, but maybe this will help someone, or maybe someone can simplify my process. I went to that sirus20x6 link, saw a bunch of movies, thought it would be nice to have those, so I saved the web page, then opened that with a text editor, and took that garbled mess over to regexr.com and regexed it into just a series of links. Then I took that and saved it to a text file. Then I uploaded that text file to *my* seedbox, and just hit wget -i <file> to download everything from that directory without using my local internet. Seems pretty clean and functional. You guys are totally gonna tell me there's just a program that does that, aren't you?
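Nothing wrong with that workflow; a hypothetical shortcut for the save-and-regex step is to pull the hrefs out with grep and feed them straight to wget on the seedbox (file names are made up):

```bash
grep -oE 'href="[^"]+"' saved_page.html | cut -d'"' -f2 > links.txt
wget -i links.txt
```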
hey reddit!
that was me who collected this stuff, and it shouldn't go offline within the next few days...
the stuff has been there for 2-3 years and I have copies too, just in case any dumbster tries to send an abuse. love you!
we have to organize the share because you are leeching the server down:
Will give access to exactly 1 trustworthy person atm, anyone who is able to share the stuff on their own server(s) too.
we have many classics in there and that's all from emule!
some index is here:
get it from emule and share it yourself! never received any abuse for these sets!
that's culture and important part of our history! ;)
thank you
Peace!
The eye responds to DMCA take downs, so while they might host copyrighted material, if the copyright owner contacts them they will remove it: https://the-eye.eu/dmca/
Alexandria Library was shut down due to legal pressure from Pluralsight for hosting pirated study materials for IT certifications: https://www.reddit.com/r/opendirectories/comments/87oqxy/alexandria_library_is_burning_again_grab_n_go/ so I highly doubt that material will ever show up on The Eye.
No cached version of this page is available. Error 520 Ray ID: 39177cbd8d3e56c3 • 2017-08-20 18:41:54 UTC Web server is returning an unknown error
Error 525 Ray ID: 391780ab9c6756c3 • 2017-08-20 18:44:41 UTC SSL handshake failed
You can locate file listings by searching for websites where the title contains "index of /". And then you can search the HTML of those files for "mkv" if you only want to see servers that list them. Here's those 2 concepts combined:
https://www.shodan.io/search?query=http.title%3A%22index+of+%2F%22+http.html%3Amkv
Disclaimer: we look at this sort of information to help flag those servers as high-risk and potentially compromised.
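The same query also works from the Shodan CLI if you prefer scripting it; a sketch assuming `pip install shodan` and an API key:

```bash
shodan init YOUR_API_KEY
shodan search 'http.title:"index of /" http.html:mkv'
```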
It's hosted at IBM...
Flip the ftp to http -> http://170.225.15.61/
https://ipinfo.io/AS18703/170.225.15.0/24
Whois Details
NetHandle: NET-170-224-0-0-1
OrgID: IBM-1
Parent: NET-170-0-0-0-0
NetName: IBM-COMMERCIAL
NetRange: 170.224.0.0 - 170.227.255.255
NetType: assignment
RegDate: 1995-04-21
Updated: 2007-01-31
AbuseHandle: ORGAB-ARIN
Source: ARIN
OrgID: IBM-1
OrgName: IBM
CanAllocate:
Street: 3039 Cornwallis Road
City: Research Triangle Park
State/Prov: NC
Country: US
PostalCode: 27709-2195
RegDate: 1992-02-08
Updated: 2017-11-30
OrgTechHandle: RAIN-ARIN
OrgAdminHandle: RAIN-ARIN
OrgAbuseHandle: RAIN-ARIN
Source: ARIN
Look into a program called Nmap. It should be able to do what you want. Here is a link to the reference guide on their website: http://nmap.org/book/man.html - I do believe it will do what you want (or at least get you started), but it can be a little complicated. Spend some time learning its syntax and give it a shot. Report back and let us know if you were successful, or if you run into trouble.
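As a hedged starter, not tailored to your exact goal (the /24 range is an example):

```bash
# Skip host discovery (-Pn), report only hosts with the listed ports open.
nmap -Pn --open -p 21,80,443,8080 192.0.2.0/24
```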
EDIT: There are a couple other programs I remembered. Wireshark is one; it is also very powerful and might accomplish what you want. Check out this list for some more ideas: http://sectools.org/tag/sniffers/
A firefox add-on like this one lets you grab all links on a page.
A downloader like this one lets you paste those links as a download task.
That is very annoying! Especially with external drives being so cheap.
A great backup program is anything that is rsync based. Grsync is free & open-source GUI front-end for rsync that works in Windows, Mac OS, and Linux. Get it!
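A bare-bones sketch of the kind of rsync invocation Grsync wraps (paths are placeholders):

```bash
# -a preserves permissions/times, -h is human-readable, --delete makes an exact mirror.
rsync -avh --delete /home/me/data/ /mnt/backup/data/
```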
To answer the question for Android users: Listen Audiobook Player is by far the best mp3 audiobook player available. It costs like a dollar, but it far outshines its competition - i.e. Mort Audiobook Player.
Take it from a guy that's dropped some $8,000 on Audible audiobooks.
https://play.google.com/store/apps/details?id=app.greyshirts.sslcapture
It's a non-root packet capture app. Though you will need to patiently go through whatever it logs, because there will be a lot of connections to Google's servers (Play Services and the like)... Yes, could you tell me about the app you are using for the magazines? Seems interesting.
Url: https://www.hwinfo.com/Chernobyl/ | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.jpg | 175 | 94.54 MiB |
(no extension) | 27 | 56.31 MiB |
.tif | 10 | 6.09 MiB |
.ppt | 1 | 4.7 MiB |
.png | 3 | 1.15 MiB |
Dirs: 14 Ext: 9 | Total: 224 | Total: 164.32 MiB |
Date: 2019-05-22 14:30:03 +00:00 | Time: 00:00:03 | Speed: 7.5 MB/s (60 mbit) |
This might help..found it here.
http://www.gnu.org/software/wget/manual/wget.html#Recursive-Accept_002fReject-Options
2.12 Recursive Accept/Reject Options
‘-A acclist --accept acclist’
‘-R rejlist --reject rejlist’
Specify comma-separated lists of file name suffixes or patterns to accept or reject (see Types of Files). Note that if any of the wildcard characters, ‘*’, ‘?’, ‘[’ or ‘]’, appear in an element of acclist or rejlist, it will be treated as a pattern, rather than a suffix. In this case, you have to enclose the pattern into quotes to prevent your shell from expanding it, like in ‘-A "*.mp3"’ or ‘-A '*.mp3'’.
‘--accept-regex urlregex’
‘--reject-regex urlregex’
Specify a regular expression to accept or reject the complete URL.
‘--regex-type regextype’
Specify the regular expression type. Possible types are ‘posix’ or ‘pcre’. Note that to be able to use ‘pcre’ type, wget has to be compiled with libpcre support.
You'd have to specify the directories in a list then. Either output all the directories you currently have for the exclude option, or the ones left for the include option.
http://www.gnu.org/software/wget/manual/html_node/Recursive-Accept_002fReject-Options.html
Yes, FileZilla, because of FTP, which is removed in the newest versions of Chrome/Edge.
https://filezilla-project.org/download.php
Or portable version:
https://portableapps.com/apps/internet/filezilla_portable
And then use the directory above, not a direct link to a file, like this:
Here's a good explanation: http://www.makeuseof.com/tag/how-to-find-unprotected-website-directories-get-interesting-files/
There is also an interface on this page: http://palined.com/search/ that builds the search strings for you, depending on what type of files you're looking for. Try it.
-P <Path of download directory>
Personally I use the DownThemAll extension for Firefox.
The total filesize would be much less if you skip the jpgs/epubs and ONLY download the pdfs.
> How do I trust that the providers don't monitor or find any personal info or financial info if I'm banking online and stuff.
I don't use VPNs as such, but I'd definitely look for "No Logging".
When I do use a vpn I use openvpn and a config file from vpngate.
A little PROTIP:
if you have a halfway decent bank and use a VPN for banking, be prepared that they may stop payments made while you're using the VPN, as your IP isn't likely to be from the same country you normally use.
I don't see this program recommended enough on this sub: VisualWget (here or here, I can't remember which has the latest). It gives you all of the control of wget without having to memorize command flags. I set up a single default profile with everything the way I like it, so that each new task has those defaults and I can tweak them if necessary.
I usually grab everything.
~~Having said that, anyone got an estimate on size? Only got ~3TB free.~~
Thanks to Pascal over at the-eye discord, we now know /miku/ is 125.6GB. Speeds are decent but response time is in minutes, so will take a few ~~hours~~ days (see my other comment). There's a torrent here.
The PD is ~4.8TB. Good luck!
Yes, the non-HTTPS mirror is now down; to cover why, this was a PM I sent to another hoarder.
>
> I don't support wget on windows for reasons like this, or windows in general. The non-HTTPS mirror will be down for the foreseeable future and this is out of my hands. I'm not willing to lessen the security on the remaining server because you can't figure out how to download these by other means; wget on windows is moot for reasons like the problem you're experiencing, and you're better off using a manager like IDM if you're married to windows.
>
> https://datahoarder.eieidoh.net/msdn/
If you're not on Windows and you're using a real OS and version of wget: google it, find out whether you need to recompile wget, and do so (if --no-check-certificate doesn't work).
IPFS would be perfect for something like this! How to with IPFS:

ipfs add -r <dir>

which gives you a <hash>. Assuming the daemon is running, you're "hosting" the data now. You can share the hash with people, who can then either run

ipfs get <hash>

which is just going to download the data from a peer, or

ipfs pin add <hash>

which is going to download the data and redistribute it. This also adds the benefit that the hash you'd post identifies the data, so it cannot be changed just like that.
Yeah, VPNs are really important. But when using one, make sure that it doesn't keep logs. ExpressVPN is recommended for speed, ProtonVPN for security, and if you want some all-round good ones, use Nord or Mullvad.
Note, these books are not at all the best way to learn about Buddhism. Look up Plum Village and Thich Nhat Hanh if you are new to the religion and want to learn about Buddhism. He is the modern master of Mindfulness and an expert on teaching Buddhism as it applies to modern daily life.
This autobiography of the Dalai Lama is also a good introduction to Buddhism and an inspiring read. https://www.amazon.com/My-Land-People-Original-Autobiography/dp/0446674214/
> And if I set up this server and then create a hotspot and connect other devices to that hotspot, will the files be accessible?
Yes. Alternatively you can try this app which has WiFi hotspot feature integrated: https://play.google.com/store/apps/details?id=com.medhaapps.wififtpserver
Almost all the stock market information on the internet is someone who is trying to lead you astray with (at best) some gimmick.
Exception: https://www.bogleheads.org/
Here's an audio book:
https://www.amazon.com/Random-Walk-Down-Wall-Street/dp/B0118LNMA4
this has already been posted here.
and it is not 10 TB.
/Incoming/ == 2.5 TB Total
and
== 1.5 TB
~~mirror is now private (closed) again, because~~ people are panic leeching the server to death and we can't even stream a single mp3 without crackling.
~~READ ME~~
If you want to mirror the stuff, stick with 1 connection only. thank you.
~~READ ME~~
An archivist got access and (s)he'll mirror the stuff.
We can rsync updates to any remote SSH server; send me a PM with your SSH user credentials (requirements: user with rssh shell, only rsync allowed, and rsync installed). Your server should have at least 1.6 TB of free space to mirror the complete folder.
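For anyone setting this up, a hedged sketch of the kind of rsync-over-SSH push being described; user, host and paths are made up:

```bash
rsync -avz --partial --progress /srv/collection/ \
      mirroruser@mirror.example.org:/srv/mirror/collection/
```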
an index is available here:
if anyone else wants to provide a public mirror: send me PM!
peace
Only 1 of the 5 are 'supported', will look into it later/someday.
Url: - | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.iso | 13 | 448.54 GiB |
.mkv | 6 | 39.8 GiB |
.rar | 23 | 22.22 GiB |
Dirs: 4 Ext: 3 | Total: 42 | Total: 510.57 GiB |
Date (UTC): 2020-06-17 18:02:50 | Time: 00:00:49 | Speed: 30.1+ MB/s (241+ mbit) |
^(Created by KoalaBear84's OpenDirectory Indexer)
Url: ftp://ftp.hs-niederrhein.de/pub/ | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.deb | 825,987 | 763.51 GiB |
.iso | 138 | 137.2 GiB |
.xz | 85,586 | 92.39 GiB |
(no extension) | 35,438 | 23.38 GiB |
.bz2 | 4,453 | 20.71 GiB |
Dirs: 71,674 Ext: 2,718 | Total: 1,139,373 | Total: 1.07 TiB |
Date (UTC): 2020-06-11 20:56:45 | Time: 00:17:07 |
^(Created by KoalaBear84's OpenDirectory Indexer)
Since you haven't been specific
Also I'd call this a rule 2 violation
Yeah it's a stretch as OP hasn't akshully asked but it's implicit.
The Download Assistant for Chrome that this entire thread is about...it's even in the title:
> Download Assistant for Chrome lets you send the files you want to download to Wget (and others)
I just assumed that it was talking about Windows because I use Windows. I hoped there was a way to make it work with Windows.
Hmm, wtf, already within 12 days.. I'll reconsider switching to another host. (Yes, updated to anonfiles.com for next version)
https://anonfiles.com/x0EbE4T8ua/https_images.hdqwalls.com_wallpapers_txt
Yes, that's the comment. In case you do not have one: you need a torrenting program like uTorrent or Deluge to download it.
I use Deluge. You can download it from here http://deluge-torrent.org/
I'm not sure why you cannot download it otherwise.
We operate our own hardware in colo, so we have no other monthly costs than bandwidth for our primary server, which is 10 Gbit/s. Here's a history of where we've been before colo; we now serve just under or a little over 2 PB/month these days.
You can monitor our traffic here.
> We recommend that you use rsync. The wget and cURL tools are not suitable [For mirroring], because they need to look at all files just to get the ones that were updated recently.
ftfy
They absolutely do support wget. It's on their other page. They just don't recommend wget for mirroring, because it's a waste of resources to pull down the whole lib just to update new additions... at least that's how I interpreted this page.
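For comparison, the kind of incremental mirror update rsync is recommended for; the module path here is a placeholder, not the site's real one:

```bash
# Only changed files are transferred; --delete keeps the mirror exact.
rsync -av --delete rsync://mirror.example.org/library/ /srv/mirror/library/
```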
Oh yeees, a one-eyed man is better than a blind one. https://alternativeto.net/software/telegram/?license=opensource
And of course, with a closed protocol and server you put your trust in a private corp, exactly as with Mark.
I'm lost at Linux. Here is the page with links to the intermediate and root certs:
https://letsencrypt.org/certificates/
You would want:
Let’s Encrypt Authority X3 (IdenTrust cross-signed), because it is not included by this web site.
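One hedged way to check which chain a server actually sends you (the hostname is an example):

```bash
openssl s_client -connect example.org:443 -showcerts </dev/null
```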
Url: http://s91291220.onlinehome.us/formica/ | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.mp4 | 3 | 48.6 MiB |
.jpg | 169 | 21.94 MiB |
.png | 120 | 18.15 MiB |
.mp3 | 23 | 15.89 MiB |
.gif | 43 | 6.95 MiB |
Dirs: 7 Ext: 11 | Total: 381 | Total: 113.49 MiB |
Date: 2020-01-08 07:26:00 | Time: 00:00:34 | Speed: 5.5 MB/s (44 mbit) |
Url: http://www.igrezadecu.rs/games/ | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.swf | 1,702 | 3.19 GiB |
.jpg | 1,345 | 12.39 MiB |
.png | 339 | 5.55 MiB |
.gif | 18 | 111.3 kiB |
.jpeg | 8 | 76.7 kiB |
Dirs: 2 Ext: 6 | Total: 3,413 | Total: 3.21 GiB |
Date: 2020-01-06 09:05:04 | Time: 00:00:03 | Speed: 3.9 MB/s (31 mbit) |
What should take literally 5 seconds took 32 minutes, and still around 10 directories fail to deliver even the directory index. The speed test also fails; indeed useless.
Scanned root:
Url: http://dl9.sabadl.xyz/ | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.mkv | 4,125 | 3.95 TiB |
.mp4 | 1,826 | 312.29 GiB |
.mka | 203 | 19.95 GiB |
.mp3 | 4 | 529.87 MiB |
.ac3 | 1 | 370.41 MiB |
Dirs: 27 Ext: 6 | Total: 6,160 | Total: 4.28 TiB |
Date: 2020-01-05 10:18:45 | Time: 00:32:34 | Speed: Failed |
Url: http://71.176.76.21:3001/ | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.cbr | 1,595 | 29.98 GiB |
.cbz | 447 | 14.4 GiB |
(no extension) | 668 | 11.3 GiB |
.epub | 92 | 3 GiB |
.rar | 4 | 812.44 MiB |
Dirs: 2,851 Ext: 17 | Total: 5,745 | Total: 60.6 GiB |
Date: 2020-01-05 10:13:54 | Time: 00:01:00 | Speed: 0.4 MB/s (3 mbit) |
Url: http://www.military-today.com/aircraft/ | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.flv | 87 | 940.11 MiB |
.jpg | 1,803 | 165.18 MiB |
.htm | 300 | 4.22 MiB |
.js | 7 | 260.9 kiB |
.db | 2 | 35 kiB |
Dirs: 5 Ext: 8 | Total: 2,220 | Total: 1.08 GiB |
Date: 2020-01-05 10:00:23 | Time: 00:00:05 | Speed: 18.2 MB/s (146 mbit) |
Url: http://195.154.232.154/drive/ | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.zip | 1,804 | 387.19 GiB |
.dmg | 1 | 1.9 GiB |
.txt | 1 | 1.2 kiB |
.textclipping | 1 | 384 B |
Dirs: 1 Ext: 4 | Total: 1,807 | Total: 389.09 GiB |
Date: 2020-01-03 07:05:46 | Time: 00:00:01 | Speed: 29.1+ MB/s (233+ mbit) |
Url: https://share.johnnybegood.fr/ | Urls file | |
---|---|---|
Extension (Top 5) | Files | Size |
.iso | 46 | 229.49 GiB |
.zip | 421 | 101.21 GiB |
.mkv | 49 | 32.09 GiB |
.mp4 | 22 | 17.59 GiB |
.exe | 277 | 9.67 GiB |
Dirs: 280 Ext: 74 | Total: 3,183 | Total: 412 GiB |
Date: 2019-12-01 19:58:33 | Time: 00:01:13 | Speed: 30.6+ MB/s (244+ mbit) |
The key here, once you have it set up, is to make sure to use the --drive-server-side-across-configs flag.
You will still hit your 750 GB upload limit per 24 hours. You can use service accounts to get around that, but that's a lot more complicated (I have 100 set up and can transfer ~75 TB/24 hrs if I ever needed to, but that's just excessive).
It's also available for Windows in addition to being standard on pretty much any *nix system.
See here for more detailed setup and use instructions.
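A hedged sketch of such a server-side Drive-to-Drive copy; "src" and "dst" are placeholder remote names from `rclone config`:

```bash
rclone copy src:SharedFolder dst:Backup \
    --drive-server-side-across-configs --progress
```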
Wow good work man!
Just a heads up:
>en_windows_embedded_8.1_industry_pro_with_update_x64_dvd_6052086.iso
Is the wrong size or not complete in this directory. It's supposed to be 3.7 GB if I recall correctly.
There are two emulators being developed for the Nintendo Switch: one is called Yuzu and the other Ryujinx. They're in early development, and there has been some progress with some commercial games going in-game, but nothing more than that at the moment, if I recall correctly.
Hi, for now calisuck doesn't support Calibre Web (https://github.com/janeczku/calibre-web), only the original Calibre server, because it provides an API. It's on my roadmap, as is support for open directories with .opf files.
Sorry for that
No, just wondering. The default Calibre web server is fine for serving ebooks. I found this cool library https://github.com/janeczku/calibre-web that also allows you to add books/manage metadata from the web, which is really nice if you're running it on a headless server.
You should be fine sharing your library with family/friends.
Use /u/ruralcricket's wget-fu with /u/blazeme8's suggestion of the mobile site. I've found luck with changing the number of items displayed in the page via
> wget -e robots=off -r -nc -np http://50.68.112.25:8080/mobile?num=###
Where ### is at least as large as the number of books listed on the web page.
Okay, now that we have a bunch of books, and assuming we have already installed Calibre, let's get started.
EDIT: Forgot the most important step. Donate to Calibre. This single piece of software is going to be your one-stop-shop for managing your ebook library; Kovid Goyal deserves some financial love for making such an amazing software suite.
"Calibre is meant to be a complete e-library solution. it includes library management, virtual access to book files via content server, format conversions, ebook creation, ebook modifications, news feeds to e-book conversion as well as e-book reader sync features. Calibre is primarily an e-book cataloging program. it manages your e-book collection for you. it is designed around the concept of the logical book, i.e. a single entry in the database that may correspond to e-books in several formats. it also supports conversions from different e-book formats. A graphical interface to the conversion software can be accessed easily by just clicking the "convert e-books" button. Calibre has a modular device driver design that makes adding support for different e-reader devices easy. Syncing supports updating metadata on the device from metadata in the library and the creation of collections on the device based on the tags defined in the library view. if an book has multiple formats available, calibre automatically chooses the best format when uploading to the device. Calibre can automatically fetch news from a number of websites/rss feeds, format the news into a e-book and upload to a connected device. there is support for generating e-books. the e-books include the full versions of the articles, not just the summaries. Calibre has also a built-in e-book viewer that can display all the major e-book formats. Calibre also supports a range of user plugins designed to help with user customization."
Try wget.
You can use a command similar to the following:
wget -r -nH --cut-dirs=2 --no-parent --reject="index.html*" http://mysite.com/dir1/dir2/data
If you install wget.exe manually, you will need to manually add the enclosing folder to your PATH environment variable.
Consider installing wget via Chocolatey instead.
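From an elevated prompt, assuming Chocolatey itself is already installed:

```
choco install wget
```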
OK, so you need PMS (Plex Media Server). Follow the setup wizard and associate whatever folders your media is in with it.
Once you have your media associated with Plex, you should be able to view it on your phone/Roku/laptop internally by going to plex.tv and clicking launch.
If you want to be able to view it from outside your network (While traveling/at work/away from home) you have to open a port in your router for it.
I'll see if I can find you a walkthrough on how to set it up, but you should be able to search /r/PleX for walkthroughs or to ask general help questions.
Modern-day mod trackers like Schism Tracker can run them: http://schismtracker.org/
It's music in most of the cases you bump into .mod files in the wild on the net; most common are the 4-channel .mod files, though they can go up to 8 channels I believe.
XMPlay can play xm and sid files at the very least, if you're on Windows.
For other platforms, GStreamer should have plugins for it.
I also remember MilkyTracker having better quality playback, not sure though.