Can you elaborate on this? Are you referring to <code>youtube-dl-aria</code>, because that's apparently obsolete?
Or are you talking about aria2? And if so, how does that relate to YouTube downloading?
Give aria2 a try. You can download a file from multiple sources, protocols, and connections at the same time, e.g. from both FTP and HTTP at once using multiple connections.
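As a minimal sketch (hypothetical mirror URLs), this pulls the same file from an HTTP and an FTP mirror simultaneously:

# aria2 treats multiple URIs on the command line as mirrors of one file
# and splits the download across them (-s sets the number of pieces)
aria2c -s 4 "https://mirror-a.example.com/file.iso" "ftp://mirror-b.example.com/file.iso"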
It has an option to modify the peer id prefix, and there isn't anywhere close to enough demand/interest to outweigh that.
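Assuming this refers to aria2, the option in question is --peer-id-prefix; a hypothetical invocation:

aria2c --peer-id-prefix="-XX0000-" "file.torrent"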
rTorrent / Transmission / Deluge are the ones you'd be looking at.
Alternatively, you could look at something like aria2. I wouldn't call it a torrent client specifically (it's really more of a multi-protocol downloader), but it handles torrent duties well and is purely CLI, like rTorrent.
jDownloader and pyLoad have web interfaces, for example. The former is serious bloatware, though, and as far as I know pyLoad hasn't been actively developed for a while.
That's why I recommend aria2 with webui-aria2.
Alternatively, just log into the machine via ssh and work with cURL or Wget.
It is a command-line download manager. Lets you do 10 HTTP sessions concurrently in a download; it also supports torrents.
I mainly use it as an alternative to curl or wget.
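For example, a sketch with a hypothetical URL, splitting one download across 10 connections:

# -x caps connections per server, -s sets how many pieces to split into
aria2c -x 10 -s 10 "https://example.com/big-file.iso"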
If I can add my own tool to the mix: there is also Media Downloader, a Qt/C++-based GUI frontend to multiple CLI tools. A new version will be out next Monday (October 11th, 2021) and it will ship with an aria2c extension that can be used for downloading torrents and other files through the GUI.
SFTP and HTTP/HTTPS (multipart) are supported, with basic and digest auth. FTP over SSL is not supported.
Internally, the downloads are handled by the fantastic aria2 download utility.
Suggestions in here are a little dated and non-free. You might want to look into aria2 if you use the command line / terminal; otherwise you could try a frontend for aria2 like Persepolis. Modern and open source.
axel (for downloading single files over multiple connections)
aria2 (torrents and magnet links on the command line)
plowshare/plowdown (command line tool for downloading files from (e.g.) Zippyshare, Rapidgator, etc)
<code>aria2c</code> has similar functionality.
It's just multiple connections using HTTP(S) byte ranges, so any number of clients will work. You can even script it using raw curl and additional headers, and concatenate the result, if curl doesn't have the functionality already.
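A rough sketch of that idea, assuming a hypothetical 100 MB file on a server that honors Range requests:

# fetch two halves in parallel with HTTP byte ranges, then stitch them together
curl -s -r 0-52428799 -o part1 "https://example.com/big.iso" &
curl -s -r 52428800-104857599 -o part2 "https://example.com/big.iso" &
wait
cat part1 part2 > big.iso
rm part1 part2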
I don't know of any such software but you could code this in bash, for example:
# print filenames to the terminal (https://aria2.github.io/)
torrent="linux ISO.torrent"
aria2c -S "${torrent}" | grep "[0-9]|./" | cut -c7-

# store the filenames in an array
IFS=$'\n' # allow whitespace in array elements
newFilenames=( $( aria2c -S "${torrent}" | grep "[0-9]|./" | cut -c7- ) )

# print the 7th filename including path
echo "${newFilenames[6]}"

# print the 7th filename NOT including path
echo "${newFilenames[6]##*/}"

# store all files in current directory in another array; maxdepth=0 will ignore subdirectories
IFS=$'\n' # allow whitespace in array elements
oldFilenames=( $( find * -maxdepth 0 -type f ) )

# rename each element of oldFilenames to newFilenames
# make sure each element in one array corresponds to the same element in the other (oldFilenames[6] -> newFilenames[6])!!
# use cp to test, switch to mv if you're confident. Note that this ignores subdirectories.
for (( i=0; i<${#newFilenames[@]}; i++ )); do
    cp -v "${oldFilenames[$i]##*/}" "${newFilenames[$i]##*/}"
done
If anyone is willing to run the official Baidu Cloud downloader but is scared about spyware, I would suggest running it in a VM. I used to do that until I got lazy and just ran it on my main OS :D
Another alternative is using the open source Aria2 downloader and installing a plugin in your browser that exports Baidu Cloud download links straight to the downloader.
https://aria2.github.io/
https://github.com/acgotaku/BaiduExporter
There is a Windows-native aria2c.exe, no need for WSL.
That's also what I wrote in the description. -V is actually sub-optimal because it verifies the files for nothing, so forcing aria2 to skip the verification step would work better. Or just use --force-save so the torrent download information (the .aria2 file) doesn't get deleted once the torrent finishes downloading.
I did somewhat fix the second issue I brought up myself already, so I'm just confused about the possibility of adding more download jobs without quitting the current download first or spawning another aria2 instance in another terminal.
--bg brought no hits in the official documentation.
Copypasted from my reply to a similar post about a month ago:
I’d do it with a small Batch script and aria2c.
Untested code whipped up in a minute:
SET /A COUNT=1
:LOOP
aria2c --header="Cookie: {cookie goes here}" "https://shittydrmtextbooks.us/133769420/page%COUNT%.png"
SET /A COUNT=%COUNT%+1
GOTO LOOP
Put the entire cookie in the {} placeholder, without the braces. You can get it from the network tool of inspect element, in the request headers. Don't forget to escape all "s with a \, otherwise it won't parse the request correctly.* Obviously also replace the URL with the actual path to the pages, putting %COUNT% in the place of the number. You might need to chain three or more loops together** for pages 0001–0009, 0010–0099, 0100–0999 etc., because the Console Host won't pad out the zeroes for you (make the first loop /page000%COUNT%.png, the second /page00%COUNT%.png and so forth). When the downloads stop working, it means that either your cookie has expired and you'll have to edit the new one in (and change the initial value of COUNT to pick up from where it stopped), or you've reached the end of the textbook.
*Some sites require more than just the cookie to grant the files. If this isn't working, copy all request headers and put each one in a separate --header="" argument, such as aria2c --header="Cookie: {cookie goes here}" --header="user-agent: {user agent goes here}".
**If statements in Batch work as follows:
IF %COUNT% EQU 100 GOTO OTHERLOOP
Even faster than -N is using aria2c as an external downloader with the command --downloader aria2c.
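For example (assuming yt-dlp, which is where -N and --downloader live; hypothetical URL):

yt-dlp --downloader aria2c "https://example.com/watch?v=abc123"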
I'm using aria2 on my server (https://aria2.github.io/) and added the GUI from here: https://github.com/mayswind/AriaNg/releases. This has actually been working for a long time and I'm pretty happy with this combination...
I use <code>aria2c</code> for most of my download needs; I can easily rsync a half-finished file to another machine and resume it there.
Of course I'd usually start the download on the server regardless, so I might be the wrong person to ask.
I’d do it with a small Batch script and aria2c.
Untested code whipped up in 2 minutes:
SET /A COUNT=1
:LOOP
aria2c --header="Cookie: {cookie goes here}" "https://shittydrmtextbooks.us/133769420/page%COUNT%.png"
SET /A COUNT=%COUNT%+1
GOTO LOOP
:END
Put the entire cookie in the {} placeholder, without the braces. You can get it from the network tool of inspect element, in the request headers. Don't forget to escape all "s with a \, otherwise it won't parse the request correctly. Obviously also replace the URL with the actual path to the pages, putting %COUNT% in the place of the number. You might need to chain three or more loops together for pages 0001–0009, 0010–0099, 0100–0999 etc., because the Console Host won't pad out the zeroes for you (make the first loop /page000%COUNT%.png, the second /page00%COUNT%.png and so forth).
So u/sspark is correct, but you may have some luck using a multi-threaded downloader. You basically download the file in multiple pieces and stitch it back together at the end.
I used this when downloading large files from a server I had hosted in France. My download speed would be okay when I started downloading a file, then slow to a crawl over time. With 4 threads running I managed to get much better speeds.
Okay, here's the list of caveats
> but I usually can only get up to 2MB/s when downloading with youtube-dl (My connection generally maxes out at 5MB/s).
Maybe try with external downloader aria2?
Might be available in your repositories.
I pass this to youtube-dl:
--external-downloader aria2c --external-downloader-args "-x 10 -s 10 -j 10 -k 1M --log-level=info --file-allocation=none"
There's also this thing called aria2. Low profile (never saw mine climb over 8 MB of RAM, and the website says the max is 9 MB). It handles torrents too. I personally prefer it because it takes a text file argument, and said text file can have several links with names, directories (if you want to sort them), and referers. I picked it as it's a lot easier to just generate a text file, grab the links from the downloads folder, and queue them up in a single bat file.
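As a sketch of what such an input file can look like (hypothetical URLs and paths; per-download options go on indented lines below each URI):

# urls.txt
https://example.com/file1.zip
  dir=/downloads/archives
  out=first.zip
  referer=https://example.com/
https://example.com/file2.zip
  dir=/downloads/archives

Then queue everything with aria2c -i urls.txt.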
>If the upload speed of the server sucks, then having parallel downloads won't help.
Yes it will, because just about any file host at scale limits bandwidth per connection. I use aria2 to download files much faster in some cases.
Likely not going to be the case for you, but the issue I had with downloads failing recently was due to the firewall on my router. I have an Ubiquiti UniFi Dream Machine Pro. It blocked anything that uses aria2, which the Cubase download manager (and also Native Access) relies on, as it identifies that as malware by default. I had to override that, and downloads then worked for me.
If you haven't made any changes to your networking hardware, then maybe try disabling any anti-virus/anti-malware/firewall software on your Mac? Maybe something is triggering that detection and you have to figure out how to override it.
If some items are downloading, but most aren't, I doubt this will be it though since I would expect it to fail 100% of the time if this was the case (again, that's how it manifested for me).
It's also worth looking at the logs for the downloader. They might give you a better clue as to what is happening. I'm not sure where they are on Mac, but on Windows they are at: %LocalAppData%\Steinberg Download Assistant\logs
aria2 has an option to download torrents; you can write a simple script and zip it all together, then your friends can simply unpack and double-click it. PicoTorrent is also a great simple client if you want a GUI, but it's Windows-only.
You can use aria2.
The flag is -S or --show-files.
Here is what it looks like in practice: https://paste.gg/p/anonymous/24e7a661607d445486a80307145f8a00/files/85ce10fef9954dac9043ba94995fba58/raw
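In short, something like this (hypothetical torrent file) lists the files without downloading anything:

aria2c -S "linux-distro.torrent"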
I just checked, but it seems it's not exactly how I would have done this.
I prefer to build a raw list of URLs, then call the JSON-RPC interface of aria2c and add each URL, with its target directory, to the queue.
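A minimal sketch of that approach, assuming an aria2c daemon started with --enable-rpc on the default port 6800 (if you set --rpc-secret, prepend a "token:..." string as the first element of params):

curl http://localhost:6800/jsonrpc \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":"1","method":"aria2.addUri","params":[["https://example.com/file.iso"],{"dir":"/downloads"}]}'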
It's saying "We couldn't find files\aria2c.exe in current directory.
You can download aria2 from: https://aria2.github.io/".
I added it to the system PATH, but it's saying "'aria2' is not recognized as an internal or external command, operable program or batch file."
How do I add aria2 to PATH?
I downloaded these a while ago. Can't remember exactly how I got the URLs but I might have just pasted it together from the CSV file. I downloaded them using aria2 something like so: aria2c -i urls1.txt -j 5 -d files
where "urls1.txt" is a file with a list of URLs, 5 is the number of concurrent downloads and "files" is the output directory.
Do you specifically need a torrent client that is cli based? Or you just need to control a running torrent client via cli?
You should be able to add torrents to running Deluge and Transmission instances via cli by running their respective cli programs e.g.
transmission-remote -h
or
deluge-console help
Or if you need a cli program specifically you can look into aria2.
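For actually adding a torrent rather than just printing help, the commands look roughly like this (hypothetical path; check each client's help for exact syntax):

transmission-remote -a "/path/to/file.torrent"
deluge-console "add /path/to/file.torrent"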
I think there are 2 aspects you can tackle for faster downloads:
Given these, I think the best solution is writing a Swift script that crawls Pornhub and spawns an asynchronous download task for each video link found. Alternatively, you can use the same crawler to feed all video links into a plain text file, and have aria2c
read it progressively with the <code>--deferred-input</code> option.
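A sketch of the aria2c side of that, assuming the crawler appends to a hypothetical links.txt:

# --deferred-input makes aria2c read entries from the list as it needs
# them rather than all at once at startup; -j caps parallel downloads
aria2c --deferred-input=true -i links.txt -j 4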
There's also <code>aria2</code>. Less known but very nice, has generally gotten new/interesting features faster than wget. I started using it a long time ago because some FOSS software decided to use metalink for something and wget didn't support it at the time (though it may now). Found aria2 when looking for alternatives and ended up keeping it because it was nicer to use.
The best option for downloading large files is a CLI app called aria2. It's pretty easy to install (there's a brew formula) and use. You can split the file and download it through parallel connections.
You can get more info here: https://aria2.github.io/
Btw: sorry for my crappy English, I'm Chilean.
I've been using Aria2 for years as a command line downloader but recently learned it supports torrents. Not sure if it's in CentOS repos as I generally install it on Fedora but it's def in Fedora repos.
Aria2 is a great torrent client if you need to operate it in a download-and-quit manner from a script. Pretty much as easy as wget or curl.
Put the torrent file at a fixed, known URL. Update it at the same time every day or whatever, clients can use a scheduled task to kick off the download with the torrent from the URL.
You'd probably have zero issues installing it on WSL2 if that's the way your org leans; it should be in just about every Linux distro's repos. Less sure about Macs, but I can't imagine it giving much trouble.
I'm not sure where you want to take screenshots, but Foss Browser on F-Droid allows you to take full-webpage screenshots. Since you didn't say exactly what your use case is, I'm guessing it's more than just webpages.
YDL with aria2 is a good solution if you don't want to install anything.
Just use this flag and you're set:
--external-downloader=aria2c
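A full invocation would look something like this (hypothetical video URL):

youtube-dl --external-downloader=aria2c "https://www.youtube.com/watch?v=abc123"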
It looks like it replaces whatever download mechanism the package manager uses with aria2 (https://aria2.github.io/). I'd be curious to see speed comparisons between this mechanism and whatever homebrew or apt use natively.
Cool! I tried out the CLI and had two issues, but was eventually able to get a nice fast parallel download:
Also, have you looked at aria2c?
Pretty easy to do with a little bit of Python to extract all the links by iterating through all 241 pages, and then using any download manager to get the files individually through the links. I have used aria2 before for this type of large download.
Reply if required, I'll hack together a simple script for you.
I presume you're looking for the size of the data referenced by the torrent, rather than the .torrent file itself.
Following on from what /u/japzone said, if the size parameter isn't specified in the magnet link, you'd likely have to write a script to do this.
aria2 can be used to retrieve a .torrent from a magnet link, using a command like:
aria2c --follow-torrent=false --bt-metadata-only=true --bt-save-metadata=true --enable-dht=true --dht-listen-port=12345 --bt-stop-timeout=300 --bt-tracker-connect-timeout=300 [magnet link here]
If the magnet doesn't contain trackers, you can specify some via the --bt-tracker flag, or hope that aria2 has enough DHT nodes to find it.
Once you have the .torrent file, I'm not sure if there's a handy tool which can report the total size of it, but if you can get a bencode library for your script, you can just parse it and total up the file sizes.
aria2c with the --auto-file-renaming option:
https://aria2.github.io/manual/en/html/aria2c.html#cmdoption--auto-file-renaming
"Rename file name if the same file already exists. This option works only in HTTP(S)/FTP download. The new file name has a dot and a number(1..9999) appended after the name, but before the file extension, if any. Default: true"
Doable, but I think it'll affect the segments downloaded; maybe not.
Do you have any direct link to a proxy list? Otherwise the dependency count will increase. Still, "$(python -c 'import urllib, sys; print urllib.quote(sys.argv[1])' "<string to encode>")" will work (Python 2).
If you have Linux experience or don't mind dealing with command-line utilities, try aria2.
The initial setup takes some time, but it's pretty much set-and-forget. I find it easier and cleaner to use than most download managers because of its daemon (service) nature.
There are a few GUIs. All of them are lightweight and available online as websites, with no download or install needed.
Link: aria2.
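A typical way to run it in that daemon mode, so the web GUIs can talk to it over JSON-RPC (hypothetical download directory; set --rpc-secret in real use):

# --enable-rpc exposes the JSON-RPC port (6800 by default);
# --rpc-listen-all=true accepts connections from other machines too
aria2c --enable-rpc --rpc-listen-all=true --daemon=true -d /downloads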
Looks like curl/wget replacement with bittorrent support, not a youtube-dl competitor like I initially thought. Thanks for the tip!
Check Aria2, https://aria2.github.io/.
"aria2 is a lightweight multi-protocol & multi-source command-line download utility. It supports HTTP/HTTPS, FTP, SFTP, BitTorrent and Metalink. aria2 can be manipulated via built-in JSON-RPC and XML-RPC interfaces."
I've been using it, in multi-connection mode, to scrape websites, download huge lists of URLs, etc., etc...
I am avoiding S3 since access speeds are generally much slower there, looking into EBS based solutions right now. I did not know about aria2, thanks for mentioning it, I will certainly check it out.