If you use curl on the command line, it works like that too - an @ symbol indicates a file to be uploaded.
From man curl: >If you start the data with the letter @, the rest should be a file name to read the data from, or - if you want curl to read the data from stdin.
It's also in the actual documentation in PHP, both in the description and the examples.
No, from http://curl.haxx.se/docs/faq.html#What_is_cURL
>cURL is the name of the project. The name is a play on 'Client for URLs', originally with URL spelled in uppercase to make it obvious it deals with URLs. The fact it can also be pronounced 'see URL' also helped, it works as an abbreviation for "Client URL Request Library" or why not the recursive version: "Curl URL Request Library".
Nope,
> curl is a command line tool for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP...
I gotta be honest, I'm really skeptical of someone who is apparently skilled enough to teach students, yet doesn't know the answer to this question.
At any rate, Reddit has a REST API. So you simply need to use one of the many HTTP libraries for C, such as libcurl. From there, it's just a matter of making the right HTTP calls and interpreting the responses. See the examples on the libcurl site.
Experience with HTTP is a prerequisite.
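For a rough idea of what that looks like, here's a minimal libcurl sketch; the subreddit URL is only a placeholder, and a real client would set a proper User-Agent and parse the JSON it gets back:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    // Placeholder endpoint; check the Reddit API docs for the real ones.
    curl_easy_setopt(curl, CURLOPT_URL, "https://www.reddit.com/r/programming/new.json");
    curl_easy_setopt(curl, CURLOPT_USERAGENT, "my-reddit-client/0.1");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

    // With no write callback set, the response body is printed to stdout.
    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    return 0;
}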
True, but that's behavior that the PHP wrapper of libcurl is imitating, not behavior inherent to libcurl. And the behavior in the command-line is somewhat more sane, since:
With --data arguments, you'll only run into issues if the first field name begins with @ (and if that's the case you already should be manually URL encoding it anyway)

--data-urlencode and --form-string arguments exist if you have to deal with arbitrary values

For PHP it's idiotic, because the only way to send a field value in a multipart form that begins with @ is to write it to a temporary file on disk. The actual libcurl API is much richer and requires behavior like reading from a file to be explicitly set rather than having certain magic values.
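To make that last point concrete, here's a minimal sketch of libcurl's mime API, where reading from a file only happens when you explicitly ask for it; the field names, file path and URL are made up for the example:

#include <curl/curl.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_mime *mime = curl_mime_init(curl);

    // A literal value: even one starting with '@' is sent as-is.
    curl_mimepart *field = curl_mime_addpart(mime);
    curl_mime_name(field, "comment");
    curl_mime_data(field, "@not_a_file", CURL_ZERO_TERMINATED);

    // Reading from disk is an explicit, separate call.
    curl_mimepart *upload = curl_mime_addpart(mime);
    curl_mime_name(upload, "upload");
    curl_mime_filedata(upload, "photo.jpg");

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/form");
    curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);
    curl_easy_perform(curl);

    curl_mime_free(mime);
    curl_easy_cleanup(curl);
    return 0;
}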
I'd recommend not using one of those languages unless you have to. This could be more easily done with a higher level language.
At any rate, libcurl is probably the most commonly used library for making an HTTP request. It'll work for C and C++.
Do not try to do this with assembly unless you like torturing yourself (in fact, don't try and program anything meaningful in assembly).
Curl?
edit: a little confusion over my response, so allow me to clarify
Edit2: There might be a project closer to 20 than curl, but it was the first project that sprang to mind that is close to 20!
My tutorial on how to set it up for windows:
Download curl: http://curl.haxx.se/download.html (make sure you download the SSL version)
Once downloaded, open the ZIP file, then go to start > computer > c:/ and drag and drop the exe in there.
Download xampp: https://www.apachefriends.org/download.html
Run through the installation. Once installed, open:
> start > computer > c:/ > xampp > htdocs
and edit the index file, placing the bitcoin address you want to use as your payout.
Now forward port 80 to the machine that is running xampp (I can't teach you how to do this).
Now that is done, go to start > cmd and open it.
Type:
cd c:/
curl -k --header "Accept: application/json; indent=4" --data "bitcoin_address=replacemewithyourBTCaddress" https://getaddr.bitnodes.io/api/v1/nodes/replacemewithyouripaddress-8333/
Replace the BTC and IP address above, then copy and paste that into the CMD window and press enter.
You just need to POST the correct data.
Look for info about curl. A Google search you might want to try: curl post form
In essence,
curl --data "val1=value1&val2=value2" https://example.com/page.php
There's a curl extension for PHP. At the bottom of this page, you'll find an example, http://curl.haxx.se/libcurl/php/examples/simplepost.html
If the list of names/emails are in a file, just read the file and execute the code above for it.
If it's just for spamming, you can create an email by appending something or increasing a value,
$email_pre = "test";
$email_post = 1;
while (true) {
    $email = $email_pre . $email_post;
    // curl code to post the data
    $email_post++;
}
Have fun.
Yes, it is possible. You will want to look into the Riot Games API, a library for making HTTP requests (such as libcurl), and a JSON library to parse the responses (there are a number listed at json.org).
This sentence is very misleading:
> Previously, OpenSSL was the sole widely used open source solution for encrypting traffic sent to and from Web pages on millions of servers.
OpenSSL is not the sole anything. There's GnuTLS -- created because OpenSSL is not GPL-compatible -- and PolarSSL to name two of the more popular options, and Wikipedia lists another dozen. While it's not commonly done, you can use e.g. GnuTLS with Apache through mod_gnutls. The popular library libcurl can be built against no less than nine different TLS libraries.
The recent forks are really only about having alternatives that are API and ABI compatible with OpenSSL, which is an important distinction, because it allows them to be used as direct replacements without having to modify the client code.
The CRL isn't checked automatically with curl/OpenSSL.
http://curl.haxx.se/docs/ssl-compared.html
> CRL: CRL means "Certificate Revocation List" and is used to check to see if any certificates in the server's chain have been revoked for some reason. If automatic, then the engine will automatically download a CRL and use it to evaluate the trust of the server's certificate chain when performing the TLS handshake. If manual, then the engine will not automatically use a CRL, but you can provide one that has been downloaded separately by using the CURLOPT_CRL option. If no, then the CURLOPT_CRL option will be ignored.
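In libcurl's C API, the option for the manual case is exposed as CURLOPT_CRLFILE; a one-line sketch, with a placeholder path to a CRL you downloaded separately:

// Hand libcurl a CRL you fetched yourself; it will not download one automatically.
curl_easy_setopt(curl, CURLOPT_CRLFILE, "/path/to/crl.pem");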
You could use VBS and a few commandline utilities, if you were so inclined.
curl is the name of the library typically used to retrieve website data as text into a variable that you can then parse using server-side languages (PHP, Python, Ruby, server-side JavaScript). All of those languages have curl bindings available in some form.
When building bots and scrapers, it is important to make sure that your software treats the website like a regular user. So don't try to load every page of the website you are scraping in half a second. 1 page every 5 seconds is probably OK, though every 10 seconds would be even better.
Nope. Quote:
> We pronounce curl with an initial k sound. It rhymes with words like girl and earl. This is a short WAV file to help you:
> In my opinion all libraries should be namespaced under the author name e.g. lenessia/foo-bar.
That’s … unusual. I can’t think of a library that gives author credit in a name component. How many authors contributed to a library like, e.g., curl? How many to libxml2? In fact, the relevance of individual contributors decreases as a library becomes popular and widely used. Adapting the namespace or prefix or URL on a server or whatever to reflect the current author situation would cause naming to fluctuate a lot. Not to mention that in the already ego-fueled OSS ecosphere[*], having the credit to figure that prominently gives rise to undesirable incentives like hogging projects and inflating one’s own contributions (even more) by posting trivial patches. I can’t imagine that this could positively affect the SNR of OSS development.
[*] Which I don’t think is bad per se, one just has to be aware of the consequences.
Something much more subtle is going on here. They're detecting the User-Agent and adjusting the HTML returned accordingly. If you grab the source with curl then the tracks are filled in correctly (or just use Curl as your User-Agent string). Try the following code. It worked for me just now.
import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class Scraper {
    public static void main(String[] args) throws IOException {
        String url = "http://www.hiphopearly.com/tracks";
        Document doc = Jsoup.connect(url).header("User-Agent", "curl").get();
        Elements tracks = doc.select("div.track-listing div.track");
        for (Element track : tracks) {
            System.out.println("Artist: " + track.select("span.artist").text());
            System.out.println("Title: " + track.select("span.title").text());
            System.out.println();
        }
    }
}
The website that the Pushbullet site is taking you to is for an older version of cURL (with SSL), 7.44. Looks like the current version hosted on that site is 7.45, and they just deleted the old one, with no redirect. The Pushbullet devs probably would have been better served linking to the actual cURL download page, even if you'd have to poke around to find the third-party Windows binaries.
I emailed the Pushbullet devs about it, so hopefully they'll either update the link or point to a more static download.
I have no idea how to do that from a browser, because I usually just do it using curl from the command-line, which is very easy. From the curl manual:
>DOWNLOAD TO A FILE
> Get a web page and store in a local file with a specific name:
> curl -o thatpage.html http://www.netscape.com/
Ok, so it turns out my complaint that with curl_easy the transfer is driven by curl is actually one of the reasons there is curl_multi, which lets the application drive the transfer. It's actually the very first objective listed for multi: http://curl.haxx.se/libcurl/c/libcurl-multi.html
so I guess this should solve my problem.
I don't understand what you're trying to say. The kernel knows exactly which threads to unblock when using any synchronization primitive except for condition variables. Even with condition variables it's usually going to do the right thing. Even if we ignore the entire overhead of sockets (which is already going to be orders of magnitude higher than synchronization primitives, because that's what the kernel uses underneath anyway), you're still looking at I/O which is going over a slow, non-deterministic data source and, in your case, over the internet, which is an unreliable transport to boot.
It really sounds like what you want is an event loop. You can use libuv or boost io_service for that (io_service is more limited I think). libuv can integrate with curl: http://curl.haxx.se/libcurl/c/multi-uv.html
I have 3 pieces of advice for you - up to you of course if you want to listen to me:
>curl -k, --insecure
>(SSL) This option explicitly allows curl to perform "insecure" SSL connections and transfers. All SSL connections are attempted to be made secure by using the CA certificate bundle installed by default. This makes all connections considered "insecure" fail unless -k, --insecure is used. See this online resource for further details: http://curl.haxx.se/docs/sslcerts.html
if you don't want to ignore the SSL certificate warnings, you should import the CA cert into your OS
Nothing is "un-doable" in C that can be done in any other Turing-complete programming language. You could use libcurl to grab the HTML and libtidy to walk the DOM in search of relevant elements.
Of course, you could just as well forsake the prior work in this area and use raw sockets and string manipulation if you had the time and inclination.
You can't do it through any file manager, but I'm 99% certain curl can do this. You need to know something about the upload destination though. Is it a GET or a POST request? What kind of structure does the URL have?
This would satisfy a generic post request:
curl -T -d "param_one_name=http://url.com/image/1.jpg" "http://destination.com/upload"
I'm honestly just guessing from memory. I don't have any way to test this. The cURL manual would be super helpful if you choose to take this route. http://curl.haxx.se/docs/manual.html
> Example: curl 7.35 (I believe) was just released. Debian will probably have this in January of next year.
http://curl.haxx.se/ >The most recent stable version of curl is version 7.36.0, released on 26th of March 2014.
# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux unstable (sid)
Release:        unstable
Codename:       sid

# curl -V
curl 7.36.0 (x86_64-pc-linux-gnu) libcurl/7.36.0 OpenSSL/1.0.1g zlib/1.2.8 libidn/1.28 libssh2/1.4.3 librtmp/2.3
You are too prejudiced about Debian :)
It could be that your OS has never installed SSL certs, or it could be that they are outdated after we replaced ours, along with the entire internet, due to the heartbleed bug.
Try installing them:
sudo apt-get install ca-certificates
The command on your OS might be different. Perhaps:
sudo yum install ca-certificates
If nothing works, perhaps try downloading the bundle directly -- http://curl.haxx.se/ca/cacert.pem, then adding this line specifying where the certs are.
For command line curl:
export CURL_CA_BUNDLE="/path/to/cacert.pem"
For php curl:
curl.cainfo=<path-to>cacert.pem
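And if you're calling libcurl from C yourself, the corresponding option is CURLOPT_CAINFO, pointed at the same bundle (placeholder path):

curl_easy_setopt(curl, CURLOPT_CAINFO, "/path/to/cacert.pem");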
http://curl.haxx.se/docs/faq.html#How_do_I_tell_curl_to_follow_HTT
"Curl does not follow so-called redirects by default. The Location: header that informs the client about this is only interpreted if you're using the -L/--location option. As in:
curl -L http://redirector.com
Not all redirects are HTTP ones, see 4.14 "
wget does follow redirects (up to 20 by default) so that would start to explain the differences you're seeing.
You also might find this useful http://curl.haxx.se/libcurl/c/libcurl-errors.html
3L on the Low pay and a second rung city. 25L on the high pay in a first grade city. (I know 25L is middle income in MNCs today, but high outside).
It was an interesting experiment but you had to log in constantly for your profile to stay on top of their search results since they are by default ordered on last login. So I automated that using curl to login once every 40-50 mins and stayed on the first page mostly. These days however I think they use session cookies as well, haven't checked. That and the messages were streaming in. Chat requests, women writing to me "accidentally," apologizing on no reply saying a friend must have written in, and so on.
College gives you a load of free time. This ran for a month or so and I got fed up. Quite a bunch of nice chicks though. ;)
Also, if you are a girl and reading this, please, please, please don't send in pics with those huuge face eating glasses on. That's not a photo of you, that's a photo of your glasses.
Disclaimer: this experiment was not scientific and doesn't claim to be one. I was curious and I did not lead anyone on.
InfoSec pro-tip: preview unknown suspicious links with curl before following them in e-mail, or IM, clients. If you see obfuscated, heavily encoded javascript, there is probably some bad juju on there.
Most Linux machines come with a tool called "curl" that accesses a website (in theory simulating a click) and displays whatever the site returns.
If the output / resultant page is not needed to you, just pipe the output to /dev/null.
example : curl http://www.google.com >/dev/null
You can download curl for windows or any other OS here.
If you know all the links that you need to click, just put the whole link in a text file and write a batch file to have curl cycle through all of them.
check if curl works for you and let me know if you need help writing a script.
You have to download curl.exe, a link is given in the post: http://curl.haxx.se/
go to Download, then scroll to Win32/Win64/MacOS - whatever you're using, download that (probably the newest version), then execute the commands in a command prompt or shell in whatever directory you downloaded to. If this is going over your head then it's probably not something to mess with, maybe ask a friend who may understand.
Sure, I have little experience with Java and none with C++ but there's no reason you couldn't do the same thing with either of them. JSON is a language-agnostic format for encapsulating data; it originated with JavaScript but can be generated and/or consumed from just about anywhere now. (If you're unsure about JSON but like XML, the MediaWiki API can also output in that format... so you're not locked into JSON for this.)
What you'll need to do is figure out how to send HTTP requests from your language of choice. My guess is you'll use something like libcurl (available for C++ or Java) to do the heavy lifting. Then you'll need to parse the response JSON or XML, extract/transform your data, and save to the filesystem. Hopefully the last couple steps are something you're already familiar with.
Oh wow. You're right.
For curl this looks like the right practice then - http://curl.haxx.se/docs/faq.html#How_do_I_keep_user_names_and_pas
This means you don't have to invoke curl with the password made visible.
The batch files from that github link are specifically for compiling with Visual Studio. They will not work with another toolchain. You could probably modify their scripts to work with another toolchain, but at that point you might be better off just building it yourself from scratch.
You could probably also try looking farther down on their downloads page and seeing if they have a pre-built version listed for your compiler: http://curl.haxx.se/download.html#Win32
What is the proper way to supply cURL with an updatable certificate authority file? Currently the best way I've found is to download from http://curl.haxx.se/ca/cacert.pem and cache that file to be used on future cURL requests of any kind, including sending a timestamp to check for a 304 or 200 status to re-update the file or not. This is mostly for windows/mac.
I said this:
> Look at the steady stream of vulnerabilities in the library Pacman uses to download upgrades: > > http://curl.haxx.se/docs/security.html
and then pointed out how easy it would be to deploy this attack from the mirror infrastructure against many users when the attack surface was brushed away as inconsequential:
> The mirrors hosting distribution packages are shared access machines and would be high priority targets to do something like this.
It has nothing to do with package signing.
> For example, it's not clear if some of these guides are for Linux or for Windows, or both. You say things like "install curl" with no further link or explanation. I got an install curl and it's a single exe that seemingly does nothing, doesn't install anything anywhere, you click on it and it opens a command prompt for an instant, and that's it. Huh?
Well the wiki page does link to cURL's homepage at http://curl.haxx.se/
It sounds like you're not familiar with the command-line interface which is unfortunate as JoinMarket is a command line program at the moment. Obviously in the future we'll build GUIs on top, but right now some simple knowledge of CLI is needed. It shouldn't be a problem for you as you're a highly technical person, it's quite easy to figure out. Did you ever use ping to check whether a host is reachable? It's the same kind of thing.
cURL is a command line program which doesn't have a GUI, which is why on windows you momentarily see command prompt when you double-click it. You need to move the exe somewhere in your $path$ variable, for example C:\Windows\ so that when you type "curl" on command prompt it will execute it.
> Then it talks about changing bitcoin.conf, where is that file? I can't find it on my Bitcoin Core install on Win7.
That wiki links to the bitcoin wiki page https://en.bitcoin.it/wiki/Running_Bitcoin#Bitcoin.conf_Configuration_File Although now I've gone ahead and put the link everywhere bitcoin.conf is mentioned.
> Are all those steps necessary for a windows user before they can send money to joinmarket from the bitcoin core? Unclear. It's also not clear if the guide is written for people who want to be "makers" or "takers" or both.
Yes. You need bitcoin core before you can send money from bitcoin core using joinmarket.
The guide is written for every user, takers and makers, of JoinMarket right now (but not for the Electrum plugin, Bitcoin-qt or other wallet interfaces when they get created)
Take a look at this: link
Click one of the first links to download the source. Unpack and follow the readme (which should explain how to build a lib file). If you're doing this for windows you won't need libcurl.a, that's part of the library built for Linux afaik.
If you have the source there should be a Visual Studio project included (mostly included in these kinds of packages).
Hope you can figure it out!
Curl maintains its own comparison table, which includes:
> snarf, wget, pavuk, fget, fetch, lftp, aria2, HTTrack
Doesn't seem to be updated regularly though; that aria2 version is from January 2014. It also leaves out non-curl features, such as torrent support in aria2.
Get a copy of curl for windows: http://curl.haxx.se/download.html (scroll all the way to the bottom for the win32 and win64 downloads)
Your script then will be:
curl.exe "http://192.168.2.1/cgi/login.cgi?Username=admin&Password=password" curl.exe "http://192.168.2.1/cgi/reset.cgi?back=Reset&reset=true"
This is what I have: https://github.com/s8l/sidebartwitch
It will make something that looks like this at the bottom of your sidebar: http://i.imgur.com/V4ARLSb.png
It allows an arbitrary number of streams and you don't have to set up any formatting the first time it runs.
If you want to change the title of this section, modify this line:
title=">**Twitch streams status**"
Also, if you are on windows you will need to install curl. Check this: http://curl.haxx.se/download.html and after you install, change this line:
curl_location="curl"
to the install directory, it will look something like:
curl_location="c://curl"
You might be better off writing this in objective-c / swift, or if you're willing to step outside of Xcode, maybe even a scripting language like python.
Just as a comparison, here's how you'd do it in objective-c:
NSURL *url = [NSURL URLWithString:@"https://www.google.com/"];
NSString *string = [NSString stringWithContentsOfURL:url encoding:NSUTF8StringEncoding error:nil];
Doing this in C++ is a pretty involved process; you'll have to use a library, since both the C and C++ standard libraries are networking agnostic. Assuming you are already pretty knowledgeable in C++, I would use libcurl; it comes with your Mac so you won't have to download anything. It's actually a C library, so some of the code can look pretty old school. It's a callback-based API. Here's the full documentation: http://curl.haxx.se/libcurl/c/
Some sample code to get you started. Hell you can just copy and paste this in and you'll have basic url downloading sorted, just be sure to link to libcurl.dylib
#include <string>
#include <curl/curl.h>

// callback function
size_t write_string(char *ptr, size_t size, size_t nmemb, void *data)
{
    std::string *str = static_cast<std::string *>(data);
    size_t len = size * nmemb;
    str->append(ptr, len);
    return len;
}

std::string download_url(const char *url)
{
    std::string str;

    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_string); // write data callback
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &str);             // pointer you want forwarded to your callback function
    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl); // cleanup

    return str;
}
Now if you want to get more hardcore about it, you can use POSIX sockets. That's about as close to no libraries as it gets.
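For the curious, here's a bare-bones sketch of the sockets route (plain HTTP only, no TLS, minimal error handling), just to show how much libcurl is doing for you:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    // Resolve the host and open a TCP connection to port 80.
    if (getaddrinfo("example.com", "80", &hints, &res) != 0)
        return 1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
        return 1;

    // A hand-written HTTP/1.1 request, sent as raw bytes.
    const char *req = "GET / HTTP/1.1\r\n"
                      "Host: example.com\r\n"
                      "Connection: close\r\n\r\n";
    send(fd, req, strlen(req), 0);

    // Dump the raw response (headers plus body) to stdout.
    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    freeaddrinfo(res);
    return 0;
}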
My guess is that the website checks the User-Agent header field of the request. The reply to the request is changed depending on the User-Agent the client submits. Curl will submit a User-Agent which identifies the request coming from curl; Firefox will submit a User-Agent which identifies the request as coming from Firefox, etc.
You can even change what User-Agent is submitted to test the effect. See here for more information:
http://dev.minetest.net/CMake_Options
Seems to have an option for windows+curl.
"CURL_DLL - Only on Windows; path to libcurl.dll"
Figure you could grab it from here: http://curl.haxx.se/download.html
If you want to perform a web request you'd have to implement the HTTP protocol, or at least the client side of it. The examples on the page you linked are just very basic socket examples, they don't implement any kind of protocol, which would involve a lot more work. For very simple requests it wouldn't be very hard to write a HTTP client by hand, but for anything serious you should forget about that and just use an existing library like libcurl.
But of course, if you really want to get things working in a hurry then you can always use dynamic loading of the dll and not worry about linking it at compile time.
Use the precompiled binary on the site and use loadlibrary to get the calling addresses.
of course a little googling shows someone has already done the hardwork for you, scroll all the way down (yes the site's in Chinese, but the code is in English).
Got mine from the main website: http://curl.haxx.se/latest.cgi?curl=win64-ssl-sspi
Also, since there's no real easy way to delay execution in a batch script, the pings you were seeing are just my way of pausing for 30 seconds. They should be there.
Sorry, no, they are not encrypted. Just double checked. What they are, is base64 encoded.
I checked it by setting a variable, requesting it with curl and then dumping the cookie to a file, then decoding the dumped base64 string with ruby. Like this:
def index
  @employees = Employee.all

  session[:obvious_hole] = "Yo, yo! Mark!"

  respond_to do |format|
    format.html # index.html.erb
    format.json { render :json => @employees }
  end
end
curl localhost:3001/employees --cookie-jar cookies
mark@mark-desktop ~$ cat cookies
# Netscape HTTP Cookie File
# http://curl.haxx.se/rfc/cookie_spec.html
# This file was generated by libcurl! Edit at your own risk.
#HttpOnly_localhost FALSE / FALSE 0 _testblah_session BAh7CCIQX2NzcmZfdG9rZW4iMTNSZHpYZCtVZnZieWtlT1hKR0VYcWhZbXo1VHBlTnlrRDBEbmxTb2lZZUU9Ig9zZXNzaW9uX2lkIiUyNTMyNmM3OGQyYTI5OTA5Y2MyNmEzMjNiNTA0YWM1MiIRb2J2aW91c19ob2xlIhJZbywgeW8hIE1hcmsh--9e0913a81ae8d50917adb8fb4712b35c8f40cce2
$ irb ree-1.8.7-2012.01 :004 > Base64.decode64("BAh7CCIQX2NzcmZfdG9rZW4iMTNSZHpYZCtVZnZieWtlT1hKR0VYcWhZbXo1VHBlTnlrRDBEbmxTb2lZZUU9Ig9zZXNzaW9uX2lkIiUyNTMyNmM3OGQyYTI5OTA5Y2MyNmEzMjNiNTA0YWM1MiIRb2J2aW91c19ob2xlIhJZbywgeW8hIE1hcmsh--9e0913a81ae8d50917adb8fb4712b35c8f40cce2") => "\004\b{\b\"\020_csrf_token\"13RdzXd+UfvbykeOXJGEXqhYmz5TpeNykD0DnlSoiYeE=\"\017session_id\"%25326c78d2a29909cc26a323b504ac52\"\021obvious_hole\"\022Yo, yo! Mark!\365\355=\327v\274\325\247\274w\235=\327\266\235o\307\333\343\275vo~\\361\3764q\307\266"
> Curl is not a web site mirroring program. Curl is not a wget clone. That is a common misconception. Never, during curl's development, have we intended curl to replace wget or compete on its market. Curl is targeted at single-shot file transfers.
http://curl.haxx.se/docs/faq.html#What_is_curl_not
Just throwing that out there.
There is: http://curl.haxx.se/libcurl/using/curl-config.html
Based on that, try something like: gcc `curl-config --cflags` ${CFLAGS} -o example example.c `curl-config --libs` ${LDFLAGS}
You can toggle off the SSL check with CURLOPT_SSL_VERIFYPEER => false. It's not recommended, though.
Recommendation from the source:
>Please everyone, stop setting CURLOPT_SSL_VERIFYPEER to false or 0. If your PHP installation doesn't have an up-to-date CA root certificate bundle, download the one at the curl website and save it on your server:
>
> http://curl.haxx.se/docs/caextract.html
>
> Then set a path to it in your php.ini file, e.g. on Windows:
>
> curl.cainfo=c:\php\cacert.pem
>
> Turning off CURLOPT_SSL_VERIFYPEER allows man in the middle (MITM) attacks, which you don't want!
Source(s): One
I'm getting a timeout when trying to install the server
> cURL error 28: Operation timed out after 30000 milliseconds with 106608252 out of 305964280 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
no extension that I know of, but if you have curl installed, you can do the following:
-o myFileName.whatever
to the end. This will give you a command line that will have a copy of the session cookie, meaning you're still authenticated to the server when you replay it later.
if you're on a unix like system like linux, you can plop this into crontab; if you're on windows I'm sure something similar exists.
Wondering if you can help me.
Using nginx proxy mananger and, like you, I have Nextcloud and Onlyoffice setup as subdomains.
However, when I add the onlyoffice url in Nextcloud app, I get the following error:
HealthcheckRequest on check error: cURL error 6: Could not resolve host: OnlyOfficeDocumentServer.mystupid.domain (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
Did you add any additional commands in the advanced section of the proxy manager?
When I try to open a file using Collabora in Nextcloud, NC log puts out:
Error PHP Undefined offset: 0 at /config/www/nextcloud/lib/private/AppFramework/Http.php#150 2019-03-21T12:20:17+0300
Error PHP Cannot declare class GuzzleHttp\Handler\CurlFactory, because the name is already in use at /config/www/nextcloud/3rdparty/guzzlehttp/guzzle/src/Handler/CurlFactory.php#15 2019-03-21T12:20:17+0300
Error richdocuments GuzzleHttp\Exception\ConnectException: cURL error 28: Connection timed out after 5001 milliseconds (see
<code>http://curl.haxx.se/libcurl/c/libcurl-errors.html</code>) 2019-03-21T12:20:17+0300
Some forum entries on the internet reveals that these errors happen when there is a problem with SSL certs.
> ['Authorization' => 'Bearer ' . $accessToken]
Thanks for your response. I'm still receiving the same error message: Fatal error: Uncaught exception 'GuzzleHttp\Exception\RequestException' with message 'cURL error 3: malformed (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)' in /home/ubuntu/workspace/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php on line 187
Try downloading this cert file
http://curl.haxx.se/ca/cacert.pem
And then specifying it in the pip command
pip --cert cacert.pem install certifi
Make sure your command prompt is in the directory the cert is.
Well, I know the session id is getting written to the file; the first time I tried running the script this was written to it (not sure what the 2 FALSEs mean):
"# Netscape HTTP Cookie File
127.0.0.1 FALSE / FALSE 0 PHPSESSID 74as3oac9g5dnbbf31adgib9v7 "
I have tried clearing this file and a new php session id gets written next time i run the script. Thank you by the way.
I'd love this to work.
I got the cacert.pem
locally but still I get the same error:
curl -v --cacert ./cacert.pem https://example.com/
* About to connect() to example.com port 443
* Trying 88.99.111.222... connected
* Connected to example.com (88.99.111.222) port 443
* successfully set certificate verify locations:
* CAfile: ./cacert.pem
  CApath: none
* SSLv2, Client hello (1):
SSLv3, TLS handshake, Server hello (2):
SSLv3, TLS handshake, CERT (11):
SSLv3, TLS alert, Server hello (2):
SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
* Closing connection #0
curl: (60) SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). The default bundle is named curl-ca-bundle.crt; you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.
Similar problems here. My server seems to be running fine with requests made via the browser but I am still trying to find out if it responds correctly when sending invalid requests (as specified in the CS50 spec) to provoke a "Bad Request" error from my server.
>Aside from that, there is no way to test method (GET) and HTTP request (HTTP/1.1).
I figured that one out. Take a look at curl's man page: http://curl.haxx.se/docs/manpage.html You can choose a ton of options (--option) in your curl request (the default is method GET and HTTP version 1.1). You can force it to use a different method (e.g., --request POST) or a different version (--http1.0).
But I still do not know how to send bad request lines... curl seems to simply not submit them to the server. Any ideas? Is there a way to send a complete HTTP request (i.e. "GET /hello.php\ü& HTTP/1.1") directly to the server?
Here's my test with Sockets. For this, I "borrowed" ^((heh)^) some example code I found online. This throws an error saying gethostbyname(szSmtpServerName) is invalid.
Another attempt at sending an email was using cURL's code they have on their site. But it was saying #include <curl/curl.h> was invalid, even though I followed these instructions. I continued messing with the directories for at least three hours, to no avail.
If you know of a tutorial that could walk me through the whole process step-by-step, that would be totally amazing.
Thanks for the pointer. I did get a solution from the second link you gave - I downloaded curl from here and replaced the one packaged with vagrant. Proceeding further :)
Now I remember why I never updated it, I think the cert isn't installed correctly in the webserver. Both wget and curl fail to identify the cert:
--2015-11-23 18:17:40--  https://bmrf.org/repos/tron/sha256sums.txt
Resolving bmrf.org (bmrf.org)... 168.235.80.214
Connecting to bmrf.org (bmrf.org)|168.235.80.214|:443... connected.
ERROR: cannot verify bmrf.org's certificate, issued by ‘/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Domain Validation Secure Server CA’:
  Unable to locally verify the issuer's authority.
To connect to bmrf.org insecurely, use `--no-check-certificate'.

[root@server ~]# curl https://bmrf.org
curl: (60) Peer's Certificate issuer is not recognized.
More details here: http://curl.haxx.se/docs/sslcerts.html
I had to fall-back to the other URL rather than bypass the certificate checks.
EDIT: PHP with OpenSSL also fails which is why I had the non-ssl URL to check the version in the index file:
SSL operation failed with code 1. OpenSSL Error messages:\nerror:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
If there is no login, and putting in that dhtml url gets you the page you want, you can use new WebClient().DownloadString(url) to grab the text instead of using curl to download it as HTML.
Grab the curl binary http://curl.haxx.se/download.html then use Process.Start Method (String, String) to execute the binary (I typed "msdn process start" to find this page https://msdn.microsoft.com/en-us/library/system.diagnostics.process.start%28v=vs.110%29.aspx). However you can use Process (instead of Process.Start) and override stdout/stderr and get the page from that. To get the actual text in the page I recommend using Fizzler (use nuget to grab it) or regular expressions if it's simple. I'm leaning towards regex for this situation. Most of the time I use this page to see if my regex is correct http://www.regexr.com/
curl-library - Patch to add TLS 1.1 and 1.2 support & replace deprecated functions in SecureTransport
> Apple recently released OS X 10.8 (Mountain Lion) and made a number of changes to SecureTransport, adding TLS 1.1 and 1.2 support while deprecating a number of functions, several of which we were using (d'oh!). I've attached a patch that does the following if the builder is using Mountain Lion or iOS 5 or later
Is building a method/procedure for doing it natively in VBA the reason behind this?
Or are you just using VBA as the script-engine for SFTP download?
If the latter is the case, I'd use CURL -- See Ref: http://curl.haxx.se/docs/manual.html
Commandline-invocation of SFTP download with extensive additional options.
I downloaded this one: http://curl.haxx.se/dlwiz/?type=devel&os=Win64
Now I got that Win32 CAB download, and yes, it seems to contain libcurl.lib files. But there is no curllib.dll anywhere in this one?
Now I completed the 4th step of the stackoverflow post I linked earlier and get the error: cannot open file 'curllib.lib'
> Unupgraded isn't necessarily the same as high uptime.
I never implied it was.
> Even with the intention of immediately doing an upgrade, on Arch that might not work. (I've had this situation once). Nothing to do with being an incompetent sysadmin.
Booting up an OS that hasn't been upgraded for months and then having it fetch upgrades over the network is pretty unsafe. Ending up in that situation isn't something that should happen. Computers require maintenance. Look at the steady stream of vulnerabilities in the library Pacman uses to download upgrades:
http://curl.haxx.se/docs/security.html
There are also plenty in GPG, although there are a lot fewer people auditing it to find them.
The certifcate bundle used by most embedded systems I have ripped apart is from curl, which in turn is based off Firefox's default cert store: http://curl.haxx.se/docs/caextract.html
I don't know why, but the one I see most often is a 2009 version of it: http://pastebin.com/ZUKSpPaL
Use a redirect to test the Roku client against some existing domains. If it works, then the device isn't using a pinned/static certificate. Buy your own, and run a test server using it.
> Well the wiki page does link to cURL's homepage at http://curl.haxx.se/
Chris, so your position is that rather than you providing step-by-step instructions for how Windows users should install curl, and which version of curl they should use, every Windows user that wants to use joinmarket should visit that link and figure it out for themselves?
you also note: "You need to move the exe somewhere in your $path$ variable, for example C:\Windows\ so that when you type "curl" on command prompt it will execute it."
I would bet a small percentage of average computer users understand what that sentence means, and your documentation doesn't mention that! It just says install curl.
There's probably some way to do it using Get and/or Post since it's likely going to be SSL with Cookies n stuff. Are you familiar with any server scripting languages like PHP? PHP has a ton of modules for stuff like this, to do all the login work and then just spit the data out, and if you had any place to host/run a PHP script, you'd just call THAT .php page from Tasker HTTP Get.
Back in the day I did a ton with PHP, and the cURL module was pretty handy. There's probably a hundred new ones for doing these types of things (logging into to various websites with complex login procedures etc). Here's a couple links regarding the ole' cURL - link1 link2
>However it looks like the libcurl multi interface will allow me to do a stream abstraction similar to http://curl.haxx.se/libcurl/c/fopen.html
Aww, their write_callback is more efficient than mine since they use realloc. I really wish C++ vectors did that.
Actually I think, for example, the netty project doesn't have a different thread handle the request than the one unblocked by epoll/select. The handlers are simply called by whatever thread was handling that socket in its select. So imho there shouldn't be a round trip through the OS scheduler; similarly in Go the goroutine scheduler is integrated with select/epoll, and I expect it's also similar in node.js. I'm not sure about Apache HttpClient, but I'm guessing for that one you're right.
However it looks like the libcurl multi interface will allow me to do a stream abstraction similar to http://curl.haxx.se/libcurl/c/fopen.html
Ok, looking at libcurl again, it seems the multi interface should allow driving the connection and therefore implementing streaming. My thinking is one would need to either implement an istream/ostream or abuse something like stringstream: http://curl.haxx.se/libcurl/c/libcurl-multi.html (this would be similar to http://curl.haxx.se/libcurl/c/fopen.html)
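To make the "application drives the transfer" idea concrete, here's a rough curl_multi skeleton (write callback and error handling omitted); a stream wrapper would hand out whatever the callback has buffered between iterations:

#include <curl/curl.h>

int main(void)
{
    CURL *easy = curl_easy_init();
    CURLM *multi = curl_multi_init();

    curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
    curl_multi_add_handle(multi, easy);

    int still_running = 1;
    while (still_running) {
        // The application decides when to pump the transfer forward...
        curl_multi_perform(multi, &still_running);

        // ...and waits for socket activity (or a timeout) in between.
        int numfds = 0;
        curl_multi_wait(multi, NULL, 0, 1000, &numfds);
    }

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    return 0;
}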
What the heck...getting a weird error when it tries to grab a package that I just can't seem to find a Google answer for....
PS C:\hashicorp\vagrant\bin\image-dreamer> vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'data-science-toolbox/dst' could not be found. Attempting to find and install...
default: Box Provider: virtualbox
default: Box Version: >= 0
The box 'data-science-toolbox/dst' could not be found or
could not be accessed in the remote catalog. If this is a private
box on HashiCorp's Atlas, please verify you're logged in via
vagrant login
. Also, please double-check the name. The expanded
URL and error message are shown below:
URL: ["https://atlas.hashicorp.com/data-science-toolbox/dst"]
Error: SSL certificate problem: unable to get local issuer certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.
Any particular reason to use WinINet? It'll introduce dependency on Windows that isn't necessary from what you describe.
libcurl provides a cross platform approach. There's countless examples.
wget -r ftp://somesite.com/directory -l 5 -np
-r recurses into directories, both parent and child
-l 5 or --level=5 tells wget to only go 5 folders deep from the folder you specified. If you wanted ftp://127.0.0.1/files/a, ftp://127.0.0.1/files/b, ftp://127.0.0.1/files/c, you would use wget -r ftp://127.0.0.1/files/ -l 3 or --level=3
-np specifies not to recurse into parent directories.
You can of course add more options to your command, this is just the most simple for your purposes. Look in the wget man page
You can also try rsync, or, as I mentioned in my other post, cURL
The most important tool you're going to need to learn to accomplish this task is regular expressions. These will allow you to match certain patterns against the actual source code you acquire. That way you can actually count the number of matches and generate these statistics.
Secondly, am I correct in assuming that the task is to enter a URL of a webpage, and have your program parse it and extract these statistics? If that is the case, you're going to want to find an HTTP client library for C (libcurl is a good one... http://curl.haxx.se/libcurl/)
Third, you're going to need to identify (and become familiar with) the various languages that are actually going to be included in these statistics. If you're going to write regular expressions to match patterns, you have to actually know what those patterns could be.
I started to write a bot in C++. Have a look at libcurl to retrieve the response from Telegram: http://curl.haxx.se/libcurl/c/https.html
I removed all ifdefs in the Main, since I want my app to be secure. The response from the Server is then parsed by jsoncpp: https://github.com/open-source-parsers/jsoncpp
Later I'll push my code to github, so you can have a look into it.
Correct. This type of handle pattern is very common in C libraries, so it's good to understand it. For example, glance at a few of the libcurl functions and you'll see that they all deal with this CURL *
type as a handle that gets passed around. In that case, they decided to have the "init" function simply return the pointer instead of filling in a caller's variable as I alluded to earlier. But in this case, there's much less that can fail when merely initializing a handle; most of the meaningful failures occur when setting options or performing the action, and those functions return a status code.
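A tiny sketch of the two styles being contrasted; the my_lib names are made up for illustration, only the libcurl calls are real:

#include <stdlib.h>
#include <curl/curl.h>

// Style 1 (libcurl): init returns the handle directly; NULL means failure.
static void libcurl_style(void)
{
    CURL *handle = curl_easy_init();
    if (!handle)
        return;
    curl_easy_setopt(handle, CURLOPT_URL, "https://example.com/");
    curl_easy_cleanup(handle);
}

// Style 2 (hypothetical my_lib): init fills in the caller's variable and
// returns a status code, so initialization failures are reported explicitly.
typedef struct { int unused; } my_lib;

static int my_lib_init(my_lib **out)
{
    *out = (my_lib *)calloc(1, sizeof(my_lib));
    return *out ? 0 : -1;          // 0 on success
}

static void my_lib_destroy(my_lib *handle)
{
    free(handle);
}

static void status_code_style(void)
{
    my_lib *handle = NULL;
    if (my_lib_init(&handle) != 0)
        return;
    // ... use handle ...
    my_lib_destroy(handle);
}

int main(void)
{
    libcurl_style();
    status_code_style();
    return 0;
}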
Yes, mzalewski is right.
An alternative if you're on a Mac: set the method parameter of download.file() to 'curl':
download.file(url, 'yourFileName.pdf', method = 'curl')
That parsed the url's spaces automatically and downloaded the file no problem for me. If you're on a Windows machine, you can download the curl binaries here: http://curl.haxx.se/, and I understand you can then use the curl method once you get that set up.
Sometimes you have to use C API in C++, e.g. with openssl or curl:
size_t read_callback(char *buffer, size_t size, size_t nitems, void *instream);
CURLcode curl_easy_setopt(CURL *handle, CURLOPT_READFUNCTION, read_callback);
I'm not sure how to use references there.
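You generally can't pass a C++ reference through that void * user pointer, but the usual pattern (just a sketch) is to register a pointer to your object with CURLOPT_READDATA and cast it back inside the callback:

#include <cstring>
#include <string>
#include <curl/curl.h>

// libcurl calls this when it needs request body data; 'instream' is
// whatever pointer we registered with CURLOPT_READDATA.
static size_t read_callback(char *buffer, size_t size, size_t nitems, void *instream)
{
    std::string *body = static_cast<std::string *>(instream);
    size_t room = size * nitems;
    size_t len = body->size() < room ? body->size() : room;
    std::memcpy(buffer, body->data(), len);
    body->erase(0, len);           // consume what was handed to libcurl
    return len;                    // returning 0 signals end of data
}

int main()
{
    std::string body = "hello=world";

    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_callback);
    curl_easy_setopt(curl, CURLOPT_READDATA, &body);   // pointer to our object, not a reference

    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return 0;
}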
Use libcurl. Seriously, if you really just want the functionality libcurl is nice. If you want to learn HTTP protocol by implementing it yourself, sounds like a neat project, have fun!
For libcurl, this example page has examples of everything you described needing. You should send a GET request to the URL you listed, capture the response using a callback function. Then parse the output you get and put that in the db.
Try out libcurl. I've been using it for a couple months and am quite pleased with my results (although my specific app isn't telnet it does have support for it).
Very cool, but I keep getting the error:
>curl: (60) SSL certificate problem: unable to get local issuer certificate. More details here: http://curl.haxx.se/docs/sslcerts.html
Not too clear on the details in the link, is there another guide to resolving this?
I'm not even close to being expert, but have dabbled with a similar project to yours, but here's my thoughts.
They're free, open source, and do everything you've dreamt of doing with file up/downloading - they probably do some things that you'd never dreamt possible.
They'll also all be much more robust than something you'd knock up as a side project.
Have a glance here for a quick idea of how easy it can be to do some complex transfer tasks.
You might be able to create a Google custom search engine with a list of the URLs you want to search.
If you don't find any existing solutions, you could roll your own with some scripting. Tools like curl and wget can be run from the command line and in scripts and can fetch lists of websites. You could either grab the text from each site and search it with grep or use the tools to run all of the searches against Google using the "site:" trick, except automated.
I'd look for search engines with a developer's API. If they exist, they might enable you to do what you're asking about, while also returning results in a format more suited to machine parsing, e.g., JSON.
Ok, you need to install libcurl (http://curl.haxx.se/libcurl/) into your system also.
When you get it to compile it might be that the ccminer doesn't work out of the box with your Titan Black and 780. This is because the ccminer is configured for the Maxwell architecture (CUDA compute capability 5.0) by default. You can change the target architecture to Kepler (CUDA 3.5 in your case) by changing the following line in Makefile:
nvcc_ARCH = -gencode=arch=compute_50,code=\"sm_50,compute_50\"
to
nvcc_ARCH = -gencode=arch=compute_35,code=\"sm_35,compute_35\"
You can also try to give this flag to the CUDA compiler for example in configure.sh but for some reason I didn't get it to work that way.
EDIT: You also need the following additional libraries: openssl (libssl in Linux), pthread (libpthread) and mpir (mpir.org, version 2.6.0 should work).
I've seen implementations (in other languages, at least) which first "repair" the HTML to convert it to full-on XML. A cursory Google search seems to indicate that libxml2 does offer support for HTML in this way, though I have no experience with this myself.
If the issue is that libxml2 has poor handling of HTML errors, then it might make sense to feed it into an HTML tidy library, similar to what is done in libcurl, before then feeding it into libxml2.
There is a simpler, more direct approach to doing this sort of thing. curl has built-in support for cookies (to maintain the session you've logged into), and it would probably be fairly easy to grep the HTML output for the information you're interested in.
Haha, didn't even know you can open the URL directly (I don't use standalone player often). That solves the broken ad problem. Does the game cache for you that way, or do you see the loading bar moving every time?
Here is a small batch script that always starts the latest version:
It requires cURL and you might need the path to your player.
Thanks for replying. I'm trying libcurl for the first time now. Building it went fine. When including <curl/curl.h> the compiler gives me no errors, so I guess the path (according to your description) seems to be fine. However, when using a function like curl_easy_init(); I receive the "undefined reference ..." error. In the official tutorial there are following commands (terminal):
$ curl-config --cflags
-I/usr/local/include
$ curl-config --libs
-L/usr/local/lib -lcurl
The thing is, I don't really know what to do with them.
EDIT: Okay, I got it! I have to link to the libraries with the -lcurl parameter. Simple compilation looks like this:
$ gcc main.c -lcurl
Still don't know why that works. How does gcc know what -lcurl is?
Sounds like a redirect? See Section 3.8
In PHP you do it like this:
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
Add it to your opt array, see if it helps?
"header" => join("\r\n", $headers, "Connection" => "Keep-Alive"),
Like this?
EDIT: Oh, I'm an idiot, put it in the correct place now.
$headers = [
"Accept" => "/",
"User-Agent" => "Mozilla/4.0", // http://curl.haxx.se/docs/manpage.html
"Connection" => "Keep-Alive"
];
Same error occured.
Very interesting. Will it handle the IMAP stuff? I have been using that recently and have been dying for a reason to look at Rust for something that will scratch an itch.
How do you generate the CSVs now? It might be a lot easier to just try to make those processes output correct sort of XML and use something like curl to talk with their API, through simple scripts/batch files.
As the plan is right now (if I even understand it correctly), you're in way over your head, unfortunately.
Curl/Libcurl http://curl.haxx.se/ - if you run any apps on a network you should get to know that tool. You can test roundtrip speed/ssl certs/headers/etc. Whenever I have to troubleshoot anything I usually run a curl to the place in question to see what's going on that a browser wouldn't show.