Permalinks may be a thing of the application you're proxying to, they aren't an nginx thing.
In your example, let's follow the logic in your config: try_files takes your root value and tries to load a file or directory at /usr/share/nginx/html/hello-world, then it tries to load /usr/share/nginx/html/hello-world/, and if it can't find those files/dirs it performs an internal rewrite of the URI to /index.php?$args. You can read the details in the try_files documentation.
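To make that concrete, here's a minimal sketch of that kind of location (the root path comes from this thread; the exact fallback line is an assumption):

```
location / {
    root /usr/share/nginx/html;
    # 1) try the exact file, 2) then a directory, 3) then rewrite internally to /index.php
    try_files $uri $uri/ /index.php?$args;
}
```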
Have a look at the docs for the ssl module, specifically the variables section. It gives a bunch of outputs that you can add to logs, including $ssl_client_verify
which contains the failure reason.
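For example, a hypothetical log_format that records the verification result might look like this (the format name and log path are made up):

```
# http context: record the client-certificate verification result and subject
log_format ssl_debug '$remote_addr [$time_local] "$request" $status '
                     'verify=$ssl_client_verify subject=$ssl_client_s_dn';

# then, in the server block doing client-cert verification:
# access_log /var/log/nginx/ssl_debug.log ssl_debug;
```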
OK. As long as they're on separate IPs, you can do both TCP/UDP and HTTP proxying. It'd look something like this:
```
http {
    server {
        server_name www.example.com;
        listen <www ip address>:443 ssl;
        <ssl parameters>

        location / {
            proxy_pass http://192.168.2.20;
        }
    }
}

stream {
    server {
        listen <tcp ip address>:443;
        proxy_pass 192.168.2.21:9050;
    }
}
```
The nginx stream module is how nginx handles TCP and UDP proxying, you can read up on it here: https://nginx.org/en/docs/stream/ngx_stream_core_module.html
The reason for my question earlier is, you cannot have a HTTP server context listen on the same IP:Port as a Stream server context.
I use:
```
gzip_min_length 860;
gzip_comp_level 5;
```
because anything under that length actually has a good chance of becoming larger with compression enabled, and for really tiny responses, for example some AJAX stuff, it removes the compression overhead making it a few milliseconds faster.
For the compression level, the compression ratio drops off significantly when gzip_comp_level is set over 6, so I just set it at 5 to get very good compression without incurring too much CPU overhead. See this serverfault answer for a good example of that.
upon further investigation it's considered bad practice to use gzip for binary files or web images in general. using gzip can INCREASE the file size. check this
> Image and PDF files should not be gzipped because they are already compressed. Trying to gzip them not only wastes CPU but can potentially increase file sizes.
You could use something like Trimage to optimize your JPEGs.
You could create a custom log_format and add the $args variable to that. Or you could, in the php location, add a new header which returns the value of $args, like:
add_header nginx_args $args;
In your given example of /hello-world, $args is empty. Nginx gets to the php handling location by using the fallback in try_files to rewrite the request from /hello-world to /index.php.
Now, the question is, how does your php application know that it was /hello-world originally!? Well, that's handled by this part of your config:
include fastcgi_params;
If you look at that fastcgi_params file, you'll see nginx setting a number of fastcgi parameters. One of them is REQUEST_URI, which contains the original, pre-try_files rewritten URI (or path).
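For reference, the line that does this in a stock fastcgi_params file looks like this:

```
# passes the original, pre-rewrite URI (including any query string) to the FastCGI backend
fastcgi_param  REQUEST_URI  $request_uri;
```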
You can tell them that you can't find any information on the module and that it isn't listed in the nginx directive index: https://nginx.org/en/docs/dirindex.html
Sounds like your vendor needs to provide more information.
It sounds like you need to make your desired server context in nginx the default one. ALB seems to be testing based on IP address and not passing along a HTTP Host header, and is getting the default nginx server context.
```
server {
    listen 80 default_server;
    listen 443 ssl default_server;
}
```
https://nginx.org/en/docs/http/ngx_http_core_module.html#listen
return 444 is a special nginx return code which immediately cuts off all communication with the client. If you want to return a status to the end user, use a normal HTTP 403 response code. Used in conjunction with the error_page directive, you can send back a stylized HTTP 403 response.
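For example, a minimal sketch (the page path is an assumption):

```
error_page 403 /403.html;

location = /403.html {
    root /usr/share/nginx/html;   # assumed location of your styled 403 page
    internal;                     # only reachable via internal redirects like error_page
}
```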
You can hide the version, but it looks like you can only disable the Server HTTP header if you have the commercial subscription: https://nginx.org/en/docs/http/ngx_http_core_module.html#server_tokens
Otherwise you'd need to download the source, modify it so it doesn't send the Server field, and then recompile.
The nginx equivalent of RewriteRule is rewrite and it works almost exactly the same way.
rewrite "^(.*?)\.[0-9]+\.(css|js|svg)$" "$1.$2";
I would approach this using variables, and proxy_pass. Have you seen this?
The top answer is interesting. It uses server_name ~^(?<subdomain>.+)\.domain\.com$
to split the subdomain off the HTTP_HOST header being provided by the browser.
I'm willing to bet that rewrite ^ /profile/$subdomain$request_uri;
could be rewritten to be proxy_pass http://192.168.24.46/$subdomain$request_uri
It's getting late and I don't have an environment to test it in right now so it might need some creative tweaks. Hope it helps.
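Roughly the shape I have in mind - untested, with the backend IP taken from the comment above and everything else assumed:

```
server {
    listen 80;
    # capture the subdomain portion of the Host header into $subdomain
    server_name ~^(?<subdomain>.+)\.domain\.com$;

    location / {
        proxy_set_header Host $host;
        # since the upstream is a literal IP, no resolver is needed even though
        # the proxy_pass URI contains variables
        proxy_pass http://192.168.24.46/$subdomain$request_uri;
    }
}
```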
SSH and SSL are separate things.
They do use very similar key/cert pairs, but SSL is Secure Sockets Layer (with the certificate as the extra piece), and is basically an addition to HTTP.
SSH is a Secure SHell that allows you to open shell prompts on remote OSs.
They both use public/private key pair encryption, but you shouldn't use your server's SSH keys for SSL.
You will also need to have your public certificate signed by a trusted authority. Once you generate a proper SSL key pair, you will also generate a CSR, which basically allows an SSL provider to verify to other users that you are who you say you are (from a domain and webserver perspective).
Digital Ocean has a pretty good write up on this, specifically doing it for free through let's encrypt.
*SNI :)
To learn more about which browsers support SNI check out http://webmasters.stackexchange.com/questions/69710/which-browsers-support-sni
Yes, migrating to Nginx can solve the load issues absolutely. You can use (parts of) my config: http://pastebin.com/FtDdGTeX And implement the Nginx FastCGI Cache: https://www.digitalocean.com/community/tutorials/how-to-setup-fastcgi-caching-with-nginx-on-your-vps
You need to stop nginx while using the standalone Let's Encrypt installer. Later you can switch to the webroot method so nginx can keep running during renewals.
Well, it looks like you can. Although, if you can, you'd want to make sure your servers are talking via HTTPS or over some secure channel, definitely not HTTP, as the link I posted suggests.
I didn't know I could put all the server declarations in one file.
Also, if you look at Step 3 of the DigitalOcean tutorial, it instructs us to create a new file for every server block. Is this unnecessary, or good practice?
Thanks, I'll try it out.
Nginx does handle SMTP, however it's mostly used as a proxy server in front of real mail servers.
https://www.nginx.com/resources/admin-guide/mail-proxy/
I would probably recommend that you looked into various SMTP server config options for piping incoming emails to scripts; eg.
http://serverfault.com/questions/506894/how-to-route-email-to-a-script
(Google whatever mailserver you use)
Nginx is quite flexible, however I think that it's the wrong tool for the job you are trying to solve :)
I'd like to suggest CrowdSec for this. It's free and open source and based on crowdsourced threat intelligence. Think of it as an advanced version of fail2ban.
What I mean by crowdsourced here is that data on attacks is shared between all users, thereby helping each other against the bad guys out there, to put it briefly.
LearnLinuxTV just released a video on how it works and how to set it up with nginx. But in reality that's just one possibility; another is to use Cloudflare's free tier along with CrowdSec to fight DDoS specifically targeting the application layer. It sounds a bit like you could use that.
Disclaimer: I am head of community at CrowdSec and an avid user myself. If you have any questions after checking it out, please give me a buzz here or at our Discourse. Looking forward to helping you with this extremely annoying problem; especially since it's the very core reason why CrowdSec was created: to stand together and fight back against the bad guys!
This is fantastic news! Thank you for sharing these configs as well.
I'm using an SSL cert from letsencrypt.org. So I could possibly use that one cert for NGINX, and the subdomains would not need to be SSL, right? I'm very new to SSL, so I may be oversimplifying this.
Outside of webservers, I do also run a VNC server on one of my machines in my network. Is there a way to use NGINX for routing VNC traffic? I think web based VNC viewers exist. Could I use that to connect to the VNC server?
Thanks again for this information!
rewrite ^(/.*)\.html(\?.*)?$ $1$2 permanent;
The above rewrite will never contain any query arguments. rewrite can have query arguments (\?.*) in the destination, but the first parameter only operates on the path portion of the URL. It's not going to cause you any issues, since you have a ? on $2, but $2 will never actually be filled with anything.
I think the whole config could be simplified a bit, try:
```
server {
    server_name example.com www.example.com;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        root /var/www/html/example.com;
        rewrite ^/(about-us|gallery|contact-us)/? /$1.html break;
    }
}
```
Notes:
index index.html is the default value: https://nginx.org/en/docs/http/ngx_http_index_module.html#index

Those files are created by your editor (nano) when it crashes.
Nginx reads all files in sites-enabled so obviously this crashed file will be read too.
You can avoid this problem by properly exiting nano (no idea what you're doing for it to crash but you're doing something). Or you can edit files in sites-available instead (the proper way to use these folders is to put files in sites-available and put symlinks to them in sites-enabled).
From the documentation:
NGINX supports WebSocket by allowing a tunnel to be set up between a client and a back-end server. For NGINX to send the Upgrade request from the client to the back-end server, Upgrade and Connection headers must be set explicitly.
Try adding:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
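Putting it together, a WebSocket proxy location usually looks something like this (the path and backend address are placeholders):

```
location /ws/ {
    proxy_pass http://127.0.0.1:8080;          # placeholder backend
    proxy_http_version 1.1;                    # required for the Upgrade mechanism
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```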
I use FORGE from Laravel. It handles adding domains, all SSH, and so much more. It's $10 - $19 a month depending on your needs and connects directly with Digital Ocean and more.
This is a perfect use case for the map module. It supports regexes and default values, so you can create a new variable with the desired php version based on the $uri
variable easily.
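A rough sketch of that idea - the URI pattern and php-fpm socket paths are made up for illustration:

```
# http context: pick a php-fpm socket based on the request URI
map $uri $php_backend {
    default          unix:/run/php/php8.2-fpm.sock;   # assumed default socket
    ~^/legacy-app/   unix:/run/php/php7.4-fpm.sock;   # assumed legacy path and socket
}
```

Then in your PHP-handling location you'd use `fastcgi_pass $php_backend;` instead of a hard-coded socket.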
> nginx: [emerg] "stream" directive is not allowed here
The stream directive is at the same level as http and mail. If you are including a stream context under http, it will throw that kind of error.
https://nginx.org/en/docs/stream/ngx_stream_core_module.html
A plain location is a literal string match, you need a regex enabled location. Try:
location ~ /scorpa/sa/component/(.*) {
Location docs: https://nginx.org/en/docs/http/ngx_http_core_module.html#location
Sorry, I'm not going to pull up the webpage. Your requests to /marlin/ are being proxied correctly it seems, however the HTML response appears to reference /static/. /static/ is outside of /marlin/, and unless nginx knows where to find those files, those requests are probably going to result in a 404.
Use your browsers network debug tab to see what files aren't loading. Then update your application's code to ensure all of the HREFs (src=, href=, etc) are directory relative or use /marlin/ at the beginning.
You can try and fix this with nginx's sub_filter module, but it's not elegant, and you'll probably have to do a lot of tweaking to get it working. https://nginx.org/en/docs/http/ngx_http_sub_module.html#sub_filter
I'd bet that you have another server context which is answering the request. Try running sudo nginx -T, and look for multiple listen 80 server contexts. Either combine them, or use server_name to distinguish one from the other.
If you're logging your upstream address, status, response time, check those logs.
If you aren't, then you probably should add that to your log_format. eg;
```
log_format up_head '$remote_addr $remote_user [$time_local] '
                   '"$request" $status $body_bytes_sent '
                   '"$http_referer" "$http_user_agent" '
                   'upstream $upstream_addr $upstream_status $upstream_response_time';
```
Then, double-check those logs to see if nginx is returning the 413, or your upstream.
You could use the alias directive:

```
location /list {
    alias /var/www/;
    autoindex on;
}
```
If you want to show only that directory without allowing access to any subdirectories, you could use location = /list instead, but that would break any links into subdirectories, so I wouldn't recommend it - it just feels ugly. See the reference for the autoindex directive here.
Well, I'd say the issue has nothing to do with nginx, but with the way you have those containers set up. It sounds like you have the container ports exposed on all network interfaces, including the public one, so they are reachable from anywhere and the request doesn't even hit nginx (which listens only on port 443, based on the config you provided). When you run docker ps or docker port, you'll probably see something like 0.0.0.0:25565->25565/tcp - the zeroes mean that the container port is exposed on all interfaces.
When you want to use nginx as a reverse proxy, you generally let the services (in this case docker containers) listen only on localhost:someport, so it cannot be reached directly from the outside. The request then has to go via nginx, which will then pass the request to the service.
By the way, you're trying to proxy HTTP requests to the minecraft container, which probably won't work - I assume minecraft uses some custom protocol over TCP, and not HTTP. If you really want to hide it behind nginx, you'll have to use the stream module instead of the http module.
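If you do want nginx in front of it, a stream proxy would look roughly like this (ports are placeholders; the container would then be published only on localhost):

```
stream {
    server {
        listen 25565;                    # public game port (placeholder)
        proxy_pass 127.0.0.1:25566;      # container published only on 127.0.0.1:25566 (placeholder)
    }
}
```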
Nginx does have a configuration API, but only in the paid (nginx plus) edition.
I'm not sure if I understand your use case correctly, but if you're heavy on microservices, have a look at Traefik, as it is built with exactly that use case in mind.
Can you show us your configuration file?
What about https://nginx.org/en/docs/http/ngx_http_core_module.html#server_name?
Check out nginx's webpage for the default_server directive:
https://nginx.org/en/docs/http/request_processing.html
The default_server directive is for when nginx doesn't know where to send a request because the host field in the request doesn't match any of the defined servers. If you just want all unknown requests to go to your https site, then you would set that server as the default_server.
A somewhat uncommon, but not unheard of request, is to be able to mirror traffic to multiple backends. Previously you could do this with post_action, but that's a really poorly documented directive, and I suspect pretty buggy.
The new module will let you start a subrequest to issue the client's request (with or without the request body) to a different URL.
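For reference, basic use of the new mirror module looks something like this (backend addresses are placeholders):

```
location / {
    mirror /mirror;                     # fire a mirrored subrequest for every request here
    mirror_request_body on;             # include the client request body in the mirror
    proxy_pass http://127.0.0.1:8080;   # primary backend (placeholder)
}

location = /mirror {
    internal;
    proxy_pass http://127.0.0.1:9090$request_uri;   # mirror backend; its responses are ignored
}
```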
Reading over this site:
it sounds like it's going to take some trial and error to find a config that works. Try removing the server_name directive from your config and see if that works.
Alternatively, it looks like the files that Synology uses for backups are located here:
```
/usr/syno/share/nginx/WWWService.mustache
/usr/syno/share/nginx/nginx.mustache
/volume1/@appstore/WebStation/misc/nginx_default_server.mustache
```
Obvious disclaimer...Modify those at your own risk.
Digital Ocean has a tutorial on setting up password protection on nginx.
Not sure what distro you are using, so you might need to adapt it slightly if you aren't using Debian or Ubuntu.
So SSL identifies which certificate it wants to use by hostname. This is called SNI. There is also the concept of virtual hosts, or vhosts. What this means is that you can accept ssl connections for multiple domain names using the same proxy. This is a pretty decent walkthrough on the approach. For each server block you would just specify a different ssl config and server name.
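In practice that's just multiple server blocks, each with its own server_name and certificate (names, paths, and backend ports below are placeholders):

```
server {
    listen 443 ssl;
    server_name app1.example.com;
    ssl_certificate     /etc/ssl/app1.example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/app1.example.com/privkey.pem;
    location / { proxy_pass http://127.0.0.1:3001; }
}

server {
    listen 443 ssl;
    server_name app2.example.com;
    ssl_certificate     /etc/ssl/app2.example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/app2.example.com/privkey.pem;
    location / { proxy_pass http://127.0.0.1:3002; }
}
```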
This is how you set up PHPMyAdmin with MySQL https://www.digitalocean.com/community/tutorials/how-to-install-and-secure-phpmyadmin-with-nginx-on-an-ubuntu-14-04-server
I'd suspect with PHPPgAdmin it's a similar process but I've never used it so can't tell you for sure.
So I tried capturing traffic with Wireshark, as suggested.
After I isolate the HTTP GET request, the next entry is a TCP entry which has the reset flag set, which I guess shows the same thing as the browser.
After that TCP packet, I see three more [SYN], [SYN,ACK], and [SYN], before the HTTP GET request is sent again, this time being slightly different as a "Cache-Control: max-age=0\r\n" line is added to it.
After this second one is sent, I get 200 OK from the server, and the page loads correctly.
I put a dump of the Wireshark capture here, should someone be interested in looking at it.
I would appreciate if you could help me understand what is going on.
Need a little more clarity on the issue: is example.com representing your site on the net, or a site you are trying to connect to? i.e. connecting to your site which is behind the reverse proxy breaks when VPN is on?
Anyway, this might be possible depending on the situation, you would need to configure a split tunnel using route rules to exempt certain IPs/IP ranges from being sent over the VPN. The problem you are likely having is that when the VPN is active, it's picking up all outbound traffic and sending it out the VPN connection. It's probably creating a situation where the request is coming in your public IP to the reverse proxy, but then the response is going out the VPN. You need to add exceptions to traffic that still need to not go out the VPN.
This is the same basic problem like when you turn the VPN on and you can't reach your local NAS or printer. With provider app based VPN connections, you can exclude based on the app a lot of the time, but if you are using OpenVPN on the Pi to connect to ExpressVPN, you'll have to use route rules to create the exceptions, which means IP based.
But it is going to depend on how things are setup in your network.
This guide might help.
It looks like you're not setting the actual html file for the 404, and also don't have a location block for it.
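Something along these lines, assuming a 404.html sitting in your document root:

```
error_page 404 /404.html;

location = /404.html {
    root /var/www/html;   # assumed document root
    internal;             # only served via the error_page redirect
}
```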
I don't see an error log defined in the config you pasted, but here are the docs for it: https://nginx.org/en/docs/ngx_core_module.html#error_log
If you have an error_log for your server context, try bumping the verbosity up to info or debug. If you don't have an error_log defined, define one. I was seeing permissions related errors at the default verbosity (error), but maybe you'll see more at higher levels.
This nginx doc has what you're looking for:
> When searching for a virtual server by name, if name matches more than one of the specified variants, e.g. both wildcard name and regular expression match, the first matching variant will be chosen, in the following order of precedence:
> 1. exact name
> 2. longest wildcard name starting with an asterisk, e.g. "*.example.org"
> 3. longest wildcard name ending with an asterisk, e.g. "mail.*"
> 4. first matching regular expression (in order of appearance in a configuration file)
> EDIT: Oh crap.. I think I see it now.. The scripts have static links to /
Damn.. I should've thought of that.
> Is there even a way to fix that?
You found the problem quick, nice! There are a few ways of handling this:
Assuming that your nginx instance is proxying to a separate server, it only ever knows whether the ip:port is responding to requests. It doesn't have an understanding that a server or a service is down, just that it is unable to communicate.
In the open source version of nginx, this sort of health detection and failover is handled passively, that is to say that a request must fail to a proxied server before nginx marks it as down. In the commercial version (and with some 3rd party modules), there are active health checks, which ensure that before a request from a client is proxied, the backend server it is proxying to is healthy.
In the open source version, these are tweaked in two places:
- In the upstream module, you can set how many fails it takes before an upstream server is marked as "down", along with how long it'll be "down" for, mark a server as backup, etc. Documentation here.
- In the location doing the proxying, you control which kinds of failures cause nginx to try the next server in the upstream group.

Make note of any mention of "commercial" in the docs, as those directives require the paid version of nginx.
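A sketch of where those knobs live in the open source version (addresses and numbers are just examples):

```
upstream app_backend {
    server 192.0.2.10:8080 max_fails=3 fail_timeout=30s;  # marked "down" for 30s after 3 failures
    server 192.0.2.11:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
        # which failures count and cause a retry against the next server
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```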
Another option is to use the proxy_bind directive with the transparent flag. Should work for both Stream (l4) and HTTP (l7) proxying:
Though how the proxied application uses that IP may mess some stuff up.
are we talking about layer 4 (tcp) or 7 (http) proxying?
For L4 (TCP) you can enable the `proxy_protocol` by adding this parameter to the `listen` directive.
Then
```
# on the listen directive:  listen 443 proxy_protocol;
proxy_protocol on;               # optionally pass the PROXY protocol on to the upstream too
set_real_ip_from 192.0.2.10;     # trust PROXY protocol info only from this (placeholder) LB address
```
see: https://nginx.org/en/docs/stream/ngx_stream_realip_module.html
For L7 (HTTP) you can use different headers like `X-Real-IP`.
The backup server will only take over once NO OTHER primary server is able to handle the request.
See:
https://nginx.org/en/docs/http/ngx_http_upstream_module.html#server
"marks the server as a backup server. It will be passed requests when ALL OF the primary servers are unavailable."
You could try one of these options:
1) Use the map module to convert the API keys to some variable (default 0 = not authorized, 1 = authorized for known keys); see the sketch after this list. You would then check this variable to decide whether to allow or deny the request. You could store the map in a separate config file, but the downside is that you would have to maintain two maps if you also want to map the API key to a rate limit zone.
2) Write a simple backend service which would do the check and then use the X-Accel headers to 'return' the request back to nginx for further processing. The way this works is you proxy_pass the request to the service, and if the response from that service has the X-Accel-Redirect header, nginx will then re-process the request in a new location matching the URL specified in the value of that header. There's also an X-Accel-Limit-Rate header which you could set in the response.
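For option 1, a minimal map-based sketch - the header name, keys, and backend here are made up:

```
# http context: translate known API keys into an "authorized" flag
map $http_x_api_key $api_key_ok {
    default            0;
    "example-key-123"  1;   # made-up keys for illustration
    "example-key-456"  1;
}

server {
    location /api/ {
        if ($api_key_ok = 0) { return 403; }
        proxy_pass http://127.0.0.1:8080;   # placeholder backend
    }
}
```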
It seems you're editing the core conf file, i.e. nginx.conf. Try going into sites-available and making your site config in there.
conf files in sites-available and sites-enabled are for Virtual hosting or server blocks or whatever they wanna call it.
Anyway, this guide should help you out some more: https://www.digitalocean.com/community/tutorials/understanding-the-nginx-configuration-file-structure-and-configuration-contexts
Please remember, rate limiting is not the same as DoS protection. To make your site robust, please implement fastcgi caching. It makes your site way faster, way more robust, takes you higher in the Google index and makes everybody happier. Info: https://easyengine.io/wordpress-nginx/tutorials/single-site/fastcgi-cache-with-purging/ More info: https://www.digitalocean.com/community/tutorials/how-to-setup-fastcgi-caching-with-nginx-on-your-vps
Of course it's a good idea to have rate limiting on your xmlrpc.php and wp-login.php but not because of DoS attacks but because of brute force attacks. Good luck!
Your problem is the MP stack, meaning MySQL and PHP, and probably also your code. Nginx alone can do pretty much a shit ton of requests/second.
To skip all the bs just set this up for your wordpress site. https://www.digitalocean.com/community/tutorials/how-to-setup-fastcgi-caching-with-nginx-on-your-vps
I guarantee you will be amazed at what you see. You can alternatively set up W3 Total Cache, but it's not as fast.
If you want to optimize your MP stack, then do this:
1) Upgrade to PHP 5.6, install and configure OPcache.
2) Configure/optimize your MySQL server, and if it's not MariaDB or Percona, switch to that.
3) Optimize your code: if you have a gigantic piece of shit monstrosity, no amount of caching other than nginx fastcgi cache and Varnish will make it run fast.
It takes knowledge to tune any system and a few hours of benchmarks, bonnie++, iozone and sysbench in order to find just the right configuration for everything.
Your VPS is fine, many people run nginx on a 1G or less.
Have you tried using the settings listed in DO's nginx optimization guide?
https://www.digitalocean.com/community/tutorials/how-to-optimize-nginx-configuration
What other services are running eating up ram/cpu?
I agree with the other posters that it's no big deal what you're seeing - as long as they hit something which isn't there. But if you want to block it anyway (or just want to make sure future attempts at more nasty attacks will be blocked), I'd do it with CrowdSec, which would watch the nginx log and block those attempts when it sees them (if they aren't blocked already based on signals from the crowd - meaning the same IP already attacked other users; in that case it would be blocked in your instance as well as every other relevant user's).
There is still the default server block configured in one of the config files; remove it. After that, append default_server to the listen directive.
Also, don't use "" for server_name, as it has a special meaning; the typical placeholder for catch-all servers is _.
Are you familiar with the absolute_redirect directive? It sounds like what I want, but it also does not work correctly. They mention this as a possible solution here.
And by the way is there a possibility to debug those rules in nginx somehow?
Ah cool okay. Not sure that building from source would give you any benefit for open ports.
Disclaimer: I'm an nginx employee, and we recommend using the prebuilt packages from nginx.org as best practices.
As much as I hate to say it, there's no 'perfect' solution for Wordpress as all variables are different (different server, load etc etc).
However, swapping out Apache and PHP for nginx and php-fpm will see you heading in the right direction. Digital Ocean do a good guide for getting you started. I myself have used it a few times actually.
You'll need a LEMP stack first - https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-on-ubuntu-12-04
And once you've done that, check this guide - https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-with-nginx-on-ubuntu-14-04. You should get running fairly sharpish after that.
I intend to create a project which will automate such a process and allow less experienced nginx users to get started with it on Wordpress quicker.
First off, do NOT use "mydomain .com". Someone owns that domain. ICANN has specific domains reserved for examples so as not to fuck with crawlers; example.com is the most common one. Never use valid domains you do not own as examples in your help posts. It is highly frowned upon.
If I'm reading your question correctly someone messed up here. Your AD domain should never have been set to the domain name. It should have been set to a subdomain for AD. This goes against Microsoft's recommendations for AD domains and AFAIK there are no graceful solutions to this. With a Nginx reverse proxy you can direct traffic directed to different URLs to different servers listening on the same port behind a router. Exchange hosts services on 80/443. What you're doing would break services on the network.
What you should do is correctly name your company domain. For some hacky solutions you can see here: http://serverfault.com/questions/526205/my-public-website-name-and-ad-domain-name-are-the-same-how-can-i-get-to-my-exte
This is permitted behaviour under RFC 7239. You can use the real_ip_recursive directive when there are multiple trusted load balancers, so that the last non-trusted IP (working back through the set of trusted load balancers) is used as the real IP.
You can read more about this here: http://serverfault.com/questions/314574/nginx-real-ip-header-and-x-forwarded-for-seems-wrong
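The usual combination looks something like this (the trusted ranges are placeholders for your load balancer addresses):

```
# trust these proxies when walking back through X-Forwarded-For
set_real_ip_from 10.0.0.0/8;
set_real_ip_from 192.0.2.0/24;
real_ip_header   X-Forwarded-For;
# keep stripping trusted addresses until the first non-trusted one is found
real_ip_recursive on;
```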
Is it just me, or is the CDN for the nginx site completely broken and slowing down the nginx website? I have been trying to find out what Amplify is, but the whole site has this issue - can anyone confirm? CDN URL that fails for me with a timeout and no response code: https://cdn.jsdelivr.net/
Also adding: http://www.downforeveryoneorjustme.com/jsdelivr.net
Cloudflare provide a list of source IP's so you can block all traffic except Cloudflare. This works well.
This is always better done as far upstream as possible so nginx may not be the best place to do it. Attackers may not be able to access the site but would still be able to make HTTP requests and use server resources.
We typically do this at the firewall level either in iptables or an edge firewall so that all traffic except cloudflare traffic (and our admin IP's) is just dropped :-)
Fixing markup since old reddit doesn't support triple-backtick code blocks:

```
server {
    listen 80;
    server_name app-one.domain.com;

    location / {
        proxy_pass http://app-one:3000;
    }
}
```
Defining an upstream is most useful if you have multiple servers that you are proxying to. Additionally, defining an upstream allows you to set a few additional connection parameters, such as keepalive, fail timeouts, maximum number of connections, etc.
Check out the upstream module docs, and the server directive, to see if you're interested in any of those: https://nginx.org/en/docs/http/ngx_http_upstream_module.html
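For example, a sketch with a few of those parameters (addresses and numbers are placeholders):

```
upstream app_pool {
    server 192.0.2.21:8080 max_conns=100 max_fails=2 fail_timeout=10s;
    server 192.0.2.22:8080 max_conns=100 max_fails=2 fail_timeout=10s;
    keepalive 32;                        # idle keepalive connections held open to the upstreams
}

server {
    location / {
        proxy_pass http://app_pool;
        proxy_http_version 1.1;          # required for keepalive to the upstream
        proxy_set_header Connection "";
    }
}
```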
You can use the proxy_bind directive with the transparent flag to make it seem like the original client IP initiated the request, instead of nginx.
An nginx.com doc on this kind of thing, though a bit different than what you're looking to do I think: https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/
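The nginx side of it is short - the OS-level routing described in that article is the fiddly part (the backend address is a placeholder):

```
location / {
    proxy_bind $remote_addr transparent;   # use the client's source IP toward the upstream
    proxy_pass http://192.0.2.50:8080;     # placeholder backend
}
```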
403 error could be a few things. Can't tell exactly, just from this small config snippet. This article may be of use.
You should have some config in this file which defines the connection. Usually this is set up by default, so it's strange that it isn't here.
Can you tell me what other files are in that pool.d directory?
If the www.conf is the only file, do this:
Follow this guide for php-fpm setup: http://www.rackspace.com/knowledge_center/article/installing-nginx-and-php-fpm-setup-for-php-fpm
Then update the Nginx fastcgi_pass directive to:
fastcgi_pass unix:/var/run/php5-fpm/DOMAINNAME.socket;
Sorry if this is a bit rough, typing this on my mobile!
Via HN. Discussion: https://news.ycombinator.com/item?id=7808583
Commenters discuss pros and cons of using syslog. Advantages include the possibility of centralizing logs from multiple servers. One user points out that syslog uses UDP, so log data may be dropped silently, but then another user chimes in and mentions rsyslog (a syslog implementation), which has TCP support.
Pretty much any log analysis tool will do what you want, though they may need some format tweaking to understand however you have your nginx logs formatted. eg;
Ok, I thought there were more out there, but I'm sure you can find something that meets your needs. It's been ages since I've looked at log analytics though, last time I did it was before Google turned Urchin into Google Analytics.
Worst comes to worst, you can do manual analytics using awk/grep/whatever to grab the request URI log field and do whatever tabulation you want.
Thank you so much, I had forgotten to change that from the default. I just did, and now the homepage is up, but none of the theme images show, and none of the other paths work (e.g. /wp-admin).
Castr isn't as cheap as I need it to be, neither is Restream.io. And Mobcrush is free, but I don't know for how long, and there is an issue with me restreaming to a custom stream with them.
So I created my own nginx server using a 12 month free trial with AWS EC2 and created an Ubuntu instance.
It all works perfectly with nginx. I can restream to the three platforms I need. One platform is totally automated. But Facebook and YouTube are not. So while I can restream to them, I have to start new streaming instances on each website in their GUI before I can stream.
This post here helped clear it up: http://serverfault.com/a/317678
My config now looks like:
```
server {
    listen 80;
    listen 443 ssl;
    server_name site.com www.site.com;
    sendfile off;

    charset utf-8;
    root "/var/www/site.com/public";
    index index.html index.htm index.php;

    location /affiliate {
        root "/var/www/affiliate";

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            include snippets/fastcgi-php.conf;
        }
    }

    location / {
        try_files $uri $uri/ /index.php;

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            include snippets/fastcgi-php.conf;
        }
    }

    error_log /var/log/nginx/site.com-error.log error;

    location ~ /\.ht {
        deny all;
    }

    #location = /favicon.ico { access_log off; log_not_found off; }
    #location = /robots.txt  { access_log off; log_not_found off; }

    ssl_certificate /etc/nginx/ssl/site.com.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/site.com.key;
}
```
snippets/fastcgi-php.conf:
```
# regex to split $uri to $fastcgi_script_name and $fastcgi_path
fastcgi_split_path_info ^(.+\.php)(/.+)$;

# Check that the PHP script exists before passing it
try_files $fastcgi_script_name =404;

# Bypass the fact that try_files resets $fastcgi_path_info
# see: http://trac.nginx.org/nginx/ticket/321
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;

fastcgi_index index.php;
include fastcgi.conf;
```
you should just need to use regular expressions to define the URLs to rewrite or use the if structure
http://nginx.org/en/docs/http/ngx_http_rewrite_module.html
edit: this looks like the if structure to detect query parameters: http://serverfault.com/questions/160790/nginx-rewrite-for-an-url-with-parameters
This is a very common problem. The best place to solve it is in your upstream, which generates the (usually) HTML response. There's a nice and easy configuration setting for DokuWiki, but I'm not finding a similar configuration for Synology.
You can try using the sub_filter module, but it can be tough to get all of the changes necessary, with lots of trial and error.
Doesn't look like it's an issue with DNS at all, or with the server serving you pages on the right domain, but rather with your software package (dokuwiki) and its backends.
https://www.dokuwiki.org/faq:blankpage
That might help? Check your page source on the broken link and see if anything is popping up. And you can always check your syslog to see what broke/is breaking. My best guess is some php dependency isn't being reached correctly
I've used this as a reference point to get the type of metrics I need for nginx.
https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#alerting-on-http-response-code
However, I have instrumented using a variety of Grafana + nginx plugin, appdynamics metrics browser extension and logs to Splunk.
Happy to go through a specific example depending on which route you take.
Ok, I was fighting the inevitable by trying to make it only available on the intranet by a name. You both helped clear that up.
I have a domain name on the www.no-ip.com service set up on my router, and an A record that I basically never use just sitting out there. The service is connected to my router, so I put in a port forward from port 80 to port 81 and updated the server_name to the domain that I have an A record for, and that seemed to have worked... not sure if that is proper, but it worked!
I imagine that I will want to use a different server port... not sure if the incoming port should be different.
nmap just gets this from the default page that gets served up.
I think what you're trying to ask is how to block nmap from scanning your http server - in which case the only real way is by User Agent blocking.
Simple Google searches show the default user agent string for nmap is "Mozilla/5.0 (compatible; Nmap Scripting Engine; http://nmap.org/book/nse.html)" - granted, the client can change this easily.
So to block this with nginx - you could do something like this in your location block:
if ($http_user_agent ~* "nmap scripting engine") { return 403; }
I wouldn't really do this though: https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/ - seems like a heavy cost to hide a piece of information that's easily seen.
I think you're out of luck there - If you were using apache with mod_autoindex you could use:
IndexIgnore *.torrent
But looking at the nginx module documentation there's nothing similar.
1. Install openssl on your setup and generate a temporary self-signed certificate:
   sudo openssl req -newkey rsa:4096 -nodes -keyout /tmp/tempKey.key -x509 -days 365 -out /tmp/tempCert.crt
2. Point your server block at the temporary cert:
   ssl_certificate /tmp/tempCert.crt;
   ssl_certificate_key /tmp/tempKey.key;
3. sudo systemctl restart nginx
4. Install certbot: https://certbot.eff.org/lets-encrypt/debianbuster-nginx
5. sudo certbot --nginx
6. Check your config is correct: the temp keys should be disabled and the certbot certs enabled in the server block, preferably also a redirect from HTTP to HTTPS.
7. Restart nginx again.

Simply having the SSL certificates should give you the bare minimum to run an SSL reverse proxy; there is so much more in terms of ciphers, headers, stapling, etc.
NB: You should consider redacting the value you've set for your server_name - for your own privacy.
The reason this isn't working is because you are using the same server_name value for both server blocks. Nginx will only use the first (iirc) of the blocks specified. Changing the server_name value for the second server block will get this working for you.
As for the directory structure, yes this will work (if you fix the server_names); however, as /var/www/html is the default document root, you'll see everything that's in its child directories. I suggest putting them inside their own directories under /var/www/sites/ instead. This will keep things tidier and prevent people from stumbling across other websites hosted on the same server. This may not be much of a concern when you're starting out, but it becomes more important when you're hosting more sites and/or for multiple people/clients.
You may like to try this config:
```
server {
    listen 80;
    server_name test1.example.com;
    root /var/www/sites/test1;
    index index.html;
}

server {
    listen 80;
    server_name test2.example.com;
    root /var/www/sites/test2;
    index index.html;
}
```
After you've got this sorted, I'd also suggest adding unique log files for each of your server blocks, this helps separate log trails per website. https://docs.nginx.com/nginx/admin-guide/monitoring/logging/
You also need to look in to Let's Encrypt so that you're running your sites over HTTPS instead of HTTP. https://certbot.eff.org/
To what end? By adding additional servers/points of failure to your SSL configuration, you're only decreasing the level of security they provide. You'd essentially have to give your private key to servers that don't need to have it...
That said, the Let's Encrypt! project aims to automate and simplify the entire certificate management process. It's backed by the EFF and a bunch of other big names, so I'm fairly certain we'll see wide adoption in years to come.
Unfortunately you wouldn't be able to do this simply with NGINX; it would involve another application to repeat the stream to multiple platforms. The first Google result for multi-casting to social media brings up https://castr.io/. I haven't used them, but from their website it appears it'll do what you need relatively cheaply.
You can, definitely. You need to have a script or a service that listens for the requests from fastcgi. For Perl, it's usually a script of some kind. Here's an example of using a wrapper script with Perl:
http://people.adams.edu/~cdmiller/posts/nginx-perl-fastcgi/
Ruby has mongrel, http://rubygems.org/gems/mongrel, a webserver that can be passed back to with fastcgi.
I don't have much experience with Python and fastcgi, but my understanding is that it can be done with a wrapper much like Perl.
PHP-FPM is the FastCGI process manager (aka "FPM") for the PHP interpreter. So those are the same thing. When people run PHP as FastCGI they're actually running PHP-FPM which loads the regular php interpreter internally.
> also, is there some newer software that is better than cgit?
Oh yeah there's loads of stuff out there. gogs is popular but gitlist is pretty low overhead (if that's what drew you to cgit to begin with). Gitlist is written in PHP as well so your stack would actually get simpler since you'd just be running two different PHP apps at that point. Assuming Gitlist does everything you want though.
Hahaha that is a coincidence - here is what I'm trying to do. I just have very little experience with Docker and even less with Nginx & certificates. I'm sure someone who had a clue what they were doing wouldn't be having the issues I am.
Good morning,
Maybe there is something wrong with the virtual machine's network. Have you tried to get HTTP request timings from the reverse proxy server to the web server? You can do that with cURL.
HTTP Basic authentication exhibits as a pop-up on the client, like you implemented with auth_basic in nginx. You would have to look at the documentation for what you're proxying to, to see if it accepts HTTP Basic auth. Some applications won't support it, but will get confused if you say, "Hey, here's this Authorization header, do something with it".
In this case, whatever is running on 127.0.0.1:2246 was either rejecting the Authorization header nginx was sending, or something else weird was going on.
You can add the $upstream_addr, $upstream_status and $upstream_response_time variables to your logging format (see: https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format). Doing so can help you determine whether nginx is generating the error/unexpected response, or your backend application is.
Does this application use websockets? If so, try: https://nginx.org/en/docs/http/websocket.html
Review your error log, try adding $upstream_addr / $upstream_status / $upstream_response_time to your logging format to help ensure you figure out if nginx is returning an error, or an upstream.
Since you don't provide any useful environment details, here's an equally bland boilerplate config:
```
server {
    listen 80;
    listen 443 ssl;
    listen 3000;

    ssl_certificate_key /path/to/key;
    ssl_certificate /path/to/cert;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```
Here are the nginx docs: https://nginx.org/en/docs/
I'm seeing *.domain.com as your only entry in server_name. If you want to allow domain.com as well, your entry should be:
server_name *.domain.com domain.com;
If you want a deeper understanding on what nginx is choosing and why, you could always turn on debugging in the error logs. See: https://nginx.org/en/docs/debugging_log.html
sub_filter has three things that might be limiting it from working for you:
root is an absolute path for local server directory pathing; every time you invoke it, you set the root for that location (and any locations nested under it). I bet you can get away with removing that root and it'll still work, since you already have root set in the server context.
There are two directives you can use to tell nginx where to look for local files: root and alias. For example:
```
# Incoming URI: /images/path/to/file.jpg

location /images { root  /usr/local/www/; }   # looks for /usr/local/www/images/path/to/file.jpg
location /images { alias /usr/local/www/; }   # looks for /usr/local/www/path/to/file.jpg
```
Here is the nginx.org doc on setting up HTTPS: https://nginx.org/en/docs/http/configuring_https_servers.html
If after you've checked your config you're still having trouble, it could be one of the following:
- Double-check the loaded config with sudo nginx -T (case matters)
- server_name set incorrectly, so the default 443/ssl server context is answering, returning the self-signed cert
- listen set incorrectly, and the default_server is answering instead of your server context
, and then check your nginx access/error logs. It should tell you if it's unable to find the file you're trying to access. That's controlled by the log_not_found directive, which defaults to on.
The last parameter of try_files is an internal rewrite to that location. Nginx will start a new location search for /index.html. From the documentation:
> If none of the files were found, an internal redirect to the uri specified in the last parameter is made.
Use a nginx map to operate based on the HTTP Useragent. Then add an if directive for your port 80 server, something like:
```
server {
    ...

    if ( $mapped_useragent ) {
        return 301 https://$host$request_uri;
    }
}
```
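For completeness, the map that defines $mapped_useragent would live in the http context and might look something like this (the agent patterns are placeholders):

```
# 1 = redirect this user agent to HTTPS, 0 = leave it alone
map $http_user_agent $mapped_useragent {
    default       0;
    "~*iphone"    1;   # placeholder patterns; match whatever agents you care about
    "~*android"   1;
}
```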
You can read up on how nginx processes server names here, it might help you separate your subdomains (what I'd recommend doing): https://nginx.org/en/docs/http/server_names.html
You could use if in that location to look at the $http_host variable; if it doesn't match what you want, return a 404 (or whatever status you want):
```
location ^~ /sonarr {
    if ( $http_host !~* server.example ) {
        return 403;
    }
}
```
Separating your subdomains seems like a better plan long term though.