Or you could do this with nginx.
A few lines of configuration and you're set. Beats developing and maintaining an application.
I replaced a PHP app at work with this 8 months ago and haven't had to look at it since.
Here's the nginx documentation on HTTPS. Be careful with file permissions on the private key so that non-essential users can't read it.
Then, with OpenSSL, you can generate a public/private key pair and a Certificate Signing Request - details here.
Permalinks may be a thing of the application you're proxying to, they aren't an nginx thing.
In your example, let's follow the logic in your config: nginx takes the root value and tries to load a file or directory at /usr/share/nginx/html/hello-world, then it tries /usr/share/nginx/html/hello-world/, and if it can't find those files/dirs it performs an internal rewrite of the URI to /index.php?args. You can read the details in the try_files documentation.
What? We use nginx as a reverse proxy, with an HTTPS connection all the time on our servers at work.
What about this doesn't work for you?
Have a look at the docs for the ssl module, specifically the variables section. It provides a bunch of variables you can add to logs, including $ssl_client_verify, which contains the failure reason.
OK. As long as they're on separate IPs, you can do both TCP/UDP and HTTP proxying. It'd look something like this:
```nginx
http {
    server {
        server_name www.example.com;
        listen <www ip address>:443 ssl;
        <ssl parameters>

        location / {
            proxy_pass http://192.168.2.20;
        }
    }
}

stream {
    server {
        listen <tcp ip address>:443;
        proxy_pass 192.168.2.21:9050;
    }
}
```
The nginx stream module is how nginx handles TCP and UDP proxying, you can read up on it here: https://nginx.org/en/docs/stream/ngx_stream_core_module.html
The reason for my question earlier is that you cannot have an HTTP server context listen on the same IP:Port as a Stream server context.
Your nginx site conf probably only has HTTP set up.
https://nginx.org/en/docs/http/configuring_https_servers.html
Ideally you'd want to set your HTTP conf to redirect to HTTPS:
return 301 https://site.tld$request_uri;
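A fuller sketch of that redirect server block, with site.tld standing in for your domain:

```nginx
server {
    listen 80;
    server_name site.tld www.site.tld;

    # Send every plain-HTTP request to the HTTPS site
    return 301 https://site.tld$request_uri;
}
```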
Use a socket stream in nginx instead of http, that way nginx won't be modifying the data in any way.
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/
https://nginx.org/en/docs/stream/ngx_stream_core_module.html
It's probably unhappy that the server reports running nginx 1.4.3, and thinks the code for the HTML tables looks suspect. No massive problems with that, but they should consider updating nginx.
You'd have to dig up the reasoning of each team, but let me make this additional point: if you are building these applications yourself, there are very often configuration flags you can pass to the program which tell it where to install itself.
In the specific case of Nginx, see for example: https://nginx.org/en/docs/configure.html Example of parameter usage (all of this goes on one line):

```shell
./configure --sbin-path=/usr/local/nginx/nginx --conf-path=/usr/local/nginx/nginx.conf --pid-path=/usr/local/nginx/nginx.pid --with-http_ssl_module --with-pcre=../pcre-8.39 --with-zlib=../zlib-1.2.8
```
See how the binary and configuration file can be specified? There may be default values here that the application developer selects, but the distro maintainer may change them to better align with other applications packaged for the distro, or you may want to install a different version into a different location.
So, you can see cases where the ability to be flexible on exactly where something is placed in the filesystem can be beneficial.
Yea, nginx or Apache would have no problems doing something like this.
Your specific problem with the Location header would be solved with the [proxy_redirect](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect) directive in nginx or [ProxyPassReverse](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypassreverse) in Apache.
There's nothing magical about it - ultimately you're pulling a file from a server, e.g., https://xkcd.com/atom.xml - you're getting that atom.xml file and it's trivial to log the ip.
e.g., See:
https://nginx.org/en/docs/http/ngx_http_geoip_module.html
So it would be easy for a podcast host to get the (not precise) locations of their listeners. When I hear a podcaster bragging about having so many listeners around the world, I get the impression those are paying listeners and the info comes from payment information. If you're concerned about it, use a VPN. If you're paranoid about it, idk, fetch all your podcasts after hijacking someone's session at a Starbucks parked 5 blocks away with a panel antenna, and make sure to use a cheap SBC burner you only use once and can destroy after.
You could create a custom log_format, and add the $args variable to that. Or you could, in the php location, add a new header which returns the value of args, like:
add_header nginx_args $args;
In your given example of /hello-world, $args is empty. Nginx gets to the php handling location by using the fallback in try_files to rewrite the request from /hello-world to /index.php.
Now, the question is, how does your php application know that it was /hello-world originally!? Well, that's handled by this part of your config:
include fastcgi_params;
If you look at that fastcgi_params file, you'll see nginx setting a number of fastcgi parameters. One of them is REQUEST_URI, which contains the original, pre-try_files rewritten URI (or path).
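The relevant line in the stock fastcgi_params file looks like this:

```nginx
fastcgi_param  REQUEST_URI  $request_uri;
```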
You can tell them that you can't find any information on the module and that it isn't listed in the nginx directive index: https://nginx.org/en/docs/dirindex.html
Sounds like your vendor needs to provide more information.
It sounds like you need to make your desired server context in nginx the default one. ALB seems to be testing based on IP address and not passing along a HTTP Host header, and is getting the default nginx server context.
```nginx
server {
    listen 80 default_server;
    listen 443 ssl default_server;
}
```
https://nginx.org/en/docs/http/ngx_http_core_module.html#listen
return 444 is a special nginx return code which immediately cuts off all communication with the client. If you want to return a status to the end user, use a normal HTTP 403 response code. Used in conjunction with the error_page directive, you can send back a stylized HTTP 403 response.
Fwiw the nginx version in the Buster repos seemed to have performance issues for me too. When I updated to Bullseye, nginx throughput tripled. At first I thought it was because I'd migrated to ZFS and tuned the crap out of it, but using mdraid+xfs was even faster, so I think there was some nginx bottleneck or other. I'd recommend sticking to the upstream mainline repo if you can.
You're changing the URI from /acurite/ to / when you proxy it. The application you're proxying to is likely sending back HTML with hrefs/src pointing to /, instead of /acurite.
To verify this is the problem, open your browser network debug tab and look at the full path of the files which aren't loading. Likely it'll be /js/example.js instead of /acurite/js/example.js, or something like that.
See if there is a public URL or base URL setting in the application you are proxying to. It's best to have the backend application send the correct html in the first place.
If you're unable to get the backend application to send the correct HTML, you can try and do fixup in nginx. You can use the sub_filter directive to rewrite the response html, you can use the proxy_cookie_path directive to fix cookies, and the proxy_redirect directive to fix HTTP header redirects.
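A rough sketch of that fixup approach - the /acurite/ prefix is from your setup, but the backend address and the sub_filter patterns are illustrative guesses:

```nginx
location /acurite/ {
    proxy_pass http://192.168.1.50/;      # hypothetical backend address

    # sub_filter can't rewrite compressed responses
    proxy_set_header Accept-Encoding "";

    # Fix Location/Refresh headers on redirects from the backend
    proxy_redirect / /acurite/;

    # Rewrite absolute paths inside the HTML body (illustrative patterns)
    sub_filter 'href="/' 'href="/acurite/';
    sub_filter 'src="/'  'src="/acurite/';
    sub_filter_once off;
}
```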
You can hide the version, but it looks like you can only disable the Server HTTP header if you have the commercial subscription: https://nginx.org/en/docs/http/ngx_http_core_module.html#server_tokens
Otherwise you'd need to download the source, modify it so it doesn't send the Server field, and then recompile.
> how can you direct nginx to return a specific file (also I need to be able to set the content-type) for anything that matches a given location block?
Use the alias directive:
```nginx
location = /api/product/whatever/details {
    alias /etc/nginx/api_experiment/response_for_product.json;
}
```
The = modifier on location means only that exact URI will match. You can make it a bit more flexible by removing the =, or by using another modifier. Location modifiers are documented here.
If you have a bunch of products, and don't want a bunch of locations, you can use a regex location, capture the product id, and use that in the alias, e.g.:

```nginx
location ~ /api/product/(?<product_id>[0-9]+)/details {
    alias /etc/nginx/api_experiment/$product_id/response_for_product.json;
}
```
regex locations, try_files and alias don't get along if you use them together: https://trac.nginx.org/nginx/ticket/97
Does nginx show any reason why the authentication failed? You can give `$ssl_client_verify` as a second parameter to `return` to have the error message sent to the client. Or, if you set ssl_verify_client to on, nginx should automatically send the reason via a custom HTTP code.
You can also try having Firefox load the client certificate from the macOS system keychain via security.osclientcerts.autoload
(experimental support in Firefox 75).
The nginx equivalent of RewriteRule is rewrite and it works almost exactly the same way.
rewrite "^(.*?)\.[0-9]+\.(css|js|svg)$" "$1.$2";
rewrite ^(/.*)\.html(\?.*)?$ $1$2 permanent;
The above rewrite will never contain any query arguments. Rewrite can have query arguments (?.*) in the destination, but the first parameter only operates on the path portion of the URL. It's not going to cause you any issues, since you have a ? on $2, but $2 will never actually be filled with anything.
I think the whole config could be simplified a bit, try:
```nginx
server {
    server_name example.com www.example.com;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        root /var/www/html/example.com;
        rewrite ^/(about-us|gallery|contact-us)/? /$1.html break;
    }
}
```
Notes:
index index.html is the default value: https://nginx.org/en/docs/http/ngx_http_index_module.html#index

In case anyone is wondering what this has to do with Perl, the server is driven by the nginx HTTP Perl module: https://github.com/newsnowlabs/dockside/search?l=perl
Since you are using nginx for this: nginx has this feature built in, though I haven't personally used it before.
This is a perfect use case for the map module. It supports regexes and default values, so you can create a new variable with the desired php version based on the $uri
variable easily.
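A sketch of that map - the URI prefixes and PHP-FPM socket paths here are made up:

```nginx
# At http level: pick a PHP-FPM socket based on the request URI
map $uri $php_socket {
    default        unix:/run/php/php8.2-fpm.sock;
    ~^/legacy/     unix:/run/php/php7.4-fpm.sock;
}

# Then, inside your server context:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass $php_socket;   # fastcgi_pass accepts variables
}
```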
> nginx: [emerg] "stream" directive is not allowed here
The stream directive is at the same level as http and mail. If you are including a stream context under http, it will throw that kind of error.
https://nginx.org/en/docs/stream/ngx_stream_core_module.html
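In nginx.conf, the contexts sit side by side at the top level; a minimal skeleton (the TCP backend is made up):

```nginx
events { }

http {
    include /etc/nginx/conf.d/*.conf;   # your HTTP server blocks
}

stream {                                # sibling of http, not inside it
    server {
        listen 9000;
        proxy_pass 192.168.1.10:9000;   # hypothetical TCP backend
    }
}
```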
A plain location is a literal string match, you need a regex enabled location. Try:
location ~ /scorpa/sa/component/(.*) {
Location docs: https://nginx.org/en/docs/http/ngx_http_core_module.html#location
Sorry, I'm not going to pull up the webpage. Your requests to /marlin/ seem to be proxied correctly; however, the HTML response appears to reference /static/. /static/ is outside of /marlin/, and unless nginx knows where to find those files, those requests are probably going to result in a 404.
Use your browser's network debug tab to see which files aren't loading. Then update your application's code to ensure all of the references (src=, href=, etc.) are directory-relative or start with /marlin/.
You can try and fix this with nginx's sub_filter module, but it's not elegant, and you'll probably have to do a lot of tweaking to get it working. https://nginx.org/en/docs/http/ngx_http_sub_module.html#sub_filter
Well because Windows can't just sit there and ossify, and "Agile" is how all software works now. So instead of a big lift every 3 years that may or may not have what people want, and requires a big campaign to switch over to, they just do a major once a year, and it gets installed when the old version is no longer supported.
The better question is why you are hacking IIS into a client OS :). I'm assuming you need only a handful of connections, because Win10 has a limit there (20 max), unlike server.
Unless you are doing something very IIS-y, something like an nginx-for-win may be a much better fit, and won't get uninstalled: nginx for Windows . Or even better, running these on Ubuntu so you don't have a connection limit.
Consider this your opportunity to rethink the software architecture and application choices / operational model. Your current operational model is a bit brittle, because it relies on adding an OS feature that the OS does not, actually, support; and subsequently removes on major upgrades.
I'd bet that you have another server context which is answering the request. Try running sudo nginx -T and look for multiple listen 80 server contexts. Either combine them, or use server_name to distinguish one from the other.
The concept is simple enough. The proxy listens for requests and forwards them to the correct service or host (which essentially remains hidden).
Other than that, you'll learn more setting one up than reading about it. I use nginx, which is one of the more commonly used ones. Most of the information you need is in the documentation on the website.
Your nginx config is only set up to respond to www.domain.com and domain.com.
Changing:
server_name alexventura.me www.alexventura.me;
To:
server_name alexventura.me *.alexventura.me;
Should suffice.
https://nginx.org/en/docs/http/server_names.html#wildcard_names
Your SSL certificate should also be a wildcard certificate, and the DNS record for restaurant1.alexventura.me should be something like:
Host: restaurant1
Type: CNAME
Value: alexventura.me
Where your root (@) A record points at your VPS.
I have always been a back-end guy. Getting a praise for the user interface literally made my day! Thank you!
It does use nginx's ngx_http_mp4_module. So, as it's stated, it is pseudo-streaming the file.
Forward connections from router to server machine, then configure server to resolve locally. In nginx, the directive is resolver.
The scheme would be something like:
dynamic.dns:80 -> router:portA -> server:80 -> upstream_loopback:portB.
Where portA and portB are non-privileged ports.
If you're logging your upstream address, status, response time, check those logs.
If you aren't, then you probably should add that to your log_format, e.g.:

```nginx
log_format up_head '$remote_addr $remote_user [$time_local] '
                   '"$request" $status $body_bytes_sent '
                   '"$http_referer" "$http_user_agent" '
                   'upstream $upstream_addr $upstream_status $upstream_response_time';
```
Then, double-check those logs to see if nginx is returning the 413, or your upstream.
You can definitely do this - I found things like certbot that try to do it all with scripts always seemed to break. Another option (if you're comfortable) is to do a little scripting and use the lego client - it seems really solid, is cross-platform, and I always use DNS challenges as they seem to be much more reliable. Once you get the cert using lego and DNS solving (you don't need the web proxy / apps up yet for that), you can follow the nginx instructions for configuring SSL. If you want something that "tries to do it all for you" (dynamic DNS, Let's Encrypt (using lego), setting up the nginx proxy, etc.), you can see if my project is helpful - Bitsii Bridge - it tries to wrap this all up in an application that handles it for you.
You could use the alias directive:
```nginx
location /list {
    alias /var/www/;
    autoindex on;
}
```
If you want to show only that directory without allowing any subdirectories, you can use location = /list instead, though this would break any links into subdirectories; I wouldn't recommend it - it just feels ugly. See the reference for the autoindex directive here.
or https://nginx.org/en/docs/http/load_balancing.html
You would port forward 80 and 443 to the load balancer. The load balancer would be configured to route to either Bitwarden or Synapse, depending on what the Host header is set to in the HTTP request and what's in the server_name field for the TLS connection.
You will then need to setup separate DNS entries for the two services.
Well, I'd say the issue has nothing to do with nginx, but with the way you have those containers set up. It sounds like you have the container ports exposed on all network interfaces, including the public one, so they are reachable from anywhere, and the request doesn't even hit nginx (which listens only on port 443 based on the config you provided). When you run docker ps or docker port, you'll probably see something like `0.0.0.0:25565->25565/tcp` - the zeroes mean that the container port is exposed on all interfaces.
When you want to use nginx as a reverse proxy, you generally let the services (in this case docker containers) listen only on localhost:someport, so it cannot be reached directly from the outside. The request then has to go via nginx, which will then pass the request to the service.
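As a sketch, with a hypothetical container published only on loopback (e.g. started with `docker run -p 127.0.0.1:8080:80 ...`):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;             # hypothetical name

    location / {
        proxy_pass http://127.0.0.1:8080;    # only reachable via nginx
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```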
By the way, you're trying to proxy HTTP requests to the minecraft container, which probably won't work - I assume minecraft uses some custom protocol over TCP, and not HTTP. If you really want to hide it behind nginx, you'll have to use the stream module instead of the http module.
You can use nginx's built-in access control module to restrict traffic for intranet domains. For internal websites, you would simply add allow 192.168.0.0/24 followed by deny all (the directives are checked in order). You could create 2 nginx instances (one internal, one external) so you wouldn't have to play too much with the config, but configuring your subdomains with that module would let you keep a single running nginx instance. As it's pretty essential for you to reach your websites, maybe it'd be good to run this alongside Pi-hole.
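For an internal-only site, that would look something like this - the hostname is made up, and you'd adjust the subnet to your LAN:

```nginx
server {
    server_name internal.example.lan;   # hypothetical internal name
    root /var/www/internal;

    location / {
        allow 192.168.0.0/24;   # your LAN
        deny  all;              # everyone else gets 403
    }
}
```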
I don't run Pi-hole (yet) at home, but I think there's some way of configuring it to resolve DNS entries to a local intranet address.
I'm not really sure what you're asking there.
In a normal web application structure, you have a backend application (built with Express, Flask, what have you), which listens continuously for HTTP requests from the local network or strictly on localhost. In front of that, typically on the same machine (though in larger setups, this may be on a different system), you have nginx, which uses nginx's reverse proxy functionality to handle remote requests over HTTPS, and proxy_pass
them to the underlying application. In this setup, nginx is just a middleman. You can use it to modify, rewrite, or reroute requests as needs arise, but for the most part, nginx is just going to forward requests from remote connections to your backend web application directly.
You can think of nginx as a messenger. Instead of clients talking to your underlying application directly, they speak to nginx, who then speaks to the application on their behalf, and relays the response of your app back to the client. If needed, nginx can modify either message in transit, but for the most part, nginx will just add some markings to the message to state who it came from, so the underlying application knows it's not just nginx talking to it.
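In config terms, that messenger role is only a few lines; the names and backend port here are assumptions:

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;            # hypothetical name

    location / {
        proxy_pass http://127.0.0.1:5000;   # your Express/Flask app

        # The "markings": tell the app who the real client is
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```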
Nginx does have a configuration API, but only in the paid (nginx plus) edition.
I'm not sure if I understand your use case correctly, but if you're heavy on microservices, have a look at Traefik, as it is built with exactly that use case in mind.
You can handle this with session tracking in the backend application through cookies or session token as /u/Xibby previously mentioned, but then all servers need to be aware of all sessions / cookies on all servers. You could also make the cookies / sessions server specific but then you need to make sure that the traffic from the user always ends up at the same server through load balancing.
In NGINX it's called a "sticky cookie", and in F5's BIG-IP it's called a session cookie.
Happy hacking
You will need to install a web server on your computer. You have many options with these being most common/popular caddy / apache / nginx
If you want to be able to use a URL instead of an IP address you will need to get a domain name. If you do not have a static IP address from your ISP you will need to setup a dynamic DNS. There are free options for both domain names and dynamic DNS providers.
nginx has some excellent documentation, then. I suggest taking a look at their HTTP load-balancing example first - I think it gives a good overview.
For medium-scale apps, I'd personally default to using the same puma cluster that serves your primary web traffic, though at a certain point it probably still makes sense to use separate server pools even if they're running identical code.
Nginx should be happy to handle the main connection distribution work (https://nginx.org/en/docs/http/websocket.html); you could put a TCP load balancer in front of that if needed.
My only production-level experience with a proper ActionCable setup is from when I was at Basecamp, where it ran as a separate cluster at least partly for historical reasons: pre-release/early versions were not very friendly with running in the same process as normal web requests.
Can you show us your configuration file?
What about https://nginx.org/en/docs/http/ngx_http_core_module.html#server_name?
Check out nginx's webpage for the default_server directive:
https://nginx.org/en/docs/http/request_processing.html
The default_server setting is for when nginx doesn't know where to send a request because the Host field in the request doesn't match any of the defined servers. If you just want all unknown requests to go to your HTTPS site, then set that server as the default_server.
Depends on how you evaluate it. From what I know of the nginx buffers, a smaller size may be better for memory consumption, while a larger one may save some CPU, since any part of the response exceeding the buffer size has to wait until some of the buffers are sent to the receiver. The first number in gzip_buffers is the number of buffers, and the second is the buffer size. As stated here (https://nginx.org/en/docs/http/ngx_http_gzip_module.html), the default buffer size is one memory page, which is worth keeping in mind: on a service under high load you may see performance losses or gains depending on what you set it to (think virtual memory fragmentation and how the OS deals with it). Depending on the load of your service, the kind and size of data it serves, and the platform you run it on, the numbers may vary. I'd suggest running several performance tests with different parameter values to find the configuration that suits your situation best.
A somewhat uncommon, but not unheard of request, is to be able to mirror traffic to multiple backends. Previously you could do this with post_action
, but that's a really poorly documented directive, and I suspect pretty buggy.
The new module will let you start a subrequest to issue the client's request (with or without the request body) to a different URL.
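A minimal sketch of the mirror module in use - the backend addresses are made up:

```nginx
location / {
    mirror /mirror;             # copy each request as a subrequest
    mirror_request_body on;     # include the client body in the copy
    proxy_pass http://127.0.0.1:8080;               # primary backend
}

location = /mirror {
    internal;                   # not reachable by clients directly
    proxy_pass http://127.0.0.1:9090$request_uri;   # shadow backend
}
```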
Hmm yeah, that's fair. I'd try using the following settings for the reverse proxy
https://github.com/qbittorrent/qBittorrent/wiki/NGINX-Reverse-Proxy-for-Web-UI
Additionally, you probably need to enable "Use HTTPS" within qBittorrent so that the forms & URLs are generated correctly. However, qBittorrent will then be using a self-signed certificate. I believe nginx will ignore these cert errors by default (see proxy_ssl_verify).
Also, let me just give you some more pointers to what to research:
HTTP is the protocol all web applications speak - browsers send requests and servers respond, though in the new HTTP/2 standard the distinction is quite a lot more blurry.
WSGI is the basic API a web server uses to communicate with your Python application. You don't often need to touch it directly, because the framework you use will wrap it for you.
This tutorial walks you through building your own web framework over raw WSGI, and explains how the pieces fit together.
Gunicorn is a very fast WSGI server, and something you'll probably want to use in production, combined with an (even faster) proxy server like Nginx.
> You have to set the local one and they assign you a public one.
This is not how it works at all. The local port box actually selects the local port on which OpenVPN itself will bind. This is useful if you use strict firewall rules and need the port OpenVPN uses for UDP traffic to not change. It has nothing to do with port forwarding.
The way port forwarding works is that a random port over 10000 is assigned to you, and it is a 1:1 mapping (so, if you get port 12345, then you have to connect to the PIA gateway on port 12345 and the traffic will come to your computer on port 12345). The reason it works this way is that since the IPs are shared, it is not possible to let people select which port they want as everyone would fight for the default ones, but it is also pretty bad for security because it would make it trivial for someone to portscan the gateways and identify common services.
If you can't change the port Plex uses, a possible solution to this could be to use your firewall or a reverse proxy software to relay it back to the correct port. I don't know how Plex works in particular, but you can also use nginx or HAProxy to forward the traffic to the right place.
Personally I use the firewall option, as it is pretty easy to do on Linux. I don't know if the Windows firewall can do this however.
When there are no regex locations the longest matching prefix is selected, the order in the config file does not matter.
> To find location matching a given request, nginx first checks locations defined using the prefix strings (prefix locations). Among them, the location with the longest matching prefix is selected and remembered. Then regular expressions are checked, in the order of their appearance in the configuration file. The search of regular expressions terminates on the first match, and the corresponding configuration is used. If no match with a regular expression is found then the configuration of the prefix location remembered earlier is used.
https://nginx.org/en/docs/http/ngx_http_core_module.html#location
Hmm, that's odd.
I suspected you had a missing slash at the end of the location specifier (i.e., that it should have been location /static/ instead), in which case the trailing part of the location and the alias target would be the same; in those cases it's always better to use root /parent/path/of/the/loc/spec instead, as is strongly suggested by the alias doc.
If you also have regex locations defined, there are some additional complications involved in the evaluation order. Here is an excellent resource explaining how nginx's selection algorithms work.
I don't see an error log defined in the config you pasted, but here are the docs for it: https://nginx.org/en/docs/ngx_core_module.html#error_log
If you have an error_log for your server context, try bumping the verbosity up to info or debug. If you don't have an error_log defined, define one. I was seeing permissions related errors at the default verbosity (error), but maybe you'll see more at higher levels.
This nginx doc has what you're looking for:
> When searching for a virtual server by name, if name matches more than one of the specified variants, e.g. both wildcard name and regular expression match, the first matching variant will be chosen, in the following order of precedence:
> 1. exact name
> 2. longest wildcard name starting with an asterisk, e.g. “*.example.org”
> 3. longest wildcard name ending with an asterisk, e.g. “mail.*”
> 4. first matching regular expression (in order of appearance in a configuration file)
> EDIT: Oh crap.. I think I see it now.. The scripts have static links to /
Damn.. I should've thought of that.
> Is there even a way to fix that?
You found the problem quick, nice! There are a few ways of handling this:
You could presumably use nginx's stream proxying feature:
https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html
However, I struggle to see a real use case for proxying it. If you expand on your requirements, you may get a better solution than this config, as I think we may have a bit of an XY problem here.
Assuming that your nginx instance is proxying to a separate server, it only ever knows if the ip:port are no longer responding to requests. It doesn't have an understanding that a server is down, or a service, but just that it is unable to communicate.
In the open source version of nginx, this sort of health detection and failover is handled passively, that is to say that a request must fail to a proxied server before nginx marks it as down. In the commercial version (and with some 3rd party modules), there are active health checks, which ensure that before a request from a client is proxied, the backend server it is proxying to is healthy.
In the open source version, these are tweaked in two places: in the upstream module, you can set how many fails it takes before an upstream server is marked as "down", along with how long it'll be "down" for, mark a server as backup, etc. (documentation here); and in the proxy settings that control which failures count against servers in an upstream group. Make note of any mention of "commercial" in the docs, as those directives require the paid version of nginx.
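For the passive checks, the relevant knobs sit on the server lines of an upstream block; the addresses and values here are illustrative:

```nginx
upstream backend {
    server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:8080 backup;   # only used when the others are down
}
```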
I've learned that the correct answer I want to hear from a junior guy is "I will go to nginx.org and look at the install guide" or "I will find a how-to guide on the internet." I don't want to hear "If you show me, I can do it" - every bad admin I've had has given me this answer, and translated it means "you will have to hold my hand for everything." Bitch, you have the eighth wonder of the world on your desk, the sum of all human knowledge that man has dreamed of for centuries, and you ask me how to do it?
Another option is to use the proxy_bind directive with the transparent flag. Should work for both Stream (l4) and HTTP (l7) proxying:
Though how the proxied application uses that IP may mess some stuff up.
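A sketch of what that looks like in both contexts (addresses made up; the worker needs the right privileges, and return traffic must route back through the proxy box):

```nginx
# L7 (HTTP)
location / {
    proxy_bind $remote_addr transparent;   # present the client's IP upstream
    proxy_pass http://192.168.2.20;
}

# L4 (Stream)
server {
    listen 3306;
    proxy_bind $remote_addr transparent;
    proxy_pass 192.168.2.21:3306;
}
```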
are we talking about layer 4 (tcp) or 7 (http) proxying?
For L4 (TCP) you can enable the PROXY protocol by adding the `proxy_protocol` parameter to the `listen` directive on the side that receives the traffic. On the side that forwards to the upstream, enable sending it with:

```
proxy_protocol on;
```

Then, for the receiving side to trust and apply the forwarded address, point the realip module at your proxy's address (an address/CIDR, not a variable):

```
set_real_ip_from 192.168.1.0/24;
```
see: https://nginx.org/en/docs/stream/ngx_stream_realip_module.html
For L7 (HTTP) you can use headers like `X-Real-IP`.
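For the HTTP case, that's the usual pair of proxy_set_header lines (the backend address is an assumption):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```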
The backup server will take over as soon as NO primary server is able to handle the request.
See:
https://nginx.org/en/docs/http/ngx_http_upstream_module.html#server
"marks the server as a backup server. It will be passed requests when ALL OF the primary servers are unavailable."
You could try one of these options:
1) Use the map module to convert the API keys to some variable (default 0 = not authorized, or 1 = authorized for known keys). You would then check this variable to decide whether to allow or deny the request. You could store the map in a separate config file, but the downside is that you would have to maintain two maps, if you also want to map the api key to a rate limit zone.
2) Write a simple backend service which would do the check and then use the X-Accel directives to 'return' the request back to nginx for further processing. The way this works is you proxy_pass the request to the service, and if the response from that service has the X-Accel-Redirect header, nginx will re-process the request in a new location matching the URI specified in the value of that header. There's also an X-Accel-Limit-Rate header which you could set in the response.
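A sketch of option 1, assuming the key arrives in an X-Api-Key header; the keys and backend are made up:

```nginx
map $http_x_api_key $api_ok {
    default          0;   # unknown key: not authorized
    "key-alpha-123"  1;   # hypothetical known keys
    "key-beta-456"   1;
}

server {
    listen 80;

    location /api/ {
        if ($api_ok = 0) {
            return 403;
        }
        proxy_pass http://127.0.0.1:8080;   # hypothetical backend
    }
}
```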
There is still the default server block configured in one of the config files; remove it. After that, append `default_server` to the `listen` directive of your own server block. Also don't use `""` for `server_name`, as it has a special meaning; the typical placeholder for catch-all servers is `_`.
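For example, a minimal catch-all might look like:

```
server {
    listen 80 default_server;
    server_name _;
    # Close the connection for requests that match no other server
    return 444;
}
```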
I'm not aware of an HA proxy solution, but I know that it works with streams and ssl_preread from nginx.
Based on (sub)domain name, you can select different upstreams.
https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
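A sketch of SNI-based routing in the stream context (the hostnames and upstream addresses are made up):

```
stream {
    # Pick an upstream based on the SNI hostname in the TLS ClientHello
    map $ssl_preread_server_name $upstream {
        app.example.com   192.168.1.10:443;
        mail.example.com  192.168.1.20:443;
        default           192.168.1.10:443;
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $upstream;
    }
}
```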
Are you familiar with the absolute_redirect directive? It sounds like what I want, but it also does not work correctly. They mention this as a possible solution here.
And by the way is there a possibility to debug those rules in nginx somehow?
It sounds likely that whatever php is executing, is not doing so quickly. Set up a new custom logging format which has these additional values:
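Something along these lines (the format name and log path are my own; the upstream variables come from the log module docs):

```
log_format timing '$remote_addr [$time_local] "$request" $status '
                  'rt=$request_time ua=$upstream_addr '
                  'us=$upstream_status urt=$upstream_response_time';

access_log /var/log/nginx/timing.log timing;
```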
This will confirm that when you proxy to php, things are slow, and not that nginx itself is slow. There are several things you can do to help alleviate that. You can set up a response cache in nginx (see: fastcgi_cache if you're proxying with fastcgi_pass), but make sure the content you're caching should be cached, and that you have the cache key set correctly.
For PHP, since 5.5 opcache has been available and I believe enabled by default. Check out the docs and make sure that your installation is using opcaching properly: https://www.php.net/manual/en/book.opcache.php
Once you've run down those paths, and determined that whatever you're running on php is slow and not nginx itself, start googling for <app> optimization guides.
I specifically use nginx for Windows version 1.21.1, though there have been updates since. I got it from here. Then I have a bat file that is just `cd "C:\Program Files\nginx-1.21.1"` followed by `start nginx` (the quotes are needed because of the space in the path), and I scheduled this to run at startup with admin privileges.
you're asking the wrong question. you should be asking "how do i make it so that my services are reachable via URL paths on the default HTTP port rather than each on its own port?" the answer is that you can configure your HTTP server to direct various virtual paths to whatever resources you want (either on the same server or reverse-proxy to a different server). if you're using nginx, read this. please understand that going to mydomain/service still requires you to "expose" a port, specifically the HTTP default port 80.
Your scenario of having different paths serve different front-end apps is fairly common, but how exactly to accomplish it depends on how you're hosting the front end. For example, if the app is served from Nginx you'll want to look into the alias directive.
Also, have you looked into the Vue Migration Build? Looks like you might be able to use it as a way to combine Vue 2 and 3 code in the same app.
Ah cool okay. Not sure that building from source would give you any benefit for open ports.
Disclaimer: I'm an nginx employee, and we recommend using the prebuilt packages from nginx.org as best practices.
So, to be clear, is your NGINX behind a private network? As in a standard ISP router?
Do you have a domain pointing to the IP of the machine with the NGINX?
If you are using https the port may or may not matter. By default, if you access a domain via https://example.com the port is actually 443, but that doesn't mean you can access it via https://example.com:5004.
What you may want to do is port forward requests from a desired port on the router to a machine inside that network, and then have the ports configured in the docker-compose file.
If you set the ports like this: 5005:443, this means that requests made to the host machine on port 5005 will be forwarded to the NGINX container on port 443. From there, what you need to do is have a simple SSL configuration that listens on port 443, as shown in this NGINX documentation.
From here just configure the rest of the reverse proxy to do whatever you want. Regarding your certificate question, the one from Cloudfare can be used, just remember to make it available to the NGINX container via a volume mount, for example, and have it correctly configured on the configuration you are creating for your use case (pay attention to the certificate key and certificate path).
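A minimal sketch of that SSL server block (the domain, cert paths, and upstream name are placeholders; mount your Cloudflare cert and key at whatever paths you configure here):

```
server {
    listen 443 ssl;
    server_name example.com;

    # Paths inside the container, e.g. provided via a volume mount
    ssl_certificate     /etc/nginx/certs/cert.pem;
    ssl_certificate_key /etc/nginx/certs/key.pem;

    location / {
        proxy_pass http://app:8080;
    }
}
```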
If you'd like me to take a look at your current configuration you can share it as well!
Fixing markup since old reddit doesn't support triple-backtick code blocks:

```
server {
    listen 80;
    server_name app-one.domain.com;

    location / {
        proxy_pass http://app-one:3000;
    }
}
```
Defining an upstream is most useful if you have multiple servers that you are proxying to. Additionally, defining an upstream allows you to set a few additional connection parameters, such as keepalive, fail timeouts, maximum number of connections, etc.
Check out the upstream module docs, and the server directive, to see if you're interested in any of those: https://nginx.org/en/docs/http/ngx_http_upstream_module.html
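For instance (addresses made up):

```
upstream app_pool {
    server 10.0.0.11:3000 max_fails=3 fail_timeout=10s;
    server 10.0.0.12:3000;
    # Keep a pool of idle connections open to the upstreams
    keepalive 32;
}

server {
    location / {
        proxy_pass http://app_pool;
        # Required for keepalive connections to upstreams
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```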
You can use the proxy_bind directive with the transparent flag to make it seem like the original client IP initiated the request, instead of nginx.
An nginx.com doc on this kind of thing, though a bit different than what you're looking to do I think: https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/
Nginx does have a strong open source community, but it is no longer a pure open source project, with some features only available in the commercial Nginx Plus version, specifically fine-grained cache purging. For that reason I'd stick to Varnish for FastCGI caching unless it's a very simple use case.
Haven't tried apache but nginx seems pretty easy and simple to me: Go to "Setting Up a Simple Proxy Server" on https://nginx.org/en/docs/beginners_guide.html and follow the instructions, you just need to use the proxy_pass instruction
Also check this guide https://www.privex.io/articles/setup-tor-hidden-service-website/
so in my example, /demo/* will redirect to the given path.
the location match rules in nginx have modifiers like ~, =, etc. for these cases.
https://nginx.org/en/docs/http/ngx_http_rewrite_module.html
might also be what you are looking for.
HTTP Basic authentication exhibits as a pop-up on the client, like you implemented with `auth_basic` in nginx. You would have to look at the documentation for what you're proxying to, to see if it accepts HTTP Basic auth. Some applications won't support it, but will get confused if you say, "Hey, here's this Authorization header, do something with it".
In this case, whatever is running on 127.0.0.1:2246 was either rejecting the Authorization header nginx was sending, or something else weird was going on.
You can add the `$upstream_addr`, `$upstream_status` and `$upstream_response_time` variables to your logging format (See: https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format). Doing so can help you determine if nginx is generating the error/unexpected response, or your backend application is.
As already mentioned. Apache is a good choice. For basic needs, you can also consider Nginx. Their documentation is pretty good once you get going. https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
Load balancing: https://nginx.org/en/docs/http/load_balancing.html
For business uses with advanced health reporting/etc, you can also consider Nginx Plus - though that's a licensed feature that's not cheap if you don't really need it.
For our case, we have a pair of Active:Active Nginx servers with a basic HTTP/HTTPS load balancer in front of them. They handle the security/proxy/actual load balancing.
Source: Use Nginx plus for a reverse proxy and love that is pretty much just works.
Fixing your code block for old reddit:

```
location /plausible {
    rewrite ^/plausible/?(.*) /$1 break;
    proxy_pass http://localhost:8000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
```
Nginx will fix some things with its default config, but it won't rewrite the response body unless you explicitly tell it to. There are two things you should look at:
It's best to have your application send the correct base URI or correct relative HTML, but you can do fixup in nginx if you need to.
Many people choose to use a subdomain, eg; `plausible.mydomain.com`, as it allows you to use / instead of a subdirectory.
Does this application use websockets? If so, try: https://nginx.org/en/docs/http/websocket.html
Review your error log, try adding $upstream_addr / $upstream_status / $upstream_response_time to your logging format to help ensure you figure out if nginx is returning an error, or an upstream.
I don't know what's best for you, just was curious what the setup was.
Try changing the following:
- proxy_pass http://phpmyadmin; + proxy_pass http://phpmyadmin/;
This will change the path that gets proxied from /pma/something to /something. However, phpmyadmin may not generate hrefs correctly for that new URL. It may try sending a link to /somethingelse instead of /pma/somethingelse.
If you can get your phpmyadmin docker container to accept requests in /pma/, that would be good. You can also run it on a subdomain (eg; `server_name pma.somehost.com`).
You can also try using nginx to rewrite the response HTML with the sub_filter module, though that can be tricky.
Since you don't provide any useful environment details, here's an equally bland boilerplate config:

```
server {
    listen 80;
    listen 443 ssl;
    listen 3000;

    ssl_certificate_key /path/to/key;
    ssl_certificate /path/to/cert;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```
Here are the nginx docs: https://nginx.org/en/docs/
> upstream: "http://127.0.0.1:3000/"
This looks like you still have the nginx container pointed toward localhost, where you might need to address it based upon the medianet IP address instead. Edit: other options, though I'm not sure which Docker version you're running: https://stackoverflow.com/questions/31324981/how-to-access-host-port-from-docker-container
Going back to your original nginx config, adding SSL is pretty easy.
https://nginx.org/en/docs/http/configuring_https_servers.html
Basically, add the proper port, add the `ssl` parameter to the `listen` directive (the old standalone `ssl on;` directive is deprecated), and point it toward the cert and key you're wanting it to use.
I'm seeing `*.domain.com` as your only entry in `server_name`.
If you want to allow domain.com as well your entry should be:
server_name *.domain.com domain.com;
If you want a deeper understanding on what nginx is choosing and why, you could always turn on debugging in the error logs. See: https://nginx.org/en/docs/debugging_log.html
`sub_filter` has three things that might be limiting it from working for you; see the docs.
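My guess at the usual three, sketched as config (the hostnames are placeholders and these are my assumptions, so confirm against the module docs):

```
location / {
    proxy_pass http://backend;

    # 1) By default sub_filter only applies to text/html responses
    sub_filter_types text/html;

    # 2) By default only the first match per response is replaced
    sub_filter_once off;

    # 3) sub_filter can't rewrite compressed responses,
    #    so ask the upstream for uncompressed content
    proxy_set_header Accept-Encoding "";

    sub_filter 'http://backend/' 'https://example.com/';
}
```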
nginx isn't just for proxying http(s) traffic and hasn't been for a while - it can proxy 'unknown' TCP/UDP traffic straight through to other backends. It would (should?) have no problem sitting there and passing MQTT traffic to mosquitto or whatever.
Only caveat is that nginx can't redirect those TCP/UDP streams based on hostname like it can with SNI (not all TCP traffic even uses hostnames as a concept) so you're limited to one recipient of non-web traffic when doing this.
From the `proxy_pass` docs -
> A request URI is passed to the server as follows:
> * If the proxy_pass directive is specified with a URI, then when a request is passed to the server, the part of a normalized request URI matching the location is replaced by a URI specified in the directive:
```
location /name/ {
    proxy_pass http://127.0.0.1/remote/;
}
```
> * If proxy_pass is specified without a URI, the request URI is passed to the server in the same form as sent by a client when the original request is processed, or the full normalized request URI is passed when processing the changed URI:
```
location /some/path/ {
    proxy_pass http://127.0.0.1;
}
```
root is absolute for the local server directory pathing; every time you invoke it, you set the root for that location (and any nested under it).
I bet you can get away with removing the root and it working, since you have root set in the `server` context.
There are two directives you can use to tell nginx where to look for local files: root and alias.
For example:
```
# Incoming URI: /images/path/to/file.jpg

location /images {
    root /usr/local/www/;    # looks for /usr/local/www/images/path/to/file.jpg
}

location /images {
    alias /usr/local/www/;   # looks for /usr/local/www/path/to/file.jpg
}
```
I'm not using meshcentral, but I think I added this line since I need to pass the real IP for some kind of logging. Try it out; if it doesn't work, you can always remove it. Here is the documentation: https://nginx.org/en/docs/http/ngx_http_realip_module.html
I agree, video tutorials are boring as hell. I do best when I'm working at my own pace with a good reference guide.
w3schools is a great resource to learn html, css, javascript, and the like. It gives examples of what each element does and is an invaluable reference even after you've mastered the concepts.
MDN is another great resource that I've been exploring as I learn Webassembly, WebGL, and WebXR for my own web-based game development.
Personally, I started by writing a boilerplate index.html in notepad, launching it in a browser, and following the tutorials on w3schools. Eventually I started self-hosting entire websites with nginx as I continued to learn.
Basically, just have fun with it. Make your own challenges and explore.
> disable non-SNI requests
If your nginx is up to date, which it should be, you can use ssl_reject_handshake, which will abort the TLS handshake without sending a certificate if the SNI doesn't match any of your server_names.
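The usage is minimal; something like this (per the ssl module docs):

```
# Catch-all: abort the TLS handshake when the SNI matches no named server
server {
    listen 443 ssl default_server;
    ssl_reject_handshake on;
}
```

Your real server blocks with their certificates sit alongside this one; only handshakes that match one of their `server_name`s get a certificate.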
Use the Mozilla config generator to get an up-to-date list of protocols and cipher suites, the ones listed in the "Hardening Nginx" article are out of date now.
I know it's not an answer to your question, but this is the reason why I typically use upstream repos in case they offer a stable version of certain software. Specifically, I do this for nodejs, gitea and nginx. Upstream releases (even for stable branches) tend to be faster and I do trust the packaging teams of these software to be on top of releases as well as security.
Here is the nginx.org doc on setting up HTTPS: https://nginx.org/en/docs/http/configuring_https_servers.html
If after you've checked your config, you're still having trouble, it could be one of the following:

- Check the full running config with `sudo nginx -T` (case matters)
- `server_name` set incorrectly, so the default 443/ssl server context is answering, returning the self-signed cert
- `listen` set incorrectly, and the default_server is answering instead of your server context
set incorrectly, and the default_server is answering instead of your server contextTry removing the try_files
, and then check your nginx access/error logs. It should tell you if it's unable to find the file you're trying to access. That's controlled by the log_not_found directive, which defaults to on.