> Restart Apache in every 10 requests? :) Oh Lord.
You laugh, but Apache actually has first-class support for this feature: MaxRequestsPerChild.
That right there probably solves some 99% of all issues people have with node. But, you know, muh async and web scale, so no Apache.
This is a setting in the web server. In Apache, for example, it's the DirectoryIndex directive: http://httpd.apache.org/docs/current/mod/mod_dir.html
By default, that is set to index.html. You could change it to whatever file you choose - index.php, script.cgi, or even bite.me.
It's useful to have a DirectoryIndex so that if someone doesn't explicitly choose a file (no one ever does when going to a web URL), it chooses the default file to display.
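For example (filenames here are just illustrative), a single line in the server config or a .htaccess file does it:

DirectoryIndex index.php index.html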
Port forwarding essentially exposes the device in question to the public network. It's really just a routing trick.
Reverse proxying requires the client to actually connect to the proxy, which then connects to the resource in the internal network.
With reverse proxying you can inspect the incoming request before deciding whether to let it through. A wonderful extra layer of security. Any denial-of-service attack will hit the proxy and not the application server. A proxy is simpler, and will withstand more traffic than Exchange.
And you can modify the outgoing content as well. One favourite of mine is to use Apache as a reverse proxy in front of a web application. The proxy server deals with encrypting the traffic, lessening the load on the application server.
I've set up a clustered Apache solution that proxies all content that is supposed to be published on the internet. A single place to block IPs that attempt port scans and brute-force attacks on the servers.
Finally, it's easy to keep links intact on the proxy, no matter where the internal resource is moved. Simply update the address in the proxy, and your end users won't even notice that something moved. Services that require actual downtime for updates can be pointed to a "service down" page, where you can keep your end users informed on how the maintenance break is going.
BTW, I recommend Apache and mod_proxy over that Citrix thingy. Unless you have a support contract with Citrix, of course.
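For what it's worth, a minimal sketch of that kind of SSL-terminating reverse proxy vhost (hostnames, paths and the internal IP are made up; mod_ssl, mod_proxy and mod_proxy_http need to be loaded):

<VirtualHost *:443>
    ServerName app.example.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/app.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/app.example.com.key
    # Everything is forwarded to the application server on the internal network
    ProxyPass        / http://10.0.0.10:8080/
    ProxyPassReverse / http://10.0.0.10:8080/
</VirtualHost>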
Nothing except that there's some bug somewhere within Alienblue or the website you were looking at.
"It's working!" is the default Apache page when it's first installed. Seems likely that you clicked on a link to some site that had a new/broken apache config file.
And now, as of version 2.3, NameVirtualHost is deprecated and doesn't do anything. Apache just configures name-based virtual hosting automagically based on your VirtualHost stanzas.
http://httpd.apache.org/docs/2.4/mod/core.html#NameVirtualHost
So you figured out how to configure it properly just in time to not need it anymore.
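For anyone landing here later: in 2.4, two plain VirtualHost stanzas on the same address (hostnames below are placeholders) are all it takes, and Apache picks the right one by the Host header:

<VirtualHost *:80>
    ServerName site1.example.com
    DocumentRoot /var/www/site1
</VirtualHost>

<VirtualHost *:80>
    ServerName site2.example.com
    DocumentRoot /var/www/site2
</VirtualHost>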
A hybrid thread-event MPM has been part of Apache for ages, and it's production quality in 2.4. See mpm_event for more details. Of course, it's still vastly outperformed by nginx in some scenarios, but it's good to know that it's there.
Things I have so far (about 10 minutes work, and some more time on writeup)...
Easy suggestions:
Beyond all that, I presume I don't have to tell you that when live, you should keep the bulk of coins offline (in cold storage), do I?
Expected: A tool that lets me generate fake Apache access and error logs.
Found: Tool to create Apache LogFormat syntax
Still a nice tool :-)
> edit: Bonus points for a browser-based download and upload solution.
The solution you are looking for is WebDAVS. It's WebDAV + SSL.
It's a standard way to manage files over HTTP. All major operating systems support mounting it as a hard drive or volume. You can use a variety of clients to access it, and you can upload/download files via a browser.
In addition to this you can use it to augment a programmed web-based uploader/downloader application.
You will have to use Apache with mod_dav. http://httpd.apache.org/docs/2.4/mod/mod_dav.html
You can use any sort of password authentication or user management scheme that Apache supports. It's extremely flexible and powerful.
The major downside is that if you are using file-backed storage, all the files uploaded/downloaded must be readable/writable by the Apache process. So the per-user POSIX file system permissions you can use with NFS or SFTP won't be useful.
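To give a rough idea, a WebDAV share can look something like this (paths, mount point and password file are placeholders; assumes mod_dav, mod_dav_fs and mod_auth_basic are loaded):

DavLockDB /var/lib/dav/lockdb
Alias /dav /var/www/webdav
<Directory /var/www/webdav>
    Dav On
    AuthType Basic
    AuthName "WebDAV"
    AuthUserFile /etc/httpd/webdav.passwd
    Require valid-user
</Directory>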
Definitely this makes it easier. But the author of that article keeps harping on how it's evil they included "dynamic images" and how many images they allow on a single note and stupid stuff. The only problem I see is that FB should rate limit their hits to a single domain. I mean, really, that's scraping 101.
If as a domain owner you're afraid of an attack like this, then just limit the connections allowed per ip. Heck, drop it down to 2. 2 * 112 = 224. Certainly any server these days should be able to handle 224 requests for a file, right? Especially if you also limit the bandwidth per connection.
I'm not a pro but here's my input:
Because a single public IP for multiple services means NAT, your only "native" (as in "included in the network stack") way to make multiple domains available is to use separate ports.
Now what you want is to separate traffic at the application layer, i.e. look at the "Host" header of an HTTP request on port 80, and redirect it accordingly. That means some piece of software that properly handles HTTP requests, and sits in front of your two VMs to decide which one to forward a request to. In your case this results in a third virtual machine running only a proxy. The DNS entries for both sites should point to that third virtual machine.
Which means either an HTTP proxy (Varnish is a good example, and also implements excellent caching), or a web server in proxy mode (e.g. Apache with mod_proxy or nginx).
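A bare-bones sketch of what that third machine's Apache config could look like with mod_proxy (hostnames and internal IPs are invented):

<VirtualHost *:80>
    ServerName site1.example.com
    ProxyPass        / http://192.168.1.10/
    ProxyPassReverse / http://192.168.1.10/
</VirtualHost>

<VirtualHost *:80>
    ServerName site2.example.com
    ProxyPass        / http://192.168.1.11/
    ProxyPassReverse / http://192.168.1.11/
</VirtualHost>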
php_value and php_flag are generally used for ini settings.
Use SetEnv: http://httpd.apache.org/docs/2.2/mod/mod_env.html
The variable is then exposed in the $_SERVER superglobal, e.g.:
In the Apache config: SetEnv TEST_VAR 123
In any *.php file: $_SERVER['TEST_VAR']; // 123
The explanation doesn't make a lot of sense, because if you are using a Tor browser, your (client's) public IP isn't visible either. So even if somehow the captcha app had a bug that caused it to send packets via a real interface instead of via Tor, it wouldn't know the IP of the FBI machines to send the packets to.
I think the more likely explanation is the one toward the bottom of the article; that some error caused the server to put up a 404 or 403 error page or similar, which included the site's real IP address (like this).
This is directly in the docs (for apache 2.2) on how to configure virtual hosts.
> Although addr can be a hostname it is recommended that you always use an IP address and a port, e.g. NameVirtualHost 111.22.33.44:80
It's also a deprecated directive in apache 2.4
Seems handy. You might want to include a bit for people running their own servers reminding them to put this in the apache config and disable .htaccess
See mod_status. It is usually IP address restricted.
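Something along these lines in the config (the allowed network is an example, Apache 2.4 syntax):

<Location "/server-status">
    SetHandler server-status
    Require ip 192.168.1.0/24
</Location>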
What I'd recommend doing is running Apache on the same box, with mod_proxy_ajp. Apache would then front-end the connection on 80/443, even terminate SSL connections there, and communicate with the Tomcat instance over the AJP protocol. Here's a decent writeup: http://httpd.apache.org/docs/2.2/mod/mod_proxy_ajp.html
...and here's a more real-world config example. http://www.zeitoun.net/articles/configure-mod_proxy_ajp-with-tomcat/start
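The heart of it usually boils down to a couple of lines in the vhost (8009 is Tomcat's default AJP port; adjust as needed):

ProxyPass        / ajp://localhost:8009/
ProxyPassReverse / ajp://localhost:8009/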
Good luck!
I would:
Or if you can easily recreate the stack on a fresh server, that might be better, since you won't experience any downtime. I would make a snapshot of it.
Then:
Then:
* Use Apache Bench and/or Loader.io to do performance testing. You could also use New Relic for server monitoring.
Creating these tests servers shouldn't cost more than a few dollars if you destroy them after you run your tests.
This is a 5-year-old Apache bug, exploitable only on outdated Apache hosts (Apache 1.3) run by careless server admins who use the
<Limit GET POST>
require valid-user
</Limit>
directive to disable dir access.
You're lame, it's referenced in every script-kiddie contest out there.
In fact it's not even a bug:
http://httpd.apache.org/docs/2.2/mod/core.html#limit
You could try using the vhost_alias module to map incoming hostnames to directories, e.g.
VirtualDocumentRoot /var/www/vhosts/%0/public_html
...then use symlinks to point all the additional hostnames to the "real" document root. That way, when you get a request to add a new domain, you just SSH into the machine and create the new symlink, e.g.
ln -s /var/www/vhosts/www.existingdomain.com /var/www/vhosts/www.newdomain.com
or create a tool that lets you manage the symlinks from a web-based dashboard. Regardless, no Apache config/restart necessary.
(edit:fixed typo)
Works with my local apache install. Anywho, they're not HTML includes, they're SSI. http://httpd.apache.org/docs/2.0/howto/ssi.html
You can do more with them, too, like set and call variables and run if/else statements.
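A tiny sketch, assuming SSI is enabled for the file (Options +Includes plus the INCLUDES output filter) and a header.html actually exists:

<!--#include virtual="/header.html" -->
<!--#set var="name" value="world" -->
<p>Hello, <!--#echo var="name" -->!</p>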
In general this sort of 'trickery' - also commonly referred to as friendly URLs - is usually achieved through Apache's rewrite module or similar on other server software.
Essentially it maps incoming requests to resources, without hindering the end-user with the mapping process. Whether /website.php/Joe is more accessible than /website/Joe leaves room for debate.
As for your example: create a .htaccess file - right next to your website.php - with the following:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^website/(.+)$ website.php?username=$1 [QSA,L]

(In a per-directory .htaccess context the pattern is matched against the path without its leading slash, hence no "/" at the start of the rule.)
Now when one visits /website/anything, it will internally re-route the request to website.php?username=anything.
Note that there is no straightforward way of letting Apache's rewrite module know which username is associated with which ID. So instead of using /website/Joe, /website/29 would make more sense.
For more information about URL rewriting see Apache's Rewriting Guide
If anyone can browse any given directory in your application, this is probably something you should fix first. If you're using shared hosting look into: http://httpd.apache.org/docs/current/howto/htaccess.html
When that's fixed, you can have a look at phpdotenv (https://github.com/vlucas/phpdotenv) that stores configurations in a ".env"-file and loads it into the $_SERVER global.
Edit: If you're going to put passwords or other secrets in the $_SERVER global, it's probably a good idea to encrypt the secret/password with a symmetric key before inserting it in the .env file. You can decrypt it in PHP by storing the symmetric key in a PHP file. The reason for this approach is that if you accidentally deploy a PHP script containing a call to phpinfo(), it will output your $_SERVER globals among other things.
For servers like apache - sure they start as root, but don't they then setuid to the apache user -
http://httpd.apache.org/docs/current/misc/security_tips.html
Wouldn't this theoretically limit the scope of memory they can traverse with this bug, only to memory that the apache user can access?
Could just be that your webserver doesn't allow following symlinks. For apache, see this manual page.
Alternatively, check out this article for a quick and dirty way to serve files over HTTP. (TL;DR: python -m SimpleHTTPServer 8080)
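If it is Apache and symlinks are indeed the issue, the relevant bit is something like this (directory path is an example):

<Directory /var/www/html>
    Options +FollowSymLinks
</Directory>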
Because in order to be served content, that content needs to be reachable at the specific URL you're visiting/requesting.
It works with tinypic, because they've configured their server to return that gif image as long as an otherwise valid URL ends in an image format extension tinypic considers valid. http://i34.tinypic.com/1zywhs.bmp works, as do jpg, jpeg, png, tiff, xcf, flv, wav, doc, txt, bin and probably more. dcsssucksabagofdicks, however, does not work, because they didn't anticipate anyone ever using that extension, and it's not on their list of allowed/supported extensions.

You can make almost any content reachable at almost any address that you control (with few limitations). If you configured your server to support www.mywebsite.com/pics/goatse.{any number of extensions you like}, then it would work for you as well. It's also possible to tell a server to allow a catch-all element in URLs, which would mean that even www.mywebsite.com/pics/goatse.{made-up extension that you didn't specify beforehand} could be made to work. But you have to configure your server to allow that. (Tinypic have chosen not to allow completely arbitrary extensions.)

If you use Apache as your webserver, you can do a lot of that stuff with mod_rewrite. Other HTTPD programs may have similar features.
It does, btw. happen that incorrectly configured servers supply the wrong MIME type, and then things may or may not work, depending on whether the user's browser considers extensions as a substitute for MIME types.
> If you were to combine all of the ones served from websiteninja.pro into a single gzipped file, it would load way faster.
Or enable output compression based on MIME type (like, with mod_deflate). Then you don't have to worry about remembering to zip everything, don't have to unzip stuff to make changes, etc.
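A typical snippet, assuming mod_deflate is loaded (the MIME type list is just the usual suspects):

AddOutputFilterByType DEFLATE text/html text/css text/plain application/javascript application/json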
Honestly, anyone I hired to do web development should know all this.
If this is an apache server, you might use a .htaccess file to block or redirect requests where the http referer doesn't match your domain. There's a lot of different ways to write it.
Here's some blocking and redirect examples from the Apache documentation, and another blocking example.
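One hedged example of the mod_rewrite flavour (yourdomain.com is a placeholder): deny image requests whose referer is neither empty nor your own site:

RewriteEngine On
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?yourdomain\.com/ [NC]
RewriteRule \.(gif|jpe?g|png)$ - [F,NC]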
Well, there was a perplexing incident involving an open-source web server named Cherokee, which came under fire for the terrible sins of being named after a Native American tribe (though that other web server is fine), and for having images like this as logos. Because, you see, they have feathers in their headbands, and something about infantilizing depictions of Natives.
> Internet presences (websites and social media pages) of mass media outlets (as well as of government bodies, municipal enterprises, institutions and organizations, and of private companies selling goods and services in Ukraine) must be in the state language.
They're really making a mistake with this one. They can fuck right off out of our internet.
> Alongside the Ukrainian version, versions in other languages may also exist. In that case the Ukrainian version must load by default and contain no less information, in volume and content, than the other-language versions.
And this is plain nonsense. Multilingual sites use content negotiation, and on a genuinely multilingual site the default language is usually English.
Just add a redirect in the HTTP virtual host like this:
<VirtualHost *:80>
    ServerName www.example.com
    Redirect / https://www.example.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example.com
    # ... SSL configuration goes here
</VirtualHost>
This should redirect to the HTTPS virtual host (so you don't have an infinite loop).
These steps worked for me on a Centos server with Apache. You'll need to be the root user for all these steps.
Don't use .htaccess if you can avoid it. If you have access to do so, it is better to edit the apache config at /etc/httpd/conf/httpd.conf
and add these lines to "Section 2", the main server configuration (make a backup of the file first, in case you mess it up!):
<IfModule headers_module>
    Header set Clacks-Overhead "GNU Terry Pratchett"
</IfModule>
I know many people are using X-Clacks-Overhead, but the X- prefix is deprecated.
Check that the mod-headers module is enabled. Run this command:
grep "LoadModule *headers" /etc/httpd/conf/httpd.conf
If the above grep command gives you output like LoadModule headers_module modules/mod_headers.so you should be good to go, but if there is no output at all, you will need to install the mod_headers module and add the above line to the Apache config. I haven't tried it, but yum install mod-headers might work. If the line is commented out with a # at the start, use your favourite editor to uncomment the line.
Restart apache by running: service httpd restart

Test that it all works. Running curl -I www.yourdomain.com should print out a bunch of lines. If you see Clacks-Overhead: GNU Terry Pratchett then everything is working.
Unfortunately, Apache stopped distributing Windows binaries a while ago. The easiest method I found is to go here: http://httpd.apache.org/docs/2.4/platform/windows.html and click through to Apache Haus: http://www.apachehaus.com/cgi-bin/download.plx
Download the VC11 version
Read the INSTALL.txt file on how to install it (it's not an MSI; you have to set up httpd as a service manually).
If you don't need to vary the temperature readings, then ab is a good choice.
ab -n 10000 -c 10000 http://<your-site>/
Apache Benchmark is installed with Apache usually, but you can also get it as a standalone.
<iframe>s are still occasionally used, typically to embed external content (for example, to embed a YouTube video inside a page).
History lesson!
Actual frames are long, long extinct (I haven't seen one in the wild on a new website in probably 10 years, though I'm sure someone will come up with counterexamples). They were useful back when we wanted every page to have a copy of a site's navigation (aka every website on the internet these days) but the ability to dynamically render a page wasn't ubiquitous the way say PHP is now.
Frames started dying out as soon as even cheap (or free) web hosts started supporting first Perl and SSI, and then PHP. You could now easily have a header.php and footer.php and render them on every page load, which had big advantages over frames:
There is actually one place frames are still seen these days, which is when you use a "domain masking" on a web host, to mask the true location of a website by putting it inside a 100% width and height frame.
At my company, we use a framework in the build process which concatenates JS files but does not minimize, because:
debugger; statement. This is very annoying.

If transfer bandwidth is your main concern, why not look at mod_deflate?
> Despite the tons of examples and docs, mod_rewrite is voodoo. Damned cool voodoo, but still voodoo.
>
> -- Brian Moore

That's from Apache's official documentation on mod_rewrite.
> And yet you still provide no substance to the topic or any alternative answers.
I already left a comment in reply to the OP.
> Why don't you prove my answers wrong?
It's common knowledge. I was trying to give you a nudge in the right direction, not win an argument. But if you insist:
First off, .htaccess: here is the Apache documentation describing how it's a misconception that .htaccess needs to be used for URL rewriting, and that you should avoid using it whenever possible.
Next, mod_rewrite: you're starting with the assumption that you have a "real" URL that corresponds to a particular script, and that you must rewrite the URLs the user sees into the "real" URLs. This is just not how most systems work. It happens to be how PHP works because it dispatches using the filesystem. But that's just a quirk of PHP. It doesn't mean that it's the norm. For instance, if you use a Python framework, the request gets passed to the WSGI handler, which then dispatches the request however you'd like. No URL rewriting involved.
Next, PHP routers. Don't tell a newbie to look into PHP routers unless you've established that they are using PHP. Just explain the concept. Otherwise you're just going to confuse them. The only system mentioned so far is how Reddit works. Reddit doesn't use a PHP router.
Finally: Reddit itself. Reddit isn't written in PHP, it doesn't use a PHP router, and it doesn't use URL rewriting. Its source code is available here if you don't believe me.
> Because the upvotes don't agree with you.
A grand total of two people besides yourself gave you up votes. It's hardly proof you are right. If you decide what is technically correct based on Reddit up votes, you're going to end up with a lot of confused ideas.
Add ProxyRequests Off to your Apache config. See this page.
mod_proxy can act as a reverse proxy just fine with that setting. Unless you're intentionally setting up a forward proxy with Apache, you should never turn ProxyRequests on.
If I'm not mistaken, you'd have to use something like mod_rewrite to "rewrite" the URLs without .html to something with it.
Depending on what framework you're using, you could configure your routes to answer certain requests in certain URLs. But configuration of that is totally dependent on framework (although all they look similar).
Or the silly way: move each page to a directory with the page of the name and change them to index.html (e.g. page.html -> page/index.html).
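If the server is Apache with mod_rewrite available, a rough .htaccess sketch (untested, assumes the .html files sit right next to it) would be:

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.+)$ $1.html [L]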
Same here, Apache has a 300 sec timeout, but I guess something else applies to telnet sessions.
More googling for me, no dice so far.
Edit : I fucking hate linux forums btw - http://www.webhostingtalk.com/archive/index.php/t-36898.html
> LOL... just keep using telnet and sooner or later, someone will hack you and then you'll have a really big time out... > > > SSH SSH SSH SSH SSH...
Does it not occur to these idiots that people might be trying to do something other than use telnet in place of ssh???
Sigh.
Edit 2 :
Pretty sure this is the culprit, for me anyhow - http://httpd.apache.org/docs/2.3/mod/mod_reqtimeout.html
Check out /etc/apache2/mods-enabled/reqtimeout.conf if you're using that module.
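For reference, that file normally contains a single directive; the numbers below are only an illustration of loosening the limits, not a recommendation:

<IfModule reqtimeout_module>
    RequestReadTimeout header=60-120,MinRate=500 body=120,MinRate=500
</IfModule>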
You configure your webserver to do this.
Apache: http://httpd.apache.org/docs/current/mod/mod_deflate.html
Nginx: http://nginx.org/en/docs/http/ngx_http_gzip_module.html
IIS: https://msdn.microsoft.com/en-us/library/ms690689(v=vs.90).aspx
Because your web server software is configured to serve the files from a certain directory, and this is set to "public" in your case.
E.g. for Apache you use the "DocumentRoot" configuration: http://httpd.apache.org/docs/2.4/mod/core.html#documentroot
What you're looking for is a reverse proxy.
http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
I haven't touched Apache in forever, but if you were using nginx I could be a little more helpful.
For web load balancing, you need to set up a reverse proxy. The reverse proxy will direct web requests to back-end nodes and relay their responses back to the clients. The balancing method and node weighting are configurable, and there's no need to make each back-end web node HA/fault-tolerant. Build the reverse proxy machine to be HA and you'll have your bullet-proof load-balanced web cluster.
Since we use VMWare here, I just have VMWare handle the fault-tolerance of the reverse proxy host. That way, there's no need to mess with Linux-level fencing, state syncing, etc.
A couple of other, possibly simpler options:
Static site generator is another good option but may be more complex. I am a software engineer and security professional, so I'm recommending against PHP or another scripting language (or WordPress, which is notoriously insecure) because if you don't actually need them, they introduce a lot of other problems and risks.
> Will the server just have as much space as I have on the SD Card?
Yes, it can! Provided your root partition takes up all of the space on your SD card, apart from the small /boot partition, it will be able to use all of the space on the SD card.
> And how can I enable the server to use a usb hard drive for storage?
This depends on whether you would like to use Apache, nginx, or another web server, such as lighttpd. Personally, I would recommend nginx, as it's very light on resources. If you want to change your web server root directory for nginx, and you had a USB hard drive mounted at, say, /myusbdrive, you would change the root field in the nginx configuration file (or your virtual host configuration file) to something like this:
root /myusbdrive;
This would then tell nginx to serve out files placed in /myusbdrive. This is simplified, but you could modify this field to serve out any directory you would like. More information for nginx may be found here.
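Put in context, a minimal server block (port and paths are examples) would look roughly like:

server {
    listen 80;
    server_name example.com;
    root /myusbdrive;
    index index.html;
}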
For Apache (again, I highly recommend nginx; Apache is much more resource-heavy, so only use it when Apache has a feature you need that nginx does not support), you would modify the DocumentRoot field as such:
DocumentRoot "/myusbdrive"
This would tell apache to serve out files located in /myusbdrive. More information for apache may be found here.
You already have apache running so just use that with a third vhost on 9090:
<Proxy balancer://mycluster>
    BalancerMember http://127.0.0.1:8081
    BalancerMember http://127.0.0.1:8082
</Proxy>
ProxyPass /test balancer://mycluster
derived from http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html, which has great examples.
Make sure it's up to date. Every extra bit of software increases the risk of having a security problem on a system. Apache is targeted a lot, but also it's actively maintained.
See here, for example (had customer having this): https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0226
... or here for lists: http://httpd.apache.org/security_report.html
You need to run an HTTP daemon on your home computer to listen for connections and serve up the files, then you need to point your website's DNS entries at your home computer's IP address. This is not advisable, for a lot of reasons, by the way.
Apache is the de facto standard HTTP daemon, though there are tons.
I haven't read through the code, but very few of these are actually errors specific to unsafe languages (which are only getting more secure, especially with Intel MPX being released). Only one of these vulnerabilities is recent.
If you check Apache too, their record is far from squeaky clean: http://httpd.apache.org/security/vulnerabilities_24.html
Finally, this project only has 58 commits and very few users. At this time it is certainly interesting, and the developers shouldn't give up, but it's way too early to even suggest it as a replacement for Nginx.
I'd also say it's a significant long shot to call Nginx "a horrible security nightmare". WordPress, Tumblr, and Instagram apparently use it, and if it truly were, they wouldn't.
If you're willing to use subdomains, you can use the apache2 virtual host files and get it working. (http://httpd.apache.org/docs/2.0/vhosts/examples.html)
Failing that, you could use a rewrite/redirect to direct from that location into another virtual site. (http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html)
If you're using an Apache front-end, mod_rewrite can save the day for you. You can redirect old broken links to valid links, and have a catch-all rule for all other urls to a standardized 404 page you create. You can also grep the logs to find out which links people are accessing that result in a 404 to help you write the rules.
Apache has had something like this, in the form of mod_speling, for over a decade. It is included in the standard distribution but I've never heard of anyone actually using it.
It doesn't sound like you need DNS advice, really. I think you have this confused with Virtual Hosts. DNS is what points the domain to your server's IP address. From there, Apache determines which site to serve based on the host name used. This is a good example for your situation. I generally just add a new file in /etc/httpd/conf.d/ called vhosts.conf and add all my VirtualHosts there.
Regarding myth #5 - each HTTPS site does NOT need its own IP address (caveat follows).
There's an extension to SSL/TLS called Server Name Indication (SNI) which allows name-based virtual hosts to be used.
Not sure about other servers, but Apache >= 2.2.12 supports it. Oddly enough, even though IE7 supports it, I don't think IIS does.
The only catch is that some older browsers (cough IE6 cough) don't support the extension, so they'll end up getting the default ssl virtual host even if they requested something else. With IE6 dying though... IE7+, FF2+, Opera 8+, Chrome, all support it.
You are understanding it right. You can redirect things based on whatever you want with mod_rewrite, unlike what the first comment suggests about it being used just for removing extensions; it gets used for a lot of things depending on your needs. What you are seeing is likely that it shows up in the headers. With Apache you make the redirects in the .htaccess file; you can read and learn more about it and how to use it here: http://httpd.apache.org/docs/current/mod/mod_rewrite.html
It is quite extensive and takes a bit to master but it is very useful!
What are you trying to achieve? A robots.txt shouldn't be used to try and prevent anyone from seeing your site. They only apply to bots, and only bots written to properly comply with Robots Exclusion Protocol. Then it will see this file, read it, and act appropriately. A robots.txt file gives you 0 security because it doesn't apply to browsers.
If you're attempting to block actual traffic to a site, then you want to use .htaccess files.
Yep. That happened in httpd 2.2, released in 2005. The split required changes to your config, so not knowing what those mean tends to be an indication you weren't doing much Apache httpd administration before that. :-)
HostnameLookups hasn't been on by default in Apache for a long time, so no, not "generally".
It's very slow and unnecessary for most sites to do at request time, plus it has a noticeable timeout. If you enabled it on a public server you would get a huge number of complaints.
Not all IP addresses even have a reverse DNS entry. If it's needed for logging, it is often resolved afterwards in the background.
> I wrote the website in notepad using standard very simple html
That's awesome, but sounds like it's time to level up. You can't do this with just HTML, except for a couple of cases...
Those would probably be the simplest solutions. Next simplest would be to see if your site supports PHP, and if so, make all of your pages PHP pages, replace the banner/menu code in each of them with <?php include('banner.php'); ?>, and put the banner contents in a file called "banner.php".

If you want to see if your server supports PHP, create a "test.php" file, put this in it: <?php echo 'i know kung foo'; ?> and load test.php in a browser. If the page just says "i know kung foo" then your server supports PHP.
Firstly, you should read up on how DNS works and what the hosts file does. In a nutshell, all it does is tell where a domain should point; it cannot specify where a domain AND a directory should point. If you want the Apache Virtualhosts to be populated automatically without having to define a Virtualhost for each directory, you can use Dynamic Virtualhosts. Based on the ServerName variable you can specify where Apache should look for the webpage.
See
Though, to access each of these folders you will still have to add the directory names to hosts file. For example, you have directories
You have to configure apache to use
UseCanonicalName Off
VirtualDocumentRoot /var/www/%0/
And after this you have to add the domains to your hosts file.
If this is something you have to change a lot you can try writing a script which generates the hosts file based on your web folders, or, if you consider yourself an advanced user and you're using dnsmasq as a local DNS forwarder, you can specify that all requests to *.dev point to 127.0.0.1 by adding
address=/dev/127.0.0.1
to your dnsmasq configuration file.
After this you should name your folders
and edit Apache Virtualhost configuration as
UseCanonicalName Off
VirtualDocumentRoot /var/www/%1/
Now you will be able to access all folders by going to $foldername.dev
To see more information about %0 and %1 see mod_vhost_alias manual page which I linked above.
Look up virtual hosts.
I usually use Virtualmin to give me a little control panel to play with; it does it in one shot. But if you just want a simple site you can do it by just editing the conf files in Apache.
You probably have Apache (if you're on Linux) with LimitRequestBody capped: http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestbody. Even if you find a way to change the value (I don't know Aruba), it's quite likely they also have external controls limiting these things.
Sorry I'm a little rusty on Apache since I've been working with nginx lately but I think I can push you in the right direction.
Where you want to reference these variables from is the Environment Variables: http://httpd.apache.org/docs/2.2/mod/mod_env.html#setenv
Then to access them in PHP http://php.net/manual/en/function.getenv.php
If you're using Symfony2, CodeIgniter, or some other framework, they may have other ways to access the environment variables.
This is the standard for variables that depend on the environment the program is run on. Hopefully that can unblock you :)
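A minimal sketch of the whole round trip (the variable name is made up):

In the Apache config or .htaccess: SetEnv DB_PASSWORD secret

In PHP: $password = getenv('DB_PASSWORD'); // "secret"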
The only sensible options for this are to:
a) set the credentials as environment variables for each environment, which is simple to do without a tool (set them in the Apache config, and grab them like this, for example), or more conveniently via a tool like dotenv.
b) and/or git-ignore your CMS/framework's settings file, and create it on each environment, outside the web root - for example, you could keep it in /etc/yourapp/settings.php on each server. Then after each deploy, symlink it from that 'outside' location to wherever it's supposed to be normally - ln -s /etc/yourapp/settings.php /var/www/yourapp/settings.php.
Any solution which involves committing your production credentials into version control is a bad solution - nobody can stop you doing it, but know that it's bad. And if you wanted to do something bad, then why even ask for suggestions...
Well the first thing is to define what you are referring to with "a single server". Normally I would think a server completely in your own control (either a VPS or full dedicated), however you then talk about increasing your costs of hosting in reference to having multiple accounts on the server.
If you are talking about your own server, then the key question is how you have Apache (assuming Apache) running. There are generally two main methods:
For the first method, this has the advantage that you need to specify which files/directories are writeable by a script running on your site. However, in this setup, usually ANY site can modify ANYTHING set writeable, even if it is on a different account.
For the second method, a script running on the site can write to any file or directory on the account, as it is running as the same user who owns them. However, this means that siteA cannot write to anything in siteB (by default). This is the situation where separating the sites into different users makes a difference. If siteA gets hacked, ONLY siteA is affected. Hosting accounts on servers running cPanel are usually set up this way. My main hosting server uses this.
I also run a server with Virtualmin that does this too, so I don't have to pay the $15/month license fee for cPanel on a VPS.
If you want to just manually add it in to an existing environment, then take a look at suEXEC which is what sets apache to behave this way (scripts run as their owner). I have never manually set this up myself, so can't offer much more than the link.
Once you have that setting in place, then it is just a matter of creating a new user for each site.
It might be worth your time learning even basic PHP (assuming you don't know it yet).
The way you described it, you can implement your solution using an Array instead of JSON, and let PHP generate the necessary HTML. That will save your pages the extra request that they have to make for the JSON file and the client-side processing to get the contents of the JSON.
You can also look into Server Side Includes if you're using Apache or whatever is appropriate for the web server you're using.
The URL Rewriting Guide has some examples under "Time-Dependent Rewriting:"
RewriteEngine on
RewriteCond %{TIME_HOUR}%{TIME_MIN} >0700
RewriteCond %{TIME_HOUR}%{TIME_MIN} <1900
RewriteRule ^foo.html$ foo.day.html
RewriteRule ^foo.html$ foo.night.html
So I'm guessing you want something like this
RewriteEngine on
RewriteCond %{TIME_MON} -eq 4
RewriteCond %{TIME_DAY} -eq 1
RewriteRule ^/$ april-fools.html
You'd be able to host the game on your PC if you set up a local server - take a look at http://httpd.apache.org/ . You could then use something like http://www.noip.com/ to get a friendly address which would always point to your IP.
Note that I have never set up a multiplayer Javascript game, so others might have better suggestions on how to do what you want :)
Well that is really a vague question.
That would just be a start.
The Apache web server has an option called MultiViews which you can set for a directory. When it's active, the web server can pick one of several fitting files to be served (for example, "index.html.en", "index.html.fr", "index.html.de" etc. depending on the user's language preference when "index.html" is requested).
However, this is a common pitfall with other things like script files and URL redirections which makes the web server try something you didn't intend first.
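A rough sketch of turning it on for a directory (paths and language mappings are just the usual examples, not necessarily what any given site uses):

<Directory /var/www/html>
    Options +MultiViews
</Directory>
AddLanguage en .en
AddLanguage fr .fr
AddLanguage de .de
LanguagePriority en fr de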
Listen needs to go in httpd.conf or in a "listen.conf" file, or there may already be a "ports.conf" file depending on your install. It can't go in a VirtualHost, for example, as the context isn't right. Listen is only to bind Apache to the IP and port, not for sites/virtualhosts. The file can be anywhere, but it would be best to have it in the conf folder or in conf.d.
To tell apache which IP and port a site responds on, you use the virtualhost directive: <virtualhost 10.1.0.44:80>....</virtualhost> . That will tell apache "hey example.com should be coming to you at 10.1.0.44 on port 80, watch out for it okay?"
It sounds like this is a setup similar to Amazon's, where the servers have private IPs only. From my experience, using cPanel in this way is more hassle than it's worth, and something like ISPConfig would be better.
> (how do I host both request tracker and dokuwiki on port 80?)
You're using Apache with a document root full of php scripts, right?
If you wanted to host the issue tracker on a subdomain (e.g. issues.civcraft.co), you could use a VirtualHost like this:
<VirtualHost *>
    ServerName issues.civcraft.co
    DocumentRoot /path/to/directory/full/of/nasty/php/scripts/for/issue/tracker/
</VirtualHost>
But if you want a path like civcraft.co/issues, you could try using Alias:
Alias /issues /path/to/directory/full/of/nasty/php/scripts/for/issue/tracker/
(all of that would be on top of your existing config)
I haven't used Apache in a while, but I hope this helps a little.
And, just to clarify - a lot of times when people say Apache they are actually referring to the Apache web server software and not the foundation itself (maybe it was their original product?). Think of tomcat as the same thing but specific to Java servlets and JSPs.
I got curious and did some digging of my own, leading to the source (search for <head>, see "d->head_insert" a bit further down)
Adding the following to my own config got the meta tag into my directory listings:
IndexHeadInsert "<meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\" />"
although in hindsight it also seems to be mentioned in the documentation
Hey, normally I'm not opposed to Debian-based shit. But when they take a 3rd party application, and do this to it, they're just plain wrong.
> The Debian maintainers have a peculiar way of arranging the configuration files for Apache 2.0 which is not documented in the standard Apache documentation. [...] Debian stores its Apache 2.0 configuration files in the directory /etc/apache2. Normally the main Apache configuration file is called httpd.conf. Although that file exists on Debian, it is only there for compatibility with other software that expects it to exist. The real configuration starts with the file apache2.conf. You can still add configuration statements to httpd.conf, as apache2.conf includes it, but you would do well to ignore that fact. [...] Debian adds another configuration file, ports.conf, which contains the Listen directives
That's incorrect. When the apache foundation provides thousands of pages of well organized, very useful documentation, and Debian decides to do it a different way, it's undermining this valuable resource.
Are you getting a "denied by server configuration" in your error.log?
Try this:
>It is fairly common — and recommended in the documentation (http://httpd.apache.org/docs/2.2/misc/security_tips.html#protectserverfiles) — to configure Apache to deny all access, by default, outside of the DocumentRoot directory. Thus, you must override this for the directory in question, with a configuration block as shown below:
<Directory "/path/to/other/directory"> Order allow,deny Allow from all </Directory>
>This permits access to the specified directory.
Yes, that change in the htaccess is what is preventing you seeing the login page.
I'm not sure what the QSA does but you can read about it here.
http://httpd.apache.org/docs/current/rewrite/flags.html
There should be no lasting repercussions from just making a backup of the file, and deleting it and seeing if it restores access to the login page.
What do your webserver's error and access logs say?
Assuming that you're using Apache, you could set up a server-status page which'll tell you if there are any requests that have been sitting spinning the CPU for minutes:
If you need download only, go SFTP.
If you need upload as well, consider WebDAV. Technically, it's what SharePoint uses for the "Explorer View"; but you can stand up a simple WebDAV server with Apache.
> You mean Allow all doesn't work anymore?
Nope. Check the 2.2 -> 2.4 upgrade guide
2.2 configuration:
Order allow,deny
Allow from all

2.4 configuration:
Require all granted
You generally configure Apache httpd's logging on a per-vhost basis, so if the attacker uses a hostname that resolves to a valid (i.e. not the default/catch-all) vhost on your machine, you should be able to tell that part of the URI from which log the requests end up in. POST data/HTTP headers don't get logged (but there are optional modules that allow for it). This could be expensive in terms of I/O and storage requirements, depending on the amount of request data passing through.
You can "ban" individual clients on the HTTP layer by using Apache httpd's 'Order', 'Deny from', and 'Allow from' directives. Check http://httpd.apache.org/docs/2.2/howto/access.html to learn how (and do note that this document applies to version 2.2 of httpd only; earlier and later versions have updated/changed documentation).
You could also block the client's source IP address at the packet filtering level of the Linux kernel, using iptables. Minimal example:
iptables -A INPUT -s <CLIENT-IP_ADDR> -p tcp --dport <APACHE-TCP_PORT> -j REJECT
Edit: typo fixed
Apache Bench to simulate load - http://httpd.apache.org/docs/2.2/programs/ab.html
Devstack with HEAT for autoscale- http://openstack.prov12n.com/autoscaling-with-heat-on-devstack/
Play around with that, also -
HAproxy for load balancing - http://en.wikipedia.org/wiki/HAProxy
Simulate load on the website, get them to autoscale. You can do this a million ways, even with HAproxy calling AWS, or potentially calling Devstack/HEAT to autoscale there.
I'm just using an online tester, so mileage may vary, but from the docs "mod_rewrite has to unescape URLs before mapping them" So it seems like your /genre/drum+%26+bass becomes
index.php?genre=drum & bass
which sets genre to 'drum ' and then has another parameter called ' bass'
The B flag sounded like it might do what you wanted, but it didn't work in the tester, so I can't say for sure.
The third option just programmatically does the same as the second; and the first puts it in the HTML itself, which is nice when viewing in a non-webserver context (e.g. file:///file.html in the browser). I prefer doing #1 and #2. Number 3 fixes it technically, but not in the place it should be done, IMO.
WordPress, Drupal, etc use mod rewrite.
Even though you see www.example.com/this/that, it gets translated to www.example.com/index.php?q=this/that (or something similar).
You don't need an intermediate device to accomplish this. Pretty much any web server can do name-based virtual hosting. For just one example, here's the documentation for doing it with Apache 2.4.
If you want to use a WAF or a firewall or a load balancer or whatever, I'd suggest setting up your required configuration on a stand-alone web server and then adding the other device afterward, depending on your exact needs. As far as your basic question, it's very possible and in fact extremely common.
There are various ways this could be accomplished.
First, the subdomain. This is most likely being done with rewrites. This allows the server to take a URL in a particular format and send variables to a script.
For example http://sub.domain.com could actually be http://domain.com/index.php?site=sub. or if doing strictly files, the rewrite could load the contents from domain.com/sub.
The contents, like stated, could be copied into each subdomain directory (example 2 from the rewrite). More likely, they are being loaded from a database. The domain has a rewrite for sub.domain.com, and passes the sub part to index.php (index.php?site=sub). index.php then queries a database for the content associated to sub, and displays it.
I hope this clears things up a little bit. Let me know if you need any clarification or have other questions.
As has already been said, adding VirtualHosts settings to your Apache config is all you need. You can point the multiple domains to that same IP and Apache will sort it out based on the header information. If you think you'll have a fair amount of utilization I'd recommend moving MySQL to a second instance. This increases the security a bit since the IP of your DB server will no longer be public and it also allows each application to utilize more resources. Apache and MySQL will be happier not fighting each other for RAM. This is even more the case if you're using t1.micro instances as their CPU utilization is meant to be burst and not consistent.
Read about the VirtualHost directive here: http://httpd.apache.org/docs/2.0/vhosts/
Another advantage to separating off MySQL is having better flexibility with backups, patching, and automation. For example; I use AutoScaling and ELB in my implementation to maintain a front-end set that is basically replicable on a whim and is self-healing in the event of a crash. I've been getting 99.98%-100% uptime (computed monthly) from my site and that's leaving the monitoring on during patch cycles.
Apache has one built in, mod_autoindex. No need to use any PHP app if all you're doing is making contents of a folder available for download locally.
It is, but it only works with hostnames. For instance, if you wanted people to be able to get to your site by either typing "example.com" or "www.example.com", you would set one of those as the ServerName and the other as the ServerAlias.
You'll need to do some mod_rewrite magic to make a folder on one site point to a different virtual host. Perhaps something like this will work.
1000 is a suspiciously round number. Are you using/have you looked at your MaxRequestsPerChild directive?
It "sets the limit on the number of requests that an individual child server process will handle. After MaxRequestsPerChild requests, the child process will die", which I would imagine costs time to restart child processes after death... and the default is exactly 1000.
Coincidence?
Before and after I installed APC I used the Apache benchmarking tool ab to check whether it was actually working. This should definitely show you improvements in the number of handled requests.
http://httpd.apache.org/docs/current/mod/mod_rewrite.html
Because the per-directory rewriting comes late in the process, the rewritten request has to be re-injected into the Apache kernel, as if it were a new request. (See mod_rewrite technical details.)
I'm assuming www.example.com is the same vhost as the original request and that you're trying to force people to use SSL. Since this rule is not restricted to non-SSL connections, you enter an endless loop. I'm sorry I can't help you further right now, I need to leave and I have never needed to do such a thing, but try to see if you can have some conditional rule that matches only on non-SSL connections.
It's hard to say for sure without seeing the full .htaccess, but my insufficiently-caffeinated guess would be that you need to make the rule for the subdirectory conditional so that it only runs on requests for URLs that begin with that subdirectory. Otherwise it'll run against every request, which may be what's mucking up your images, stylesheets, etc. (because their URLs get rewritten too).
RewriteCond is how you fix that.
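A common pattern (hedged, since the rest of the .htaccess isn't visible here) is to make the rule fire only when the request isn't already HTTPS:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]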
Looks like you got the problem sorted out, but I'm still gonna say this:
You aren't/weren't compiling anything. Your Apache web server was ignorant of the PHP language, and didn't know what to do with that funny file with the .php extension, so was either handing them to you as a download (Because of MIME types), or outputting the script as text that your browser displayed verbatim as you wrote it (because your browser and Apache both thought it was plain text, which you browser knows about).
Specific takeaways from this: Apache and PHP are two separate "things". The PHP interpreter only knows how to do stuff with the code you give it (the coding part); it doesn't know how to transfer it to you. Apache only knows how to give you the things you ask for. The "LoadModule php_module lib_php5.so" line in your httpd.conf file tells Apache to take those .php files, hand them over to PHP to figure out what the code says to do, and then hand the results back to Apache, which spits them out over the network (yes, http://localhost/ is still over the network).
If you have access to the web server configuration files, you shouldn't be using .htaccess. All security should be implemented via the config files, if at all possible. If security is in .htaccess, it's quite possible for an attacker to get enough access to modify .htaccess, which he then tweaks to give him much more access. See here for more info. Also, based on that page, performance is also better if you use config files and not .htaccess
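Turning .htaccess off entirely while keeping the rules in the main config is as simple as (directory path is an example):

<Directory /var/www/html>
    AllowOverride None
</Directory>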