> Restart Apache in every 10 requests? :) Oh Lord.
You laugh, but Apache actually has first-class support for this feature: MaxRequestsPerChild.
That right there probably solves some 99% of all issues people have with node. But, you know, muh async and web scale, so no Apache.
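For reference, the directive is a one-liner in the Apache config (the value here is illustrative; in 2.4 it was renamed MaxConnectionsPerChild, with the old name kept as an alias):

```apache
# Recycle each worker process after it has served 10 requests.
# 0 means never recycle; small values paper over leaks at the cost of process churn.
MaxRequestsPerChild 10
```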
Jekyll is a static CMS: once you've made the HTML template, you simply add pages with a bit of header configuration to generate a blog.
Biggest upside IMHO is that you can host your blog for free on github pages.
It's a free CMS that comes with free hosting, so you only need a domain if you don't want a something.github.io one. And it's pretty simple to develop locally and push to GitHub when ready.
No actual coding required, only yaml configuration files.
This is a setting in the web server. In Apache, for example, it's the DirectoryIndex directive: http://httpd.apache.org/docs/current/mod/mod_dir.html
By default, that is set to index.html. You could change it to whatever file you choose - index.php, script.cgi, or even bite.me.
It's useful to have a DirectoryIndex so that if someone doesn't explicitly choose a file (no one ever does when going to a web URL), it chooses the default file to display.
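For example (the file names beyond index.html are just the illustrations from above):

```apache
# Try these in order when a client requests a directory
DirectoryIndex index.html index.php bite.me
```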
Btw, I own a Casio Denshi Jisho and most of the reference works are highly specialized, such as one just for medicines, or one just for agriculture. The novels feature is decent though. It's also loaded with some classical music. And even famous speeches; mine has one from Obama.
Port forwarding essentially exposes the device in question to the public network. It's really only a routing trick.
Reverse proxying requires the client to actually connect to the proxy, which then connects to the resource in the internal network.
With reverse proxying you can inspect the incoming query before deciding whether it's going to be let through. A wonderful extra layer of security. Any denial-of-service attack will hit the proxy and not the application server. A proxy is simpler, and will withstand more traffic than Exchange.
And you can modify the outgoing content as well. One favourite of mine is to use Apache as a reverse proxy in front of a web application. The proxy server will deal with encrypting the traffic, lessening the load on the application server.
I've set up a clustered Apache solution that proxies all content that is supposed to be published on the internet. A single place to block IPs that attempt port scans and brute-force attacks on the servers.
Finally, it's easy to keep links intact on the proxy, no matter where the internal resource is moved. Simply update the address in the proxy, and your end users won't even notice that something moved. Those services that require actual downtime for updates can be pointed to a "service down" page, where you can keep your end users updated on how the service break is going.
BTW. I recommend apache and mod_proxy over that Citrix thingy. Unless you have a support contract with Citrix, of course.
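For anyone curious, a rough sketch of the "Apache terminating TLS in front of an app server" setup described above (hostname, certificate paths, and the backend address are assumptions; needs mod_ssl and mod_proxy_http loaded):

```apache
<VirtualHost *:443>
    ServerName app.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/app.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/app.example.com.key
    # Forward everything to the internal application server over plain HTTP
    ProxyPass        / http://10.0.0.5:8080/
    ProxyPassReverse / http://10.0.0.5:8080/
</VirtualHost>
```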
Heroku and GitHub have an integrated student pack; you can get a free tier of hosting on Heroku using that. There are lots of articles about how to use different databases with Heroku and deploy your front and back end there (my group used this for 362).
That should hopefully be a good starting place. Also, if you haven't already, sign up for the full github student pack.
Hey dude, if you want to run your code on your desktop/laptop you can install XAMPP. It's available for pretty much all the major operating systems and includes apache, mysql and php.
http://www.apachefriends.org/en/xampp.html
Hope that helps.
Nothing except that there's some bug somewhere within Alienblue or the website you were looking at.
"It's working!" is the default Apache page when it's first installed. Seems likely that you clicked on a link to some site that had a new/broken apache config file.
And now, as of version 2.3, NameVirtualHost is deprecated and doesn't do anything. Apache just configures name-based virtual hosting automagically based on your VirtualHost stanzas.
http://httpd.apache.org/docs/2.4/mod/core.html#NameVirtualHost
So you figured out how to configure it properly just in time to not need it anymore.
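In 2.4 it really is just the stanzas, e.g. (domains and paths are placeholders):

```apache
# No NameVirtualHost needed; Apache matches on ServerName automatically
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example-com
</VirtualHost>
<VirtualHost *:80>
    ServerName www.example.org
    DocumentRoot /var/www/example-org
</VirtualHost>
```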
A hybrid thread-event MPM has been part of Apache for ages, and it's production quality in 2.4. See mpm_event for more details. Of course, it's still vastly outperformed by nginx in some scenarios, but it's good to know that it's there.
of course, as long as you know how to manage a linux server. :)
Or you can use tools like Server Pilot to offload the management of the server. Installing wordpress is three clicks away.
I wrote about this (using server pilot) on my blog: https://gagah.me/2017/02/14/setting-vps-for-wordpress-without-ounce-linux-knowledge/
Things I have so far (about 10 minutes work, and some more time on writeup)...
Easy suggestions:
Beyond all that, I presume I don't have to tell you that when live, you should keep the bulk of coins offline (in cold store), do I?
Expected: A tool that lets me generate fake Apache access and error logs.
Found: Tool to create Apache LogFormat syntax
Still nice tool :-)
GitHub Pages only supports hosting static pages. Flask is used for making dynamic web applications, where Flask acts as the "back end" for the website. If you want to host a custom Flask application, I'd recommend Heroku, which can interact with your build environment much the way you would with GitHub Pages.
"Filled in pixel-by-pixel" is the generally-agreed upon definition of pixel art?
I find that problematic because for some images there's no way to tell the method by which they are generated.
For OP's picture, it's pretty likely that it's a screenshot. But who knows, they could have filled it in, pixel by pixel, from a reference screenshot.
Take this image as another example. I could have easily done that pixel-by-pixel or with the oval tool, as long as anti-aliasing was off. By your metric, the same output image could be classified as both pixel art and not pixel art, depending on which method you think I used.
> edit: Bonus points for a browser-based download and upload solution.
The solution you are looking for is WebDAVS: WebDAV + SSL.
It's a standard way to manage files over HTTP. All major operating systems support mounting it as a hard drive or volume. You can use a variety of clients to access it, and you can upload/download files via a browser.
In addition to this you can use it to augment a programmed web-based uploader/downloader application.
You will have to use Apache with mod_dav. http://httpd.apache.org/docs/2.4/mod/mod_dav.html
You can use any sort of password authentication or user management scheme that Apache supports. It's extremely flexible and powerful.
The major downside of it is that if you are using file-backed storage then all the files uploaded/downloaded must be read/writable by the apache process. So POSIX file system permissions like you can use for nfs or sftp won't be useful.
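A minimal mod_dav setup might look like this (paths and the auth file are assumptions; mod_dav and mod_dav_fs must be loaded):

```apache
# The lock DB and the DAV directory must be writable by the Apache
# user -- the downside mentioned above
DavLockDB /var/lib/apache2/davlock/DavLock
<Directory "/srv/webdav">
    Dav On
    AuthType Basic
    AuthName "WebDAV"
    AuthUserFile /etc/apache2/webdav.passwd
    Require valid-user
</Directory>
```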
Hmm, why has nobody said Heroku? It's free and you can have your page there. The ONLY restriction is that you have a limit of 10k rows if you're going to use a database, but apart from that you can use many things, like Ruby on Rails and such. Edit: changed columns to rows, sorry, my bad
You guys know it: Kiwix is an offline reader that lets users browse entire copies of Wikipedia (ca. 83 GB for the whole thing, incl. images), StackExchange (new release coming up soon), Project Gutenberg, etc., stored locally. The project is fully FOSS, runs on user donations, and the subreddit is r/Kiwix.
The PR was made last night, so consider it fresh off the press. Feel free to test and report issues here. Thanks to the folks who made it possible.
Definitely this makes it easier. But the author of that article keeps harping on how it's evil they included "dynamic images" and how many images they allow on a single note and stupid stuff. The only problem I see is that FB should rate limit their hits to a single domain. I mean, really, that's scraping 101.
If as a domain owner you're afraid of an attack like this, then just limit the connections allowed per ip. Heck, drop it down to 2. 2 * 112 = 224. Certainly any server these days should be able to handle 224 requests for a file, right? Especially if you also limit the bandwidth per connection.
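Apache can't cap connections per IP out of the box (that usually means a firewall rule or a third-party module like mod_qos), but the bandwidth-per-connection part is built in as of 2.4 via mod_ratelimit; a sketch, with an assumed path:

```apache
# Throttle each response under /files to roughly 400 KiB/s
<Location "/files">
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 400
</Location>
```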
I'm not a pro but here's my input:
Because a single public IP for multiple services means NAT, your only "native" (as in "included in the network stack") way to make multiple domains available is to use separate ports.
Now what you want is to dissociate traffic on the Application layer, i.e. look at the "Host" header on an HTTP request on port 80, and redirect it accordingly. That means some piece of software that properly handles HTTP requests, and sits in front of your 2 VM's to decide which one to forward a request to. In your case this results in a 3rd virtual machine running only a proxy. The DNS entries for both sites should point to that 3rd virtual machine.
Which means either an HTTP proxy (Varnish is a good example, and also implements excellent caching), or a web server in proxy mode (e.g. Apache with mod_proxy or nginx).
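With Apache and mod_proxy, the third VM's config might look roughly like this (hostnames and internal IPs are placeholders):

```apache
# Route by Host header to the two internal VMs
<VirtualHost *:80>
    ServerName site-one.example.com
    ProxyPass        / http://192.168.1.10/
    ProxyPassReverse / http://192.168.1.10/
</VirtualHost>
<VirtualHost *:80>
    ServerName site-two.example.com
    ProxyPass        / http://192.168.1.11/
    ProxyPassReverse / http://192.168.1.11/
</VirtualHost>
```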
Pretty sure what you're after is Heroku.
https://www.heroku.com/pricing
Posted this for someone else a few days ago on another thread. Can I ask why you need Node.js? (Are you just learning it? Or need a cheap DB too?) Anyway, Heroku has a free tier.
Seems like if you want free, something like GitHub pages would do the trick as well. Can host html, javascript, css etc. Without the hassle of setting up a server.
Good luck. Give me a shout if you need any other suggestions. I enjoy reading these so would like to help the community any way I can.
Heroku has a lot of rules, so it's not surprising they are suspending accounts. I'm pretty sure they use AWS/EC2, so they are probably noticing a ton of traffic/requests coming from the bots; the scripts are kind of similar to web scraping/crypto mining in some ways. https://www.heroku.com/policy/aup
Upper management/devs probably saw that something was up, had a quick discussion and realized most of the users are just scripting and they made the call to suspend. I'm surprised that so many people here don't want to or can't run their own computers 24/7? I thought steam users just left their stuff on all the time anyways?
You can probably find a cheap VPS for a couple of dollars a month if you really want to.
php_value and php_flag are generally used for ini settings. Use SetEnv instead: http://httpd.apache.org/docs/2.2/mod/mod_env.html. The variable is then exposed in the $_SERVER superglobal, e.g.:
In the Apache config or .htaccess:
SetEnv TEST_VAR 123
In any *.php file:
$_SERVER['TEST_VAR']; // 123
They do inform you.
> "However, you understand that your use of the Service necessarily involves providing ServerPilot access to and the ability to modify the contents and operation of your servers"
Point 11 of their terms @ https://serverpilot.io/terms
> I was told this was needed to provide tech support.

That's noble.
In all seriousness, as SP are spinning up your server and controlling it. How do you think they would provide support for it without access?
By using a managed host or pseudo-managed host (like SP) you are essentially handing over administration of your server to a third-party.
If you are not comfortable with a third party having access, don't rely on a third party to manage your servers and do it all yourself.
With those requirements, I'd recommend Jekyll generated and hosted by GitHub Pages.
You just push your markdown to a repo and it'll update your site, they handle hosting and you can still use a custom domain if you have one.
Since this is a publicly-facing server I would aim for a distro that is current on security fixes (and has a history of being current), and is not "bleeding edge" (exposing yourself to new exploits or instabilities)... Debian seems like a good choice. The only non-Arch system (which serves as my mini-server for outside services) in my house runs Debian and I use webmin for managing it.
Edit: Don't use webmin
The explanation doesn't make a lot of sense, because if you are using a Tor browser, your (client's) public IP isn't visible either. So even if somehow the captcha app had a bug that caused it to send packets via a real interface instead of via Tor, it wouldn't know the IP of the FBI machines to send the packets to.
I think the more likely explanation is the one toward the bottom of the article; that some error caused the server to put up a 404 or 403 error page or similar, which included the site's real IP address (like this).
This is directly in the docs (for apache 2.2) on how to configure virtual hosts.
> Although addr can be hostname it is recommended that you always use an IP address and a port, e.g. NameVirtualHost 111.22.33.44:80
It's also a deprecated directive in apache 2.4
You can download the HTML files but it's a pretty difficult way to navigate through it. You can download a file and open it through Kiwix which provides a pretty good way to navigate through it.
Digital Ocean and ServerPilot will make your static and WordPress site management process a tad easier. Upgraded plans give you free SSL certs, sftp users, logs, etc.
You can also manage multiple servers from one dashboard.
Then you can use standalone droplets on DO for your two-tier architecture sites.
Nice work. Do you know Jekyll? It is a similar CLI program that lets you use an HTML template for all your static pages.
Maybe it would be cool to make your GUI work with Jekyll as a backend.
I would say first priority is getting backups/DR squared away and after that getting the old machines virtualized.
Nagios is a good idea for monitoring, there are also other options out there should you go that route. I would not recommend webmin, as it's had some security issues.
Ultimately, if there are so many places where outdated hardware/software/whatever are going to cause problems for your company in terms of downtime, budget is the only thing that will fix this. I would try to put together some sort of documentation along the lines of $x for new servers/hardware/etc vs $y for downtime.
Seems handy. You might want to include a bit for people running their own servers reminding them to put this in the apache config and disable .htaccess
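Something along these lines in the main server config (the directory path is a placeholder):

```apache
<Directory "/var/www/html">
    # Ignore .htaccess files entirely; Apache skips the per-request
    # filesystem lookups, which is also a small performance win
    AllowOverride None
</Directory>
```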
Awesome job Kiwix team! You are peak datahoarder material!
Your work has helped so many people access information, and allowed us datahoarders to archive these excellent resources in an accessible way.
For those of you that don't know what this project is: it is a way to archive and access offline archives of Wikipedia, Wiktionary, TED conferences, the Gutenberg library, Stack Exchange, and some others as well. Here's their full list
This is an awesome project, and if you have the ability, I highly recommend you use the stuff this project has worked on, tell your friends, and drop them a donation to support their hard work!
This right here. Also it's stupid simple to roll your own homeserver (synapse) and web client (element) using something like Yunohost.
Keep your shit in-house by being your own cloud provider.
See mod_status. It is usually IP address restricted.
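A typical (illustrative) way to expose it, restricted by IP; the network range is a placeholder:

```apache
# Requires mod_status; Apache 2.4-style access control shown
<Location "/server-status">
    SetHandler server-status
    Require ip 192.0.2.0/24
</Location>
```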
What I'd recommend doing is running Apache on the same box, with mod_proxy_ajp. Apache would then front-end the connection on 80/443, even terminate SSL connections there, and communicate with the Tomcat instance over the AJP protocol. Here's a decent writeup: http://httpd.apache.org/docs/2.2/mod/mod_proxy_ajp.html
...and here's a more real-world config example. http://www.zeitoun.net/articles/configure-mod_proxy_ajp-with-tomcat/start
Good luck!
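The AJP front-end described above boils down to something like this (port 8009 is Tomcat's usual AJP connector; the path is a placeholder):

```apache
# Requires mod_proxy and mod_proxy_ajp
ProxyPass        /myapp ajp://localhost:8009/myapp
ProxyPassReverse /myapp ajp://localhost:8009/myapp
```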
This is a five-year-old Apache bug, exploitable only on outdated Apache hosts (Apache 1.3) run by careless server admins who use the
<Limit GET POST>
require valid-user
</Limit>
directive to restrict directory access.
You're lame; it's referenced in each and every script-kiddie contest out there.
In fact it's not even a bug
http://httpd.apache.org/docs/2.2/mod/core.html#limit
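Right, the behavior is documented: <Limit GET POST> applies the access control only to those methods, so any other verb (PUT, DELETE, or something made up) bypasses it entirely. The safe pattern is to not wrap the requirement at all; a sketch, with placeholder paths:

```apache
# Applies to ALL request methods, not just GET and POST
<Directory "/var/www/private">
    AuthType Basic
    AuthName "Restricted"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Directory>
```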
You could try using the vhost_alias module to map incoming hostnames to directories, e.g.
VirtualDocumentRoot /var/www/vhosts/%0/public_html
...then use symlinks to point all the additional hostnames to the "real" document root. That way, when you get a request to add a new domain, you just SSH into the machine and create the new symlink, e.g.
ln -s /var/www/vhosts/www.existingdomain.com /var/www/vhosts/www.newdomain.com
or create a tool that lets you manage the symlinks from a web-based dashboard. Regardless, no Apache config/restart necessary.
(edit:fixed typo)
Works with my local apache install. Anywho, they're not HTML includes, they're SSI. http://httpd.apache.org/docs/2.0/howto/ssi.html
You can do more with them, too, like set and call variables and run if/else statements.
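If anyone wants to try them, SSI has to be switched on in the config first, roughly like so (mapping .shtml is the conventional setup):

```apache
# Enable server-side includes for .shtml files
Options +Includes
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
```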
In general this sort of 'trickery' - also commonly referred to as friendly URLs - is usually achieved through Apache's rewrite module or similar on other server software.
Essentially it maps incoming requests to resources, without hindering the end-user with the mapping process. Whether /website.php/Joe is more accessible than /website/Joe leaves room for debate.
As for your example: create a .htaccess file - right next to your website.php - with the following:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^website/(.+)$ website.php?username=$1 [QSA,L]
(Note: in a .htaccess context the pattern is matched without the leading slash.)
Now when one visits /website/anything, it will internally re-route the request to website.php?username=anything.
Note that there is no straightforward way of letting Apache's rewrite module know which username is associated with which ID. So instead of using /website/Joe, /website/29 would make more sense.
For more information about URL rewriting see Apache's Rewriting Guide
Performance is great for me. I haven't used a pre-built image from their app directory. I get a blank Ubuntu install and use ServerPilot to setup the LEMP (+ Apache) stack. It's so fast that I host multiple sites on a $10 VPS without issue. ServerPilot is kinda like CPanel as you use it to set up your sites, databases, SSL certs (free w/ Lets Encrypt) and more. I can send you a code to get some free usage if you want to try it
Or you could do this with nginx.
A few lines of configuration and you're set. Beats developing and maintaining an application.
I replaced a PHP app at work with this 8 months ago, and I haven't had to look at it since.
Namecheap - .com = £7.08/yr, .co.uk = £6.05/yr with privacy
A lot of the big brand registrars have horrible reputations and predatory practices
eg. buying domains you search for and "helpfully suggesting" that they can negotiate to buy from the "owner" at a higher price
search reddit for horror stories about GoDaddy etc.
Jekyll + Netlify = £0
I do have a personal page (I don't want it associated with this account) that I created at the behest of my supervisor.
I don't use it a lot, and I'm pretty sure no one's ever contacted me because of it.
It was useful, though, to learn a bit of web tech. I used Jekyll to build it, and I think it turned out nice.
Despite the lack of a social aspect (following, friends, etc.) I ended up hosting my own blog/website powered by jekyll.
I chose it because I'm a big fan of writing with markdown syntax, and jekyll allows you to write posts and pages using a really nice markdown/html hybrid.
It's super quick to set up, and you can host on GitHub Pages for free!
Fast, flexible and with loads of themes to choose from. I can't recommend it enough.
If anyone can browse any given directory in your application, this is probably something you should fix first. If you're using shared hosting look into: http://httpd.apache.org/docs/current/howto/htaccess.html
When that's fixed, you can have a look at phpdotenv (https://github.com/vlucas/phpdotenv) that stores configurations in a ".env"-file and loads it into the $_SERVER global.
Edit: If you're going to put passwords or other secrets in the $_SERVER global, it's probably a good idea to encrypt the secret/password with a symmetric key before inserting it in the .env file. You can decrypt it in PHP by storing the symmetric key in a php-file. The reason for this approach is if you accidentally deploy a php script containing a call to phpinfo() it will output your $_SERVER globals among other things.
For servers like Apache - sure, they start as root, but don't they then setuid to the apache user?
http://httpd.apache.org/docs/current/misc/security_tips.html
Wouldn't this theoretically limit the scope of memory they can traverse with this bug to only memory that the apache user can access?
Could just be that your webserver doesn't allow following symlinks. For apache, see this manual page.
Alternatively, check out this article for a quick and dirty way to serve files over HTTP. (TL;DR: python -m SimpleHTTPServer 8080)
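For the Apache case, the relevant knob looks like this (the directory is a placeholder):

```apache
# Allow Apache to follow symlinks under this directory
<Directory "/var/www/html">
    Options +FollowSymLinks
</Directory>
```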
Because in order to be served content, that content needs to be reachable at the specific URL you're visiting/requesting.
It works with tinypic because they've configured their server to return that gif image as long as an otherwise valid URL ends in an image-format extension tinypic considers valid. http://i34.tinypic.com/1zywhs.bmp works, as do jpg, jpeg, png, tiff, xcf, flv, wav, doc, txt, bin and probably more. dcsssucksabagofdicks however does not work, because they didn't anticipate anyone ever using that extension, and it's not on their list of allowed/supported extensions.

You can make almost any content reachable at almost any address that you control (with few limitations). If you configured your server to support www.mywebsite.com/pics/goatse.{any number of extensions you like}, then it would work for you as well. It's also possible to tell a server to allow a catch-all element in URLs, which would mean that even www.mywebsite.com/pics/goatse.{made-up extension that you didn't specify beforehand} could be made to work. But you have to configure your server to allow that. (Tinypic have chosen not to allow completely arbitrary extensions.)

If you use Apache as your webserver, you can do a lot of that stuff with mod_rewrite. Other HTTPD programs may have similar features.
It does, btw, happen that incorrectly configured servers supply the wrong MIME type, and then things may or may not work, depending on whether the user's browser considers extensions a substitute for MIME types.
I use their free plan for everything. https://www.heroku.com/pricing granted my sites aren't getting heavy traffic. There's also a way to keep your free account site always on instead of sleeping after 30 minutes.
You just need to set up a server on your computer.
I've always used xampp when developing locally.
It's easy to install and shouldn't take more than 5 mins of your time to set up.
> If you were to combine all of the ones served from websiteninja.pro into a single gzipped file, it would load way faster.
Or enable output compression based on MIME type (like, with mod_deflate). Then you don't have to worry about remembering to zip everything, don't have to unzip stuff to make changes, etc.
Honestly, anyone I hired to do web development should know all this.
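For what it's worth, the mod_deflate setup is roughly a one-liner per content type:

```apache
# Compress text assets on the fly (requires mod_deflate)
AddOutputFilterByType DEFLATE text/html text/css application/javascript
```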
If this is an apache server, you might use a .htaccess file to block or redirect requests where the http referer doesn't match your domain. There's a lot of different ways to write it.
Here's some blocking and redirect examples from the Apache documentation, and another blocking example.
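A common (illustrative) .htaccess sketch for this using mod_rewrite; swap in your own domain for example.com:

```apache
RewriteEngine On
# Allow empty referers (direct visits, some privacy tools strip the header)
RewriteCond %{HTTP_REFERER} !^$
# Block image requests whose referer isn't your own site
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
RewriteRule \.(gif|jpe?g|png)$ - [F]
```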
I'm a data hoarder, primarily focusing on software, books, comics, and pen-and-paper tabletop RPGs. I've got north of 16TB of stuff so far, covering every range of subjects but especially focusing on skills that would be useful if a cyber attack or EMP were to destroy our grid. Such an eventuality would kill tens of thousands if not resolved in a few days, millions in a few months, and ~85% of the population in a year. My setup is portable, kept on a few small SSDs in an EMP and water-proof bag, along with a radio, laptop, solar cells, batteries, and an inverter.
You don't have to be as extreme as I am. You can start small with a flash drive and offline copy of Wikipedia. If you want to go beyond this, check out r/DataHoarder.
Btw, you can download Wikipedia in compressed form as a ZIM file and open it offline with the Kiwix program. This is great for areas with minimal connectivity, much like other projects such as "Internet in a Box".
Connecting Your Server to ServerPilot
Because Amazon Lightsail uses SSH keys, you will have to use the manual installer to connect to ServerPilot.
Source: https://serverpilot.io/community/articles/how-to-create-a-server-on-amazon-lightsail.html
How to Manually Connect a Server to ServerPilot
https://serverpilot.io/community/articles/how-to-manually-connect-a-server-to-serverpilot.html
Don't focus on learning a technology, focus on learning to build good software and pick up the technologies along the way.
The best developers I've hired aren't great because they walked into the job knowing our tech stack (especially at the junior level). They were great hires because they could learn quickly and apply what they learned.
My suggestion is to sign up for a free account on Heroku (https://www.heroku.com/free) and build something - anything. At every step when you need to make a decision like which database to go with or how to build an API just watch some YouTube videos describing pros/cons and try reading some academic literature on the subject.
Most importantly, pick something and start building. Then once you realize you could have done better or there's some other cool approach you want to try (like serverless), start over again and make it better. The key is to consider your project a part-time job and make sure to put at least one hour into it each day.
There is no better experience than a combination of academic knowledge (why) and practical application (how).
Thank you, I'm glad you found it helpful!
I write all my blog posts in org and then convert them to HTML using Emacs. These HTML files are then used to build the entire site with Jekyll. I created the theme and layout myself - you can probably tell I'm not artistically talented :)
I said this for years before it finally “clicked” for me.
Check out Jekyll. It’s basically plain text with styling in Markdown. Tons of free templates available, and you can host for free on GitHub by simply changing where your personal domain points to.
DM if you want to know more or need some help.
Take a look at Jekyll, a static site generator which you can use with GitHub Pages. You've probably got no need for something as complicated as Laravel, all you should need to know is HTML, Git basics and how Jekyll works.
That said, it all depends how non-static it is. What do you intend to do with storing contacts?
If you're not comfortable with code at all, take a look at Wordpress/Squarespace, it can probably do what you want without having to roll your own CMS
I've been looking into going Google free, including Android. It's possible, and the only thing I think I would keep is Google Suite for my business, and Google Maps.
To get close to 1:1 functionality though, you really need to build your own cloud. That's pretty easy with projects like YunoHost thankfully!
I've started the transition, and I've been pretty surprised by how many apps and self hosted services are better than the Google and Microsoft things.(NewPipe, MarkText)
I'm adding to my free resources site as I find them, if anyone is interested in following me on my journey.
I also have a ThinkPad buying guide (best laptops with best Linux compatibility) there too, in the top section. It's actually the guide that /r/ThinkPad uses.
You can put a Raspberry Pi behind it and then, of course, also set up a VPN over IPv6.
Since the prefix can change, though, I'd recommend a dynamic DNS provider like No-IP and then connecting to the hostname via its AAAA record.
I'd recommend Yunohost:
> email accounts
You don't want your web server acting as a mail server, anyway, for all kinds of reasons.
I always recommend using Google Apps or MS Office 365. In fact Office 365, which essentially runs an Exchange server for you on a per-user basis, is one of the areas where I have to concede Microsoft still does the best job. The reasons are far too numerous to go into here; I'd recommend doing your own research.
>wordpress sending emails from server
Depends on the type of email. Emails sent directly from a dynamically assigned IP address - which you have - are likely to get caught in spam traps these days. You'll need the right PHP libraries installed, and a product like sendmail, assuming you're running the LAMP stack.
If you plan on any newsletters or user messaging, it's best to use a service like Mailchimp.
> lack of a "control panel"
You'll need to be at least a little cozy with command line (SSH). Products like webmin can provide a gui for many operations that you'd normally use command line for.
> subdomains etc
These are handled in your vhost files. The aforementioned webmin has a gui for managing Apache vhosts.
> etc
Managing your own server definitely involves a learning curve. Keep backups. If you haven't locked yourself out of your own server a couple of times, it's not secure enough. (That's hyperbole; don't try to lock yourself out.)
> do I run into the risk of having to google stuff all night trough for email accounts, databases.
No, you'll be googling problems for much longer than a night, but don't get discouraged. To say that virtualized options like Digital Ocean, Linode or AWS are better for your money is putting it mildly. Once you get the hang of it, you'll wonder how you dealt with so much garbage for so long; you'll never go back.
> Online presences (websites and social media pages) of mass media outlets (as well as government bodies, municipal enterprises, institutions and organizations, and private businesses selling goods and services in Ukraine) must be in the state language.

That part is a big mistake on their part. They can fuck right off out of our internet.

> Alongside the Ukrainian version, versions in other languages may also exist. The Ukrainian version must load by default and contain no less information, in volume and content, than the versions in other languages.

And that is simply nonsense. Multilingual sites use content negotiation, and if a site is genuinely multilingual, the default language is usually English.
Just add a redirect in the HTTP virtual host like this:
<VirtualHost *:80>
    ServerName www.example.com
    Redirect / https://www.example.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example.com
    # ... SSL configuration goes here
</VirtualHost>
This should redirect to the HTTPS virtual host (so you don't have an infinite loop).
Unfortunately, Apache stopped distributing Windows binaries a while ago. The easiest method I found is to go here: http://httpd.apache.org/docs/2.4/platform/windows.html and click on Apache Haus: http://www.apachehaus.com/cgi-bin/download.plx
Download the VC11 version
Read the INSTALL.txt file on how to install it (it's not a MSI, you have to do it manually to install httpd as a service).
If you don't need to vary the temperature readings, then ab is a good choice.
ab -n 10000 -c 100 http://<your-site>/
Apache Benchmark is installed with Apache usually, but you can also get it as a standalone.
<iframe>s are still occasionally used, typically to embed external content (for example, to embed a YouTube video inside a page).
History lesson!
Actual frames are long, long extinct (I haven't seen one in the wild on a new website in probably 10 years, though I'm sure someone will come up with counterexamples). They were useful back when we wanted every page to have a copy of a site's navigation (aka every website on the internet these days) but the ability to dynamically render a page wasn't ubiquitous the way say PHP is now.
Frames started dying out as soon as even cheap (or free) web hosts started supporting first Perl and SSI, and then PHP. You could now easily have a header.php and a footer.php and render them on every page load, which had big advantages over frames.
There is actually one place frames are still seen these days: "domain masking" on a web host, which masks the true location of a website by putting it inside a 100%-width-and-height frame.
At my company, we use a framework in the build process which concatenates JS files but does not minify, which means a stray debugger; statement can slip through into production. This is very annoying. If transfer bandwidth is your main concern, why not look at mod_deflate?
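For reference, enabling mod_deflate to compress text responses on the fly takes only a couple of directives. This is a minimal sketch, not a tuned production config; the MIME types listed are the common ones:

```apache
# Compress common text-based responses before sending them to the client.
# Requires mod_deflate to be loaded (a2enmod deflate on Debian/Ubuntu).
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript application/json
</IfModule>
```

Binary formats like images are already compressed, so there's no point including them here.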
If you want something in the middle ground you could go with Cloudways or ServerPilot. They will deploy a LEMP stack to DigitalOcean for you and provide some support on it. Cloudways was doing $30 free credit for Black Friday / Cyber Monday.
Starting with RC1, PHP 7.0 is available on servers managed by ServerPilot.
https://serverpilot.io/blog/2015/08/20/php-7.0-available-on-all-servers.html
It won't run a service 24/7, but if you have a web app sort of thing, Heroku has a free tier on their platform:
I've used this for tiny web apps (like a todo list / kanban board) in the past. You have X number of total "run hours" during a month. The app will sleep after 30 minutes of inactivity and will wake up whenever it's accessed again (assuming you have runtime remaining).
It's good for stuff that you need running for your own use.
I mean: https://www.heroku.com/pricing
You turn on one hobby-level dyno for your app, and that's $7/month. That won't change unless you change it. Most add-ons (database, logging, whatever) have free-tier options that will probably be more than sufficient for your purposes.
If you end up having enough traffic to require more or better dynos, that's a good problem to have, right?
Depending on what kind of traffic you're looking at, the hosting will probably be pretty cheap. For example, if you go with Digital Ocean's 5 dollar a month plan you're looking at 300 bucks over the next five years (and you'd most likely have plenty of bandwidth to host other things on the server).
You could also take a look at heroku, which offers a free tier: https://www.heroku.com/pricing
I might be able to help a bit more if you can provide more details on what the app will be doing.
Here's nginx's documentation on HTTPS. Please be careful with the file permissions on the private key, to limit access by non-essential users.
Then, with OpenSSL, you can generate a public/private key and Certificate Signing Request - details here.
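The key-and-CSR step looks roughly like this (a sketch; the filenames and the CN are illustrative, substitute your own):

```shell
# Generate a 2048-bit RSA private key and lock down its permissions immediately
openssl genrsa -out server.key 2048
chmod 600 server.key

# Generate a Certificate Signing Request from that key
# (the -subj flag avoids the interactive prompts; use your real domain)
openssl req -new -key server.key -out server.csr -subj "/CN=www.example.com"
```

You then send the .csr to your certificate authority; the .key never leaves the server.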
AFAIK on the 'free' version you'd get no access to the underlying PHP for your specific Wordpress install, though you're free to download the source and play with it locally all you want (which is what I'd recommend).
If you're looking to document the learning process, here's what I'd suggest, assuming the following things:
You're on Windows
You don't have a current web host
1) Download XAMPP (http://www.apachefriends.org/en/xampp.html)
2) Secure the XAMPP installation as recommended in the install
3) Setup the downloadable Wordpress installation (if you decide Wordpress is the way to go--not my preference though it's immensely popular so there's a large community to draw from) onto your XAMPP server
4) Blog and edit the source files all you want without having to pay for hosting costs
5) If you decide you want to host the blog when you're "done", or far enough along that you "get it", then export your database, zip up the Wordpress directory and move it over to your new host.
Ah, thank you! The site certainly works well; I'm a software dev in real life so I'm good at that part. My self-consciousness is about the math itself. I can't, for the life of me, get myself to do really careful rigorous math, so I don't... quite... trust myself to write about anything abstract like exterior algebra without making glaring mistakes. Even though I like to try. Plus there's also the general self-consciousness that comes with posting anything online.
The Wordpress LaTeX plugin is definitely bad. My site is built on Jekyll and hosted (for free) on Github Pages. The TeX is rendered with KaTeX, which is like MathJax but much lighter weight and loads a lot faster (and it doesn't make the page reflow, i.e. shift everything around, as it loads the math).
I believe that even on Wordpress you could set things up so that KaTeX or MathJax parses math via Javascript when the page loads, but I haven't tried. It might require paying for Wordpress to modify the Javascript on the page; I can't remember.
In the nicest way possible, your writing on this site might reach a larger audience if you spend a weekend reading up on best practices in website design. Alternatively, you could have a look at blogging tools like Jekyll that will do this for you.
Good luck!
Wordpress is pretty popular as a platform. If you're not opposed to learning a little HTML & CSS I highly recommend Jekyll. There are plenty of really nice looking themes to use, you have a ton of control over your site if you choose to customize.
It's an awesome way to show potential employers that you are competent with web technology. Another huge bonus is that you can use Github Pages to host your site for free.
1) With 16 GB of RAM, I would definitely install ESXi to give you flexibility. You may have some issues with part compatibility but you might be okay. Check their HCL for what you bought. If you haven't bought it yet, checking first would definitely be advisable. A CPU with hyperthreading would have been good though.
2) Honestly, you are best to just stick with debian/ubuntu. You want to stick with one of the LTS releases. This needs to be stable and relatively unchanging. If you are virtual you can split each function out to its own server; which would allow you to choose something different for each application.
3) I know less about the security. It sounds like you have a pretty good plan though. Keeping updated with patches is another key thing. Something that a LTS version of an OS will help you with. With ESXi and two nics you could create a virtual switch inside and put a firewall appliance (install something like pfsense) and bridge internal and external networks? I do something like this to keep my "lab" isolated from my home network. Remote access is done with an SSH server on a non-standard port.
Actually, after looking your server over again... you don't have ECC memory? Not a requirement, but usually you'd want that in a server.
Also, look into Webmin.
> Despite the tons of examples and docs, mod_rewrite is voodoo. Damned cool voodoo, but still voodoo.
>
> -- Brian Moore
That's from Apache's official documentation on mod_rewrite.
> And yet you still provide no substance to the topic or any alternative answers.
I already left a comment in reply to the OP.
> Why don't you prove my answers wrong?
It's common knowledge. I was trying to give you a nudge in the right direction, not win an argument. But if you insist:
First off, .htaccess: here is the Apache documentation describing how it's a misconception that .htaccess needs to be used for URL rewriting, and why you should avoid using it whenever possible.
Next, mod_rewrite: you're starting with the assumption that you have a "real" URL that corresponds to a particular script, and that you must rewrite the URLs the user sees into the "real" URLs. This is just not how most systems work. It happens to be how PHP works because it dispatches using the filesystem. But that's just a quirk of PHP. It doesn't mean that it's the norm. For instance, if you use a Python framework, the request gets passed to the WSGI handler, which then dispatches the request however you'd like. No URL rewriting involved.
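To make the dispatch idea concrete, here is a bare-bones WSGI app with a hand-rolled route table. All names here are illustrative (no particular framework); the point is just that the handler is chosen in application code, with no web-server URL rewriting involved:

```python
# Minimal sketch of WSGI-style dispatch: the app maps the request path
# straight to a handler function. No mod_rewrite, no "real" file URLs.

def hello(environ):
    return "Hello!"

def user_profile(environ):
    return "Profile page"

# Route table: URL path -> handler. A real framework builds something
# like this from decorators or a URLconf.
ROUTES = {
    "/hello": hello,
    "/profile": user_profile,
}

def application(environ, start_response):
    handler = ROUTES.get(environ.get("PATH_INFO", "/"))
    if handler is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"Not Found"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [handler(environ).encode()]
```

You'd serve this with any WSGI server (gunicorn, mod_wsgi, wsgiref); the server passes every request to `application` and the routing happens entirely in Python.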
Next, PHP routers. Don't tell a newbie to look into PHP routers unless you've established that they are using PHP. Just explain the concept. Otherwise you're just going to confuse them. The only system mentioned so far is how Reddit works. Reddit doesn't use a PHP router.
Finally: Reddit itself. Reddit isn't written in PHP, it doesn't use a PHP router, and it doesn't use URL rewriting. Its source code is available here if you don't believe me.
> Because the upvotes don't agree with you.
A grand total of two people besides yourself gave you up votes. It's hardly proof you are right. If you decide what is technically correct based on Reddit up votes, you're going to end up with a lot of confused ideas.
Add ProxyRequests Off to your Apache config. See this page.
mod_proxy can act as a reverse proxy just fine with that setting. Unless you're intentionally setting up a forward proxy with Apache, you should never turn ProxyRequests on.
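A minimal sketch of what that looks like in practice (the backend address and path are illustrative):

```apache
# Forward proxying stays off; only the explicit reverse-proxy mappings apply.
ProxyRequests Off
ProxyPreserveHost On

# Reverse-proxy /app to an internal backend
ProxyPass        /app http://127.0.0.1:8080/app
ProxyPassReverse /app http://127.0.0.1:8080/app
```

With ProxyRequests On and no access controls, you'd be running an open forward proxy that anyone on the internet can relay traffic through.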
If I'm not mistaken, you'd have to use something like mod_rewrite to "rewrite" the URLs without .html to something with it.
Depending on what framework you're using, you could configure your routes to answer certain requests at certain URLs. But the configuration is totally framework-dependent (although they all look similar).
Or the silly way: move each page to a directory with the page of the name and change them to index.html (e.g. page.html -> page/index.html).
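If you go the mod_rewrite route, a commonly used .htaccess sketch that internally rewrites extensionless URLs back to their .html files (assuming mod_rewrite is enabled) looks like this:

```apache
RewriteEngine On
# Only rewrite if the bare path isn't already a real file or directory
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# ...and the .html version actually exists
RewriteCond %{REQUEST_FILENAME}.html -f
# Internal rewrite: the browser still sees /page, Apache serves /page.html
RewriteRule ^(.*)$ $1.html [L]
```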
Same here, Apache has a 300 sec timeout, but I guess something else applies to the telnet sessions.
More googling for me, no dice so far.
Edit : I fucking hate linux forums btw - http://www.webhostingtalk.com/archive/index.php/t-36898.html
> LOL... just keep using telnet and sooner or later, someone will hack you and then you'll have a really big time out... > > > SSH SSH SSH SSH SSH...
Does it not occur to idiots that people might be trying to do something else than use telnet in place of ssh???
Sigh.
Edit 2 :
Pretty sure this is the culprit, for me anyhow - http://httpd.apache.org/docs/2.3/mod/mod_reqtimeout.html
Check out /etc/apache2/mods-enabled/reqtimeout.conf if you're using that module.
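For reference, the directive that module provides looks like this (these are the stock values commonly shipped in reqtimeout.conf on Debian/Ubuntu):

```apache
# Give the client 20s to send headers (extended up to 40s if data keeps
# arriving at >= 500 bytes/s), and 20s for the body at the same minimum rate.
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
```

Raising or removing these limits will stop mod_reqtimeout from killing slow connections, at the cost of more exposure to slowloris-style attacks.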
BlueHost is shit, they are at fault.
How to migrate:
Or follow this tutorial: https://serverpilot.io/community/articles/how-to-migrate-a-wordpress-app.html
A Digital Ocean VPS (cheapest is $5 a month) + a free account at Server Pilot to partition your VPS into separate "apps" (or sites) would probably be the easiest way to go.
Take a look at ServerPilot. It sets up a LEMP stack for you and actively updates your VPS. They have a free version that has a lot of the features you want and plays nicely with DigitalOcean Droplets.
PythonAnywhere: it's really simple, but with some restrictions; for example, you can connect only to certain whitelisted sites (I don't know if Twitter is one of those).
Heroku. A bit more complicated to setup (it should be used to create a web app, not as a server), but it can get the work done, I'm using it myself to handle some google push messaging notifications.
Most people choose Linux instead of Windows for their node environment and either use nginx with proxy_pass or use iptables to do a port forward. I might suggest using another host altogether.
Look into https://www.heroku.com/ its probably going to be way faster to get your node site up and running rather than trying to learn how to proxy or port forward properly.
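If you do go the nginx route, the proxy setup is short. A minimal sketch, assuming your node app listens on port 3000 (the port and server_name are illustrative):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # Forward everything to the node app on localhost:3000
        proxy_pass http://127.0.0.1:3000;
        # Pass through the original host and client IP
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```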
Permalinks may be a thing of the application you're proxying to, they aren't an nginx thing.
In your example, let's follow the logic in your config: nginx takes the root value and tries to load a file or directory at /usr/share/nginx/html/hello-world, then it tries the directory /usr/share/nginx/html/hello-world/, and if it can't find either it performs an internal rewrite of the URI to /index.php?$args. You can read the details in the try_files documentation.
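The chain described above corresponds to a location block like this (a sketch; the root matches the example):

```nginx
server {
    root /usr/share/nginx/html;

    location / {
        # 1) try the literal file, 2) then the directory,
        # 3) then do an internal rewrite to the PHP front controller
        try_files $uri $uri/ /index.php?$args;
    }
}
```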
What? We use nginx as a reverse proxy, with an HTTPS connection all the time on our servers at work.
What about this doesn't work for you?
"The default configuration is not good from a securtiy point of view and it's not secure enough for a production environment - please don't use XAMPP in such environment. " - http://www.apachefriends.org/en/xampp.html
It's very insecure. No one runs XAMPP in production on a server.
XAMPP is what you are looking for. It is a single software package with everything you'll need for a web server. Being an open source advocate, I'd recommend dropping Windows for Linux, but use what you are comfortable with.