What you want to run is a caching proxy program, like Squid, that sits in front of Firefox and caches (to memory or disk) HTTP resources like video files (or CSS, static images, etc.). It can run on your PC or on a server on your local network (like a modded router running DD-WRT or similar).
There are lots of tutorials. It isn't perfect or as smooth and magical as you would hope, but it can be made to work with not a ton of effort.
Thanks man. Yes, proxy would need to be on a local PC or server. I'll test with a Squid proxy a bit later and report back. Using the proxy, I will be able to use much larger Receive Windows which should mask the issue a bit.
/r/sysadmin might be a better forum for this topic.
But, yes in a general sense, a Squid or similar Proxy caching server is somewhat similar to Temporary Internet Files.
Just like TempFiles, 98% of the time, it works just fine, exactly as designed, and makes your overall internet experience better.
But nobody remembers the 98% of normal experiences.
They just focus all their attention on the 2% of the time when things don't work as expected.
WSUS may be the best solution for caching Windows Updates; it gives you greater control over what you download, install, and keep. http://technet.microsoft.com/en-us/windowsserver/bb332157
If you are looking for a general web cache, take a look at Squid. When you configure the cache, make sure you set maximum_object_size to a number high enough to catch the files you want cached. It defaults to 4096 KB.
http://www.squid-cache.org/
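For example, a minimal squid.conf sketch (the sizes here are placeholders to adapt):

# allow cached objects up to 512 MB instead of the 4 MB default
maximum_object_size 512 MB
# and give the disk cache room to hold them (20 GB here)
cache_dir ufs /var/spool/squid 20000 16 256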
~Z
While a P4 will eat tons of power, you can do a lot with it.
Because this is not how the web works. A web server does not have to serve you a web site just because you punch in its IP address. How do you think shared hosting works when multiple sites are served from the same IP? Services like CloudFlare proxy many different sites. If the host name is not there, the proxy or web server will not serve you the web page, because it doesn't know what you're looking for. You cannot use hostnames that way in the hosts file. To do this you'll need to proxy all of your traffic through a server that will redirect one host to another based on a filter.
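You can see this for yourself with curl (the IP and host names here are hypothetical):

# same IP, two different Host headers -> two different sites
curl -H "Host: site-a.example.com" http://203.0.113.10/
curl -H "Host: site-b.example.com" http://203.0.113.10/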
You can do this with a caching proxy. Squid is the most well known and widely used. If you replace your router with something like pfSense you can install it on that.
Edit: you could also use pfsense to monitor traffic like you mentioned in your link. It gives pretty graphs and stuff.
It's possible that they're just blocking all HTTP proxies and reporting the error as "out of country". It's also possible they're looking at the X-Forwarded-For: header, which reports a 192.168.x.x IP that they're considering out of country.
You should try removing that header and see if that helps.
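If the proxy is Squid, something like this in squid.conf should strip it (a sketch; check the directive names for your version):

# stop advertising internal client IPs upstream
forwarded_for delete
# or suppress the header entirely
request_header_access X-Forwarded-For deny all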
You can more or less use any Linux distro for the Pi out there, as long as it has (or you can install) sshd. Once it's installed, simply create a user and add your SSH key to the /home/<user>/.ssh/authorized_keys file, and you should be able to use PuTTY to SSH to the box. If you only need an HTTP proxy you can install Squid (http://www.squid-cache.org/) or look for alternatives on Google. If you want your Pi to act as a full proxy you need to have a look at iptables (https://help.ubuntu.com/community/IptablesHowTo) and echo "1" > /proc/sys/net/ipv4/ip_forward. You can use terms like NAT and proxy in Google to get more information on the topic.
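Roughly, the forwarding part looks like this (eth0 as the upstream interface is an assumption):

# enable packet forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
# masquerade traffic leaving via the upstream interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE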
Should you need any additional help feel free to ask.
While Jethro_Tell is correct, there are instances where it's not necessarily your choice.
Suspecting that might be the case, and having worked with squid plenty, I've never been able to get a conf correct without debug. See here: http://wiki.squid-cache.org/KnowledgeBase/DebugSections
Although, looking at the guides you used, I suspect they're from older versions, as they make no mention of this directive: http://www.squid-cache.org/Versions/v3/3.3/cfgman/ftp_user.html
Squid 3.3.8 by default appears to send the PASS command to the FTP server, which denies the request since it's configured for anonymous access. Setting the ftp_user directive should tell Squid to use anonymous logins as necessary.
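Something like this in squid.conf (the address is a placeholder; many FTP servers expect an email-style anonymous password):

# credentials Squid presents for anonymous FTP logins
ftp_user anonymous@example.com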
So you're trying to circumvent the content provider's GeoGate? I would just download the video using a browser extension and host it (with attribution) on whatever server you want.
If that is not what you want to do, I would look at squid as a reverse proxy.
I'm a big fan of Squid. It's a little tricky to setup, but can handle anything you want to throw at it, and acts as both a cache server and reverse proxy. It also has a built in authentication module, in case you want to password protect certain parts of your site!
A transparent proxy server is probably going to be more effective for you than a browser plugin. It'll also be easier to manage because it's a single point instead of having to modify every machine you have on the network.
Beyond that, it'll thwart people from attempting to disable/remove the browser plugin.
I'd recommend Squid, as there are literally hundreds of resources and experts on it online.
Good luck!
> Everything worked prior to squid being installed. This is the thing I don't understand.
Everything working prior to Squid is meaningless. You made a change to how you connect to the Internet, so now you have created a problem. Remove Squid from the equation and things will likely return to normal.
If you're using the dns_nameservers directive in squid.conf, Squid will use the DNS server IP addresses specified in dns_nameservers to do DNS look-ups. If the DNS servers you've configured on the Squid box don't return a DNS result for your website's FQDN, you'll get an error message like you describe. See http://www.squid-cache.org/Doc/config/dns_nameservers/
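For example (the resolver IPs are just examples):

# use these resolvers instead of the ones in /etc/resolv.conf
dns_nameservers 8.8.8.8 1.1.1.1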
So, the way I do this is by setting "refresh_pattern" directives in the squid.conf file, such as:
refresh_pattern -i \.(jpg|jpeg|gif)$ 129600 100% 129600 override-expire
for images (I'm doing this from memory, it might not be perfect). The relevant section of the Squid config manual is http://www.squid-cache.org/Doc/config/refresh_pattern/
As it says on the tin - it does break HTTP (it overrides what the server tells you is "fresh"). I have directives for images, CSS, and JavaScript, mostly. Caching dynamic content is a little more tricky - but in those cases the default Squid configuration is good enough IMO. It won't make your internet suddenly hyperfast, but it should help with that Google logo :)
EDIT: Just to make it crystal clear: I do NOT advise you to do this. Except in very specific cases, where having out-of-date/broken websites is preferable to having no websites at all (or unbelievably slow ones).
There are actually several companies providing commercial support for Squid (http://www.squid-cache.org/Support/services.html). You'll find that to be true for most open source solutions of a certain size.
Not as much as Varnish, because Varnish stores all of its contents in a memory-mapped file, and my application needs to manage more information than fits in RAM. (It's like Squid, for example, but with more specific cache algorithms.)
Another option to look at (assuming you have reasonable linux experience) would be to build a Squid server for your proxy and use DansGuardian for the content filtering. Save that money and get yourself another monitor and comfier office chair instead ;)
Squid! is what I’ve got running, and it’s easily the most stable part of my whole setup. It’s not the prettiest, but it has a great cache built in, as well as top tier security. Setup is a little complex, but not too bad once you get the patterns sorted out, and then it’s a real “set it and forget about it” piece of software.
Can you run PuTTY and open an SSH connection? Are you able to change the proxy settings for your browser? In that case, perhaps look into a SOCKS5 proxy. It tells your browser to route all traffic through the SSH connection and is pretty much a VPN-lite. Takes all of 5 minutes to set up and configure.
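A minimal sketch, assuming you can reach some remote SSH box:

# open a local SOCKS5 listener on port 1080, tunnelled over SSH
ssh -D 1080 user@remote-host
# then point the browser's SOCKS proxy setting at localhost:1080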
Otherwise, I don't have many suggestions. I imagine something like Squid could be configured to do that but I've never had to so I'm not completely sure
Do you happen to have the capability to sniff network traffic? What you could look for is the clients' DNS requests, to see who's accessing what. As far as blocking, there are many ways. If you run your own DNS server and force users to use it (via firewall rules), you can sinkhole undesired sites. You could also set up a proxy server and force everyone to use it (I'd recommend Squid). Proxy servers let you filter out content and improve browsing speeds (via caching). Unfortunately there isn't a non-technical solution to do what you want.
If you provide your router's model, I can research whether it has capabilities that would help you out; it's possible...
When you manually curl -I that file, what headers are returned?
Also, ignore-no-cache doesn't seem to be a valid option. Look here.
Try these options: override-expire ignore-reload ignore-no-store ignore-must-revalidate ignore-private ignore-auth
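Put together in a refresh_pattern line, that would look something like this (the pattern and times are placeholders):

refresh_pattern -i \.zip$ 10080 90% 43200 override-expire ignore-reload ignore-no-store ignore-must-revalidate ignore-private ignore-auth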
I think I just figured it out. http://www.squid-cache.org/Doc/config/connect_timeout/ It times out on the first connection, then tries again. Just a hunch... but if you don't have a working IPv6 route, Squid would get the IPv6 DNS result first and then not be able to get there. The working example you showed was with IPv4. Try setting dns_v4_first in Squid. http://www.squid-cache.org/Doc/config/dns_v4_first/
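That is, in squid.conf:

# prefer IPv4 (A) results over IPv6 (AAAA)
dns_v4_first on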
Or try disabling ipv6 entirely if you can't route it.
TV and movies would be stored locally on the plane, and should update periodically. The live TV (the way I understand it) works just like Dish or DirecTV, so it wouldn't operate over the "data" network.
I use their wifi all the time, perks of A-List Preferred; my experience varies flight to flight. Normally, I just assume I can do iMessage, e-mail, and slowly browse Reddit.
The company Southwest uses is supposed to be upgrading their satellites soon. Unlike Gogo, Southwest doesn't use a cell network for its Internet, but satellites (also why it's available gate to gate).
I have been impressed with their system more than others; including the use of Squid Cache (http://www.squid-cache.org/) to optimize the internet a bit. For example if somebody has already loaded CNN.com then it will just fetch that page from its cache instead of reloading it.
Sure. For the DNS cache I use BIND; it's pretty simple and just takes a bit of time. Here is a tutorial for doing it on Ubuntu 14.04.
For the HTTP cache I use Squid, which is a really flexible proxy. It can be set up as a transparent proxy, but I didn't bother because I have admin access on every device connected to my network. Here is a guide; it's a bit dated but the basic functionality is the same.
Kinda.
I've been running a caching proxy for a while. There are some caveats for what you want to do.
First of all, it doesn't work well for YouTube. There are some partial instructions out there. I never went this far, so I'm not much help here. Just wanted to point out that YouTube has its difficulties.
The other thing is that in its default configuration, Squid doesn't do terribly well with 206 requests. Basically any large file is going to be a series of 206 requests. You'll want to use a combination of range_offset_limit and quick_abort_min:
range_offset_limit 1 MB
quick_abort_min -1
Edit: You'll also have a hard time avoiding any end user work. You'll either need to set up a transparent proxy port on the proxy server and redirect traffic to it from your router, or you'll need to host a proxy.pac/wpad.dat file on a web server and publish that information with DNS/DHCP. The second one will also require you to make sure your machines are configured to auto-detect proxy, and some devices (such as most mobiles) you'll need to enter the proxy by hand.
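A proxy.pac can be as small as this (the proxy IP/port here are placeholders):

function FindProxyForURL(url, host) {
    // send everything to the proxy; fall back to direct if it's down
    return "PROXY 192.168.1.10:3128; DIRECT";
}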
Squid. You can set up a Squid proxy on another system and point your computer at it - this way you can filter any HTTP request as any in-browser adblocker would do, but it doesn't work for HTTPS requests (you can filter entire domains only, OR you can hijack all HTTPS requests by using self-signed root certs, but this is hard to do and usually breaks HTTPS, which you should NOT do), and doesn't apply cosmetic filters (doesn't remove blank containers, popups, overlays, etc - modern adblockers do this a lot, and this is where most of the performance problems happen).
If you put something like squid in front of your router to do caching, unless you MitM and install trusted certificates on your clients, then you can't cache https resources in squid. That being said, the client can still cache https assets from the direct source.
I never tried pfBlocker, but squid 2.7.9 (I believe it's maintained by the pfSense team?) and SquidGuard are pretty useful. If you want transparent mode with OpenVPN, it's a headache to set up, but it's easy once you get the hang of it. You just need a few lines of custom options in Squid.
It's pretty cool to see real-time filtering in SquidGuard. It shows exactly which machine, which site was blocked, and the time.
I really don't think you need to upgrade for Squid/SquidGuard. I was running with worse specs than your N40L.
The only headache is when pfSense restarts for any reason: you need to reload the SquidGuard blacklist each time. In the past, I've had to reload Squid as well as SquidGuard, but since the 2.2 release Squid has been fixed, though not SquidGuard. I run the x64 version if that makes a difference.
As for the options for 2.7.9, they are all listed here: http://www.squid-cache.org/Versions/v2/2.7/cfgman/
I don't know if you can set up rules by MAC address, but I know you can do it by IP address. I just assigned static leases and set up ACLs by IP. For SquidGuard, you can set blacklists by IP, and set time-based ACLs too.
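A time-based ACL sketch, for example (the IP and hours are placeholders):

# block one machine outside of weekday homework hours
acl kids_pc src 192.168.1.50
acl homework time MTWHF 16:00-18:00
http_access deny kids_pc !homework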
With the hosts file you are only going to be able to block by domain; to block a particular page or folder you would have to use something like Squid as a transparent proxy.
Uploading your hosts file to pastebin would be helpful!
We've been testing several systems that simulate a speed of light delay.
One that's been working well is our e-mail delay server ( is my e-mail address for the mission). The guys at JSC can turn the dials on the Earth-to-Mars and Mars-to-Earth transmission times, so from operational perspectives they could simulate the actual time change over the course of a long mission. Right now we're set to the full 40 minutes.
As far as web access is concerned, we've had two different systems, both of which we've overloaded. The first was a commercial product running on a server at JSC, and for some reason (unknown to me) it became too expensive with all the traffic we were generating.
The second was a custom server running at JSC that would display a splash page when we requested a particular website (i.e. Reddit) with a 40 minute countdown, and when the countdown was over it would send the cached page through. Unfortunately the throughput of the server was only about 1 Mbps, so when we were uploading videos all other web access (like the psych tests we have to do daily) would grind to a halt. It didn't help that I wrote a script that would poll my favourite websites every 4 hours to pre-cache them for me.
Ultimately it comes down to what a Mars crew in 2025 could expect from their network. The deep space network would probably have the same 25/25 Mbps throughput that we have here. So they'd probably have websites like future-Wikipedia and future-Youtube pre-cached, and updated often. With that being said, we are now using a local caching server (squid-cache), and limiting our use of the internet to static pages. No synchronous communication allowed whatsoever (video calls, messaging services, Yo apps, etc.).
Hopefully that paints a good picture. And hopefully future crews will have more robust options to simulate the speed of light delay (hint hint, entrepreneurs out there).
The extra RAM requirements of an ad-blocking browser extension would have the browser constantly refreshing tabs, because the device would run out of memory far more quickly. And if the service were implemented as an app with extensibility, the ad-block app would have to run in the background all the time.
Each of these scenarios would mean faster battery drain, plus older devices simply wouldn’t be able to do it or keep up anymore.
It’s more efficient to have a local proxy server like Squid or Privoxy block the ads before they ever reach the devices. Saves cycles and memory on the devices while still blocking all the clutter.
Eventually cell data sessions will be permitted to use proxies. I could easily see AdBlock Plus charging users for access to such a service.
We used to use Smoothwall with the integrated Squid cache/proxy. No special software is needed on the workstations themselves, and everything "just works". Since Squid caches everything that is downloaded (up to whatever size you desire), if the content hasn't changed since the last time it was downloaded (like an update, for example), it downloads from the Smoothwall appliance (we used a simple P4 tower with a 250GB hard drive - covered all updates for every OS at the time, likely still would). In essence, you're looking at at least 100 Mbit, possibly gigabit downloads, depending on your network cards and infrastructure. It was an easy way to turn a recycled PC into a stellar web cache, for free. =)
Note it even works with anything else you might have in a network environment, including a full domain/server architecture.
Use a single IP to load up all the URLs you want. Have all the DNS names resolve to the Squid box. From there, Squid can look at the host headers of the incoming traffic and direct the packets to the appropriate server.
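In squid.conf that's an accelerator (reverse-proxy) setup, roughly like this (the IPs and domains are placeholders):

# listen for multiple virtual hosts
http_port 80 accel vhost
# one backend per site, chosen by the requested domain
cache_peer 192.168.1.20 parent 80 0 no-query originserver name=site_a
acl site_a_dom dstdomain www.site-a.example.com
cache_peer_access site_a allow site_a_dom
http_access allow site_a_dom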
I haven't used a local nannyware like you're specifically looking for, but may I recommend throwing a device behind the dsl/cable modem instead?
It will be harder for the Mr. to detect and circumvent, plus you can get it all working beforehand and just plug it in and leave.
I've used squid running on OpenWRT on a generic router before.
Depending on your skill or budget you can try something different.
Okay, that forum probably uses the HTTP protocol, operating over the web via port 80.
Squid is a heavy-duty proxy program that you can run on a GNU/Linux box.
For something lighter and more suited for a desktop machine, and/or just to get around a filter, look into The Onion Router, Tor.
The only thing I have is that your percentage is way higher than in all the examples I've read. The examples in the documentation show rates of 0 - 20%. The other thing is that your min and max freshness are both 129600, which seems kind of odd to me. Would you do 0 129600 and then just leave off the override-lastmod?
The ignore-no-store option seems promising but this liability bit scares me
> override-expire enforces min age even if the server sent an explicit expiry time (e.g., with the Expires: header or Cache-Control: max-age). Doing this VIOLATES the HTTP standard. Enabling this feature could make you liable for problems which it causes.
Let us know how it goes and if you have the success. I'm thinking there are other multi-console households that could use this.
Squid. It's a proxy cache. Request goes through squid before going through to your web server[s]. Could be quite useful to do quick caching of pages for dynamic data that doesn't need to be updated exactly at the right moment.
Had a customer delete their root directory via a cronjob. Lucky for them it was a find statement doing the delete (via -exec rm -fr {} \;), and it made it to "/bin/rm" and could no longer do any more damage. I repaired this by copying the contents of bin from its sister server (it was an HA pair of webservers), removed the cron entry, notified the Account Manager and the tech for that account, and went back on with my day. Fast forward to 4AM Friday morning: EXACT SAME THING, but this time... on the sister server. The customer had taken no action to resolve this, nor had the account tech, so it struck again :( I was oncall and this landed in my lap. I think this was the worst mistake.
One of the more fun mistakes was when I was working at a much "looser" company that had quite a few porn sites hosted with them. I saw an inbound Sev A customer help request in our queue, so I picked it up. The first thing that struck me as odd was "My site http://xx.xxx.xx.xx is having problems" - this immediately made me believe it was a porn site. A quick browse of said IP confirmed my suspicions. They had set up a Squid caching proxy and it was caching their members section and bypassing the ACLs. A quick couple of modifications and this was resolved, but they were in SUCH a rush and so dumbfounded that it was entertaining.
And as for programming I do my fair share, I write plenty of tools and daemons, scripts stuff of this nature to aid me in my daily job.
Article is like 7 years old.
We use Squid (http://www.squid-cache.org/) but it has some drawbacks - it doesn't have a good reporting engine (free/open-source) - if you find some, let us know.
I would suggest you test this reporting software if you plan to use Squid: http://www.wavecrest.net/support/cyfin/
It seemed to be good. We tested it, but didn't put it in production, because at the point when we were looking at it, it didn't have Unicode support when importing users & groups from Active Directory.
It depends on which ChromeCast I suppose.
If you don’t have a CCwGTV I’d say it’s out of reach without cracking it open and flashing some custom firmware, which I don’t think exists (I could be wrong, either way it’s a shitload of effort).
If it has Google TV you might be able to root it, and then install a system root CA. Also a ton of effort, but tractable.
Once you have a root CA installed, you need to configure your entire network to proxy all traffic through a caching proxy like Squid.
Assuming (and this is a huge assumption) that most ChromeCast apps are static web apps, the ChromeCast will load most apps normally, but will fail to load content when you’re offline.
Streaming apps will probably pin their certificates, and I doubt they’d load at all.
This is all speculation, but roughly how I’d start. The ChromeCast is so utterly dependent on the internet for everything, that it seems futile to even try, tbh.
I am not really that familiar with it, but it looks like an SSL negotiation error.
SSL bump has some limitations for some clients.
Did you add SSL format codes to your logformat? http://www.squid-cache.org/Doc/config/logformat/
Add at least ssl::bump_mode
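For example, appending it to a format based on the default squid one (a sketch; adjust to taste):

# custom log format that records the bumping decision per request
logformat sslinfo %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %ssl::bump_mode
access_log /var/log/squid/access.log sslinfo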
The real fix is to have the application return results with headers that allow for caching. For now though, you'll need to see what headers the application is providing to Squid to make it think it can't cache that object. Something like 'curl -vv http://my.url/application' should print them out.
This page has a couple of other options and what headers they affect that you could add to that line to try to solve your problem:
http://www.squid-cache.org/Versions/v4/cfgman/refresh_pattern.html
I would guess "ignore-reload" might help, but without knowing more about the application and the headers it returns there's no way for anyone to know what options you need to set for Squid to ignore them.
As much as Squid allows you to break HTTP standards to cache traffic, it also does its best NOT to break standards unless you explicitly tell it to. There have been bugs before where Squid was erroneously breaking spec and, once fixed, broke a lot of behavior people expected of it, like you're seeing here. I'm guessing you're hitting some of those.
Instead of trying to make your Dockerfile retry, consider setting up an HTTP proxy, possibly one that can cache the results of that page. I know Squid has a retry_on_error option. Anyway, with a proxy in place you can reference it in your .docker/config.json.
You can also set up caching of packages and so on to make your builds faster.
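The client-side config would look roughly like this in ~/.docker/config.json (the proxy address is a placeholder):

{
  "proxies": {
    "default": {
      "httpProxy": "http://192.168.1.10:3128",
      "httpsProxy": "http://192.168.1.10:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}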
Only HTTP unfortunately. HTTPS/TLS can't be cached by LanCache.
If you want to do more general HTTP caching (in addition to LanCache) look into using Squid Cache as a transparent HTTP proxy.
LanCache is very heavily tailored towards gaming CDNs, normal HTTP caching is better with slightly different settings - which Squid deals with perfectly.
You can't change the gateway at the application level; at most you can change the routes to the game servers, as CashKeyboard already wrote. That would be the simplest solution if you know the game servers' IPs.
What browsers can do, though, is use proxies. With Win10 Pro you can run one on the side as a Linux VM in Hyper-V, or use a Raspberry Pi or something like that.
The steps would roughly be: 1. Disable DHCP on the LAN adapter and enter a static IP without a default gateway. Now, by default, everything goes over the phone as the only gateway.
2. Install a proxy, e.g. http://www.squid-cache.org/, in the LAN network. Give the proxy the DSL router as its gateway.
3. Set the proxy IP in Windows/the browser.
For real usage anything with a caching proxy is going to be best. The downside is that the operator of the caching proxy can read anything you browse including secure websites. If you have a bit of technical knowledge you can run your own caching proxy. Squid is one of the better options, http://www.squid-cache.org/
Every way I can think of would require keeping some sort of local copy, so you will need to make sure that's allowed. The simplest way would be to copy over the files so you have them locally, and delete them when they're no longer needed. You could also look into setting up a Squid server on your local network.
I’ve seen a lot of recommendations for pi-hole, but haven’t tried it yet. https://pi-hole.net/
I run Privoxy for network wide ad blocking. https://www.privoxy.org/
Once you’ve set it up, it’ll block ads for any network device that can use a proxy. I also setup squid caching proxy to speed up frequently viewed pages. http://www.squid-cache.org/
http://www.squid-cache.org/Doc/config/http_access/
Basically you create ACLs, then combine them in http_access with a username or a group name.
From where you get the user and groups depends on the environment. See https://wiki.squid-cache.org/ConfigExamples#Authentication
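For a simple file-based example (Debian-style helper path; adjust for your distro):

# authenticate against an htpasswd-style file
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
acl authed_users proxy_auth REQUIRED
http_access allow authed_users
http_access deny all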
In most cases we proxy all outbound connections from server LANs, with per-datacenter global whitelists and caching. Inter-server traffic is generally unimpeded. Your use-cases seem to be inter-server.
I miss building fun projects that were really niche. Once had an organization that wanted a web proxy but couldn't afford to buy a quality one.
We leveraged Squid proxy server (http://www.squid-cache.org/) and managed the config in a MySQL table to allow us to rebuild the operational config on the fly and restart the processes.
Then after we did phase one we went back and did Squid log ingestion so the admins could see the web traffic and block stuff in real time by exact URL, domain, or keywords.
Worked like a champ for up to about 50 simultaneous users and then it started having some performance problems, never hit that threshold too often thankfully.
Shouldn't matter. Do you see anything with journalctl -xe?
Set debug_options to ALL,9 -- http://www.squid-cache.org/Doc/config/debug_options/
(briefly probably because it'll be a lot)
Run ss -l -n | grep 3128 to see if something else is already using port 3128.
(the catbox access issue appears to have been something temporary because it's working now...)
What you typically want to do when you deal with http proxy (and I assume it's HTTP proxy, since you indicated you use it to access instagram):
Or you can set a proxy in your app (browser), and write a proxy.
You may be interested in Squid (a web proxy), and the url_rewrite_program directive : http://www.squid-cache.org/Doc/config/url_rewrite_program/
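Hooking up a rewriter looks roughly like this (the helper path is a placeholder; with the Squid 3.4+ helper protocol, the program reads a URL per line on stdin and answers "OK rewrite-url=..." or "ERR"):

# hand each request URL to an external rewriting helper
url_rewrite_program /usr/local/bin/rewrite-helper
url_rewrite_children 5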
It's very do-able so long as you don't want to see what's actually being communicated within the encrypted stream. As /u/flunky_the_majestic stated, DNS is one way, but it's hacky. I've actually successfully rolled this solution on my own with the only two drawbacks being:
The keystone of the setup comes down to using >=squid-3.5 so you can use the ssl_bump parameter to do what's called "peek and splice".
If you want to allow all traffic to go through, you simply "splice" them all. But if there are certain hostnames you want to intercept, such as to force a redirect (which will show as insecure in the user's browser w/o having a distributed CA cert), you can "bump" them. You can also "terminate" as well say for cases where you, as the network admin, want to forbid anyone from having communication with a host with an invalid SSL cert.
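A minimal peek-and-splice squid.conf sketch (the cert path and the banned domain are placeholders):

http_port 3128 ssl-bump generate-host-certificates=on cert=/etc/squid/ca.pem
# peek at the TLS ClientHello to learn the server name
acl step1 at_step SslBump1
ssl_bump peek step1
# kill connections to forbidden hosts
acl banned ssl::server_name .badsite.example
ssl_bump terminate banned
# pass everything else through untouched
ssl_bump splice all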
If you or anyone else that reads this has additional questions or wants to see some POC work I had done on this, feel free to PM me. Though certain, more elaborate management of this system, I may not be able to share given that it's proprietary work I've produced for my current employer.
I wondered that, but if you look at the docs (http://www.squid-cache.org/Doc/config/https_port/), it says the TLS cert is mandatory for that directive. I don't want to decrypt, just grab the initial GET and then pass through.
I was assuming that
acl SSL method CONNECT
acl CONNECT method CONNECT
dealt with that.
I would choose s3 if scalability and availability are important. But, it sounds like this isn't a high priority. I wouldn't build out the infrastructure just "in case" something goes viral. Most basic level vps will be able to handle hundreds of requests a second for simple static content (ie. served from the file system via apache or nginx). Storing from a db doesn't make sense here. If you really want to cache (probably not necessary since its just static content), you may like squid or varnish. They sit in front of your server and cache pages. They can apparently handle thousands of requests a second.
As a slightly related question, what's a good proxy server to use when mirroring content? A useful question should a given site (or web forum) be scheduled to close in the near future.
The first I used was Squid, but its default config is that it doesn't listen on any port (and is rather weak on getting started material).
The other was Polipo, which seems to do its job, but doesn't store files in a human-accessible format (they appear to be URL hashes), and seemed to have problems with slashdot.org's front page (although that could be the browser-cache interaction).
Actually, you don't need to go that far. You can use sslbump peek & splice, in which squid will peek until it sees enough to know the hostname, then reset the connection with server and reconnect the client with a fresh TLS session - no tampering/MiTM required.
Although, I haven't quite got this working properly myself
I'm assuming you want to set up a transparent proxy for certain destination IPs? Have you tried setting the ACLs? You can allow and deny access to the cache based on MAC/source address/destination address without touching client devices. Once you define certain destinations, I'm unsure if Squid denies the rest or if you have to deny the rest explicitly. Try it out! See the options here, assuming squid 2.7.9.
you could use NGINX (http://nginx.com/resources/admin-guide/caching/)
or squid (http://www.squid-cache.org/)
I've never used either for web caching but I would imagine nginx would be much easier to get setup.
Hit F12 and click the Network tab, refresh the page.
Notice how there will be some resources in the Time column that say "from cache." Those are stored in your browser's cache, because those have not changed from the last time you requested them.
Notice how some of the resources come from the "www.redditstatic.com" domain? In the Time column, those should have the fastest response time...because they're cached for "everybody" on the server side. Most likely those files don't change at all for weeks.
The slowest resource to respond should be this page itself, as "text/html" -- that's because its browser cache doesn't exist, and the server cache is in the seconds/minutes range... so it will have to pull a fresh version each time (at least in active threads).
Facebook's resources are highly cached. The only thing that isn't is whatever is in your news feed and going through the chat...and even your news feed is cached, it's why you see the same things all day long. If everybody is using facebook on the plane, all the shared resources will be cached. The 10% of the resources that change will consume the limited pipe, instead of fetching new things for every person every time they load the page.
Repeat the experiment on Facebook and you'll see dozens of <5KB responses...the bare minimum to seem "fresh" without any cruft.
The applications they run in the industry work on home networks too. You can use things like squid, nginx and haproxy to satiate a college dorm on 5 year old hardware easy.
Your Ubuntu box is probably x86, so it won't be the architecture. Sounds like they are detecting HTTP headers. You can modify them on the fly with Charles Proxy on Windows, but I'm not sure if your PS4 will suffer any side effects. For Ubuntu, Squid will work; look at header_replace. You'll have to get your PS4's HTTP headers. Run tcpdump -s0 -A -i any port 80 on your Ubuntu box when it's bridging for the PS4. I think these are it anyway:
PS4Application libhttp/1.000 (PS4) libhttp/2.04 (PlayStation 4)
PS4Application libhttp/1.000 (PS4) CoreMedia libhttp/2.04 (PlayStation 4)
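If it is indeed the User-Agent, the Squid side would look roughly like this (request_header_replace is the Squid 3 name; older Squids call it header_replace, and the replace only applies to headers you deny first):

# drop the browser's User-Agent, then substitute the PS4 one
request_header_access User-Agent deny all
request_header_replace User-Agent PS4Application libhttp/1.000 (PS4) libhttp/2.04 (PlayStation 4)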
Alternatively you can get a VPN, connect your ubuntu box to it, and have it route traffic for the PS4 over that. Here's hoping t-mobile isn't blocking VPN connections (they shouldn't.)
Squid is a web cache/proxy. Your computer has potentially been used to mask the source of all sorts of nasty shit.
> An open proxy is a high risk for the server operator: Such services are frequently used to break into foreign computer systems, child pornography is usually consumed through proxies, and illegal content is likely to be spread through such proxies. Such a proxy can cause a high bandwidth usage resulting in higher latency to the subnetwork and violation of bandwidth limits. A badly configured open proxy can also allow access to a private subnetwork or DMZ: this is a high security concern for any company or home network because computers that usually are out of risk or firewalled can be directly attacked.
You may need to use a proxy server like Squid and set rules on that.
Also see this: http://apple.stackexchange.com/questions/24066/how-to-simulate-slow-internet-connections-on-the-mac
Having a cache at each node is possible but it'd take some extra CPU and storage on each node. You'll have to use a node with custom firmware to do this as well (OpenWRT based would be easiest). I've personally been experimenting with Commotion Wireless recently and recommend it as a starting point.
That said, it would be way simpler to have a single cache at the web gateway.
I've never setup a caching proxy before but the most common free ones seem to be squid and Polipo.
How many users does this mesh have to serve?
My initial thought is no. But this is my idea...
A solution you could try is to use a proxy and have it rewrite the request headers (ie. Squid - http://www.squid-cache.org/Doc/config/request_header_add/).
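For example, using that directive to tag scanner traffic (the header name and value are made up for illustration):

# stamp every request that goes through the proxy
request_header_add X-Scanned-By "nessus-proxy" all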
That unfortunately is the simple bit. There doesn't seem to be a way to get the Nessus HTTP engine to use a Proxy. So you might have to setup your Squid proxy in transparent mode with some routing hackery (http://www.tldp.org/HOWTO/TransparentProxy-6.html). Failing that likely omnishambles, you could try using Nessus with Squid or Burp using proxychains.
It might honestly be easier to have your webapp/server just drop logs originating from the scanner IP ;)
You could set up a forward proxy for the network using Squid on a server. Then use Group Policy to deploy the settings to all the PCs so they use the proxy to connect to the internet. Squid, with various plugins, would create the kind of index you are looking for.
Another solution, if you have a proper corporate antivirus it may have this feature.
See if you can find any info on this site.
I'm pretty sure it's as easy as installing Squid on a PC at home, running it, and setting the DNS server on your devices to the IP of that PC. I haven't tried it yet though, because I think you can get banned.
You will still need a device to be the proxy (the custom DNS server) to sit in-between Siri and Google. Once you've got that sorted, you can fire off whatever commands that you wish to program from the device that is running the proxy.
Looking at the IP address (http://54.213.212.37:8888/) in the .pac you can download from http://www.betterthansiri.com/, it looks like they're using a version of Squid to run their proxy. In the .pac you can see where your iOS device will route traffic to the proxy: for URLs which contain google.com + plex (also Blacks, plaques and plexes, which I'm presuming is for when Siri mis-hears the command Plex).
If you download the .pac and build your own proxy, all you have to do is update the IP with your own.
All that is fine, but significantly more difficult than the network analysis that I've laid out. It's potentially as simple as sticking a caching proxy (like squid) in front of the ftp server.
I'd probably look to a proxy to manage the cache before I start trying to modify any programming. And I'd do a network analysis to ensure that the tool is working as expected before I do that. Modifying a program is significantly more difficult than those other steps.
Check out the maximum_object_size section in your config file. It defaults to 4MB.... if that ZIP file is larger than that you'll need to adjust that value.
http://www.squid-cache.org/Doc/config/maximum_object_size/
You may also want to get hold of "Squid: The Definitive Guide" from O'Reilly press.
Also, if you want to retain ZIP's and other files for a longer amount of time you'll want to look at this thread.
http://www.linuxquestions.org/questions/linux-server-73/squid-cache-big-files-668345/
> the main back bone link to the town will be the real bottle neck.
Could Squid help fix that somewhat?
At least for common requests such as Youtube videos, Windows Updates, etc?
It would take massive amounts of storage and would not work for large obscure requests.
Classically, the HTTP answer for that is squid (http://www.squid-cache.org/) - which would work just fine for HTTP. We could really use something that's more generic and longer lasting, but I'm not sure that content-centric networking is the answer. To the extent that it is the answer, we can implement it with anycast addressing and good caching for whatever we replace DNS with.
Not knowing anything about your specific architecture it's a bit hard to make a solid recommendation. I'm a big Rackspace Cloud fan, but I've never used their CloudSites offering, just CloudServers. You might also take a look at MediaTemple's Grid Service.
But you might not need burstable hosting at all. You might just need something like a caching reverse proxy (e.g., Squid) in front of your site, or certain parts of your site. For example, if your site is using a CMS that's assembling pages in real-time, per-request out of a database, but the actual content of the pages doesn't change that frequently, an accelerator could dramatically improve your situation when your visitor count spikes.
I'm with you. 5 Mb does not seem like enough for 35 simultaneous users, especially on a 4G connection that doesn't provide symmetrical bandwidth.
I'd suggest that theYear2000 look into web caching solutions like Squid. If a teacher asks everyone in the class to visit the same site at once, you're making 35 concurrent connections to the site, slowing things down for everyone. With a caching server, 1 person makes the internet request and the rest pull it from a local source, putting the load on your local network and not the 4g connection.
Also, 64 users seems like a helluva lot for the RV220, no matter what the whitepaper says. Can anyone back me up on that?
You can only use PAM with basic auth, which is cleartext. You probably want 'digest' auth, but this requires the squid server to have a cleartext copy of your password (which PAM doesn't provide).
There's really no need to have a system account at all. Look at the doc for auth_param. They show an example using digest auth against a plaintext password file:
> auth_param digest program /usr/local/squid/bin/digest_pw_auth /usr/local/squid/etc/digpass
You should be able to create the 'digpass' file with just the username/password you want to authenticate with.
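The digpass file itself is just username:password, one pair per line (example credentials, obviously):

alice:s3cretpass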
I agree with niczar here: I would tend to go toward Squid too or Apache if you really want to try something else or if you already have more experience with Apache. There's not much else new on the forward proxy front that I know. For HA, I've never tried it with Squid specifically, but I would recommend you have a look at Keepalived.
I've not used Net Nanny with a domain profile, so I don't know about that.
I do have a client who recently started using OpenDNS to block unwanted content. They have a huge list of inappropriate DNS entries that it blocks, simply by returning invalid DNS entries. You can select categories to block (gambling, porn, etc).
The free version seems to work well for light usage.
Of course, the kicker is when the users know an IP address for a website or proxy and completely bypass it.
For a total content filtering solution, you could look into some open source utilities for Linux to do content filtering. I use Squid and DansGuardian to protect a few computer labs, but it requires a dedicated Linux box to do so.
In this situation, I'd install a proxy such as Squid or Polipo on the netbook and point your browser to that. Polipo has a very small memory footprint, and don't be put off by what you might have heard about Squid being a memory hog - it's quite small with a minimal config.
The main advantage of this approach is that it gives you greater flexibility with respect to what is blocked or allowed. As a trivial example, if you block based on URL rather than just the host name, you could allow bighostingsite.com/content/ but block bighostingsite.com/porno/.
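In Squid that's a url_regex ACL, something like this (hypothetical names, reusing the example above):

# allow the host generally, but deny one path on it
acl blocked_paths url_regex -i bighostingsite\.com/porno/
http_access deny blocked_paths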
I've no experience with opendns. If it works then good luck to you, sir.