Here is a good answer.
"Yes, it is common for ISPs and email service providers to store your password in plain text, or a format which is easily recoverable to plain text.
The reason for this has to do with the authentication protocols used with PPP (dialup and DSL), RADIUS (dialup, 802.1x, etc.) and POP (email), among others.
The tradeoff here is that if the passwords are one-way hashed in the ISP's database, then the only authentication protocols that can be used are those that transmit the password over the wire in plain text. But if the ISP stores the actual password, then more secure authentication protocols can be used.
For instance PPP or RADIUS authentication might use CHAP, which secures the authentication data in transit, but requires a plain text password to be stored by the ISP. Similarly with the APOP extension to POP3.
Also, the various services an ISP offers all use different protocols, and the only clean way to have them all authenticate against the same database is to keep the password in plain text.
This doesn't address the issues of who among the ISP's staff has access to the database, and how well it is secured, though. You still should ask hard questions about those.
As you've probably learned by now, though, it's almost unheard of for an ISP's database to be compromised, while it's all too common for individual users to be compromised. You have risk either way."
You should not have made this post. Go ask an IT buddy to explain the security cert system to you. You'll know you understand when you realize the system is mostly bullshit bordering on security theatre.
edit :: cannot reproduce the issue on any of my test beds; the site only pops up with a cert issue in a dirty old sandbox via VPC, and that sandbox OS is Win2K SP4 and is seriously out of date. Your root certs are probably out of date. Or some middleman is stealing your cheese. Or my sandbox and you are infected with the same thing (the file is called DirtySandbox.vhd, so I assume I let some horrors loose in there years ago).
edit 2 :: Double check your machine is set to the correct date.
Hi Everyone,
I work for WTFast, and I figured I'd make a post here kind of explaining our side of things. I'd be willing to provide proof to the mods of my identity.
This all started when we got our product on Steam and noticed we had 44 negative reviews, most of which we believed were not honest, as they claimed:
-That they got a ping of over 900 ms. Our ping meter does not go that high, and in-game ping meters are broken by WTFast because it is a SOCKS proxy and obfuscates the user's IP from the game server. You cannot send pings through a SOCKS proxy; this is well documented.
Depending on how the in-game ping meter works, it will either not change at all or show the ping between the game and the proxy. So the only way you could get an in-game ping that high would be to deliberately choose the proxy farthest from the game server.
-They were VAC banned, we have to date not received any evidence of any user experiencing a VAC ban due to WTFast.
-Users posted reviews about attempting to use WTFast with games we do not support at the moment such as Battlefield 3 and Civ 5.
That being said, the decision to offer a free month of basic time for a positive review, to counteract reviews we believed to be troll comments, was one made in haste. We have a large community who love and use our product, and we hope this has not damaged their perception of us. We've issued an apology for this on our Steam page and we hope we can accommodate everyone as we adjust to user feedback.
Best regards,
The WTFast team.
From a similar server fault question:
>bring a sound level meter and the OSHA guidelines, and show them that they are providing an unsafe work environment. This would require them to perform monitoring and sound control, supply affected employees with proper equipment for such a working environment, hold occasional training on sound exposure, etc, etc, etc. The cost would be much greater to support than providing a work area outside the server room.
Just a warning - while this solves the issue at hand, it may create other undesirable issues. It's generally a good idea to use your ISP's DNS servers because there are often specific entries for CDNs/streaming services to ensure you connect to the closest/fastest servers.
Ideally what you want to do is configure your router so that it only forwards the blocked sites to 8.8.8.8 for DNS resolution (dnsmasq can do this). You can also do this locally on OS X with scutil.
Instructions and discussion here: http://serverfault.com/questions/391914/is-there-a-way-to-use-a-specific-dns-for-a-specific-domain
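With dnsmasq, that per-domain forwarding looks roughly like this (a sketch; the domain names are placeholders):
# /etc/dnsmasq.conf - send only these zones to Google DNS; everything else
# still uses the ISP resolvers picked up from /etc/resolv.conf
server=/blockedsite.example/8.8.8.8
server=/other-blocked-site.example/8.8.8.8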
Just in case the traffic is SMB, I posted this in another thread.
> SMB is notorious for slow transfers over high latency links (30ms can be considered high in this case). The protocol wasn't designed for this and so doesn't lend itself well to speed over WAN links. It's "chatty", and over higher-latency links that's never a good thing. "As an unscientific example, it took 49 packets to transfer a 1KB file via FTP and 196 to transfer the same file via SMB." - http://serverfault.com/questions/322641/how-much-throughput-should-i-expect-to-lose-over-a-vpn-connection
What's happening here is that libc's readdir() uses the getdents() system call with a very small buffer size. This causes extremely poor performance on ext3 in a directory with lots of files. In fact, if you strace rm -rf directory_with_a_million_files, what you'll see is that calls to unlink() are extremely quick, while calls to readdir() pause for several seconds every 8-10 unlinks.
Unfortunately, it's very hard to get around this with any common tool, because whatever tool you try to use - rm, find, ls, perl, python, ruby, java, whatever - is behind the scenes making calls to libc's readdir() method, which is fundamentally slow on large directories.
The solution is to write a custom C application which deletes the files with direct system calls, rather than relying on the abstraction from libc.
For a much more in-depth discussion of this, including sample code, please see this (lowest-rated, but absolutely spot-on) comment at ServerFault.
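If you're curious what such a program looks like, here is a minimal sketch (my own illustration, not the code from the linked ServerFault comment; Linux-only, minimal error handling): it pulls directory entries through getdents64() with a multi-megabyte buffer and unlinks them as it goes.

/* delfast.c - hypothetical sketch: empty a huge directory by calling
 * getdents64() directly with a large buffer instead of libc readdir(). */
#define _GNU_SOURCE
#include <dirent.h>      /* struct dirent64 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#define BUF_SIZE (5 * 1024 * 1024)   /* 5 MB per getdents64() call */

int main(int argc, char *argv[])
{
    if (argc != 2) { fprintf(stderr, "usage: %s <dir>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY | O_DIRECTORY);
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(BUF_SIZE);
    long nread;

    /* Each syscall returns a large batch of entries; walk the batch and
     * unlink each one (assumes plain files; subdirectories would need
     * AT_REMOVEDIR and their own pass). */
    while ((nread = syscall(SYS_getdents64, fd, buf, BUF_SIZE)) > 0) {
        for (long pos = 0; pos < nread; ) {
            struct dirent64 *d = (struct dirent64 *)(buf + pos);
            if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
                unlinkat(fd, d->d_name, 0);
            pos += d->d_reclen;
        }
    }

    free(buf);
    close(fd);
    return 0;
}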
As others suspect, this is some kind of SEO malware that has already hit quite a few people:
Since you don't know what could have already been downloaded/executed/deleted again, the sensible choice is to nuke the server, reload from backup, fix the security, and only connect it to the internet again afterwards. Here's a more detailed version regarding how and why to do this.
>When you delete (rm) /dev/null, any programs/scripts that are running and that need to ">/dev/null" or equivalent will re-create a new (regular) file with that name. And those can spawn anytime (and some may also continuously write to it).
Source: http://serverfault.com/a/551644
Edit: cc /u/jgan96
You do need to clarify your question a bit.
Is what you want, to take a windows instance that is installed in dual boot and run that as a virtual machine while you use the Linux system?
If that is the question then the answer is actually yes, that is possible albeit a bit dangerous if you are not careful about how you use it.
Look at the answer to the question posted here, for a simple tutorial on how to do so.
Not exactly; I've read that many SSDs actually do have unusually high failure rates. On paper, they should last longer because there are no moving parts. But in reality, I've heard of some people's drives not lasting more than a year, usually due to poor I/O management on the OS's part.
Honestly, the only thing SSDs have going for them at this moment is speed. And holy shit are they fast. But, from my research, they aren't notably more reliable (but neither are they less reliable), their longevity is not notably longer (the cells wear out after a while; they certainly won't last until you die), and their price is way too high for widespread adoption right now.
EDIT: Ok, here are some links: http://serverfault.com/questions/14189/reliability-of-ssd-drives (Several people's personal experiences, which can be summed up as: you'll get very roughly the same number of I/Os on both hard drives and SSDs, but because SSDs can handle many more IOPS, they might fail sooner). http://forums.storagereview.com/index.php/topic/29329-ssd-failure-rates-compared-to-hard-drives/ (An article comparing published failure rates from the manufacturers, which shows that SSDs are very similar to normal hard drives, though on average slightly less likely to fail). http://www.pcworld.com/businesscenter/article/213442/solid_state_drives_no_better_than_others_survey_says.html (According to a French hardware survey, SSDs are just as likely to fail as normal hard drives).
A lot of it is anecdotal, but that's basically all we have right now since SSDs are just entering the mainstream. I would still HIGHLY recommend buying one, but you certainly shouldn't just say that they "should be" more reliable than a traditional HDD without some actual experiences to back it up.
I had 4GB. My HDD was swapping all the time. Saving would take somewhere between 2 and 6 minutes (I timed it). I bought 8GB more (so 12 total): the game never lagged again. Ever. Loading took one third of the time, and going back and forth from the game to the desktop became instant. Right after the game finished loading it was immediately playable, whereas before the upgrade it needed more than a minute to start running smoothly; it felt like a petrol engine having to warm up before running smoothly...
Do yourself a favor and buy all the ram your motherboard supports!
Also, I have a Crosshair Formula III with a Phenom II X4 CPU @ 3.4GHz, not very recent stuff. Check your swap file usage while playing (http://serverfault.com/questions/399855/how-much-swap-is-being-used-on-windows) and it will probably be very high.
Here is the original thread on ServerFault.
In the opinion of many, it's a troll.
>Devs don't need admin access for anything apart from installing software. They shouldn't need to install software.
I mean, that's just not true.
I understand your perspective, and I agree that having admin access can be a security risk, but if you think that devs are as productive without it, you're mistaken.
Edit: I put this in a comment below, but I figured I'd copy it here in case people wanted to read more on the subject.
Here are some highly upvoted answers on the topic from some Stack Exchange sites.
For reasons of security, it is preferred that root cannot be logged into directly. One should always log in as a regular user and then elevate to root as needed.
http://serverfault.com/questions/152280/why-shouldnt-root-be-allowed-to-login-via-ssh
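In practice that usually just means something like this in /etc/ssh/sshd_config (then reload sshd):
PermitRootLogin no
# (or "PermitRootLogin prohibit-password" on newer OpenSSH to allow key-only root logins)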
This may answer your question:
http://serverfault.com/questions/306345/certification-authority-root-certificate-expiry-and-renewal
Technically it can be done. I am not aware of any authority that would mandate otherwise.
You should never assume that dd-ing /dev/urandom over a file or using shred
really overwrites the data, unless you know exactly how your hardware and file system work.
This will only work if you use a HDD with a traditional non-COW filesystem.
If you use an SSD, some other flash-based storage and/or a modern filesystem like btrfs or zfs, you never know where the data you want to write will land on the disk. (Note that all these circumstances are becoming more and more common.)
The only really secure method to delete a file that works independently of filesystem and hardware configuration is to encrypt everything in the first place and then delete the encryption key. Or, if you trust your hardware manufacturer, use the ATA secure erase functionality inside your disk, which deletes everything on it.
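For reference, the ATA secure erase mentioned above is usually issued from Linux with hdparm, roughly like this (a sketch; it wipes the whole disk and the drive must not be in the "frozen" state, so triple-check the device name):
hdparm -I /dev/sdX        # check for "not frozen" and the supported erase modes/times
hdparm --user-master u --security-set-pass Eins /dev/sdX
hdparm --user-master u --security-erase Eins /dev/sdX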
Sources:
* http://serverfault.com/a/201859
* http://unix.stackexchange.com/questions/62345/securely-delete-files-on-btrfs-filesystem
http://serverfault.com/questions/3765/what-good-system-administrator-podcasts-are-out-there
RunAs Radio Security Now! Crypto-Gram Security Podcast Hak5 VMWare VMTN Windows Weekly PaulDotCom Security FeatherCast Packet Pushers Podcast Floss Weekly Mind of Root Radio Free Security IT Idiots TechNet Edge Network Security Podcast Webpulp.tv
I work in Web hosting and this happens quite often (however we're a CentOS shop). I would not advise you to nuke your machine and start from fresh.
I would recommend you track down where the spam is being sent from.
If you can view the headers of the spam email, then you will be able to check whether it's coming from an external IP address or from a script on your box. You can use this command to check the headers: postcat -vq (message-id)
You can also look at this: http://serverfault.com/questions/667268/postfix-2-9-or-sendmail-outgoing-spam-prevention-check-if-sender-exists
BTW, when I say this happens quite often, it is normally a hacked WordPress or Joomla install. We generally tell our clients to update their blog(s), plugins, and themes. Then we tell them to change their WordPress username/pass and remove any unused plugins and themes.
The state is used to prevent certain nefarious actions. For example, the first rule you have will let through an ACK packet for a connection that doesn't exist. In the second case, however, that ACK packet gets blocked because the firewall hasn't seen the SYN and SYN-ACK packets that open a connection.
From a performance perspective, this means that everything EXCEPT a SYN packet is allowed through based on the first rule in your iptables filter. The fewer rules a packet has to traverse, the faster it gets in. http://serverfault.com/questions/578730/when-using-iptables-firewall-rules-why-assert-new-state-on-all-allowed-ports
From a security perspective, there are a few TCP hacks that can cause a service to fall over: things like sending TCP packets with various inappropriate control bits set. Most OSes should have this fixed by now, but if not...
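As an illustration of the difference (my own sketch, not OP's actual rules):
# Stateless: any TCP packet to port 22 is accepted, including a stray ACK
# that belongs to no known connection.
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Stateful: only a SYN opening a NEW connection, or packets belonging to an
# ESTABLISHED one, get through; anything the conntrack table doesn't know
# about is dropped.
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP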
First, echo-reply is only a subset of all of ICMP so if you are blocking ICMP altogether you are causing more issues. Responding to echo requests is a useful diagnostic tool. If you're worried at all, you could rate-limit the echo-replies or limit them to certain hosts (such as uptime robot).
GRC/Steve Gibson is stuck in 1998 and gives out-of-date advice.
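If you'd rather cap pings than answer them all (or block them outright), something along these lines works (an illustrative sketch, not OP's config):
# Accept echo requests at up to 5/second (burst 10), drop the excess.
iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 5/second --limit-burst 10 -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP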
Use WMI Filtering to apply it to a specific computer
*Edit: Here dude, this link is way better. Skip to "creating your WMI query", then look at:
Select * from Win32_Battery where BatteryStatus <> 0
This will be your WMI filter. It will say "hey man, you got a battery? Yeah? Have some GPO! Oh you don't have one? No soup for you"
Edit #2: As /u/the_spad stated you will need loopback in addition to this if the settings are in User Config.
I'd like to chime in with a few things (mistakes!) I have made over the years (some personal :( and some that others have made).
RAID controllers: if there is no reason to use hardware RAID, don't use it. It will save you headaches if you lose a mobo/RAID controller. Also, most controllers on consumer gear aren't even proper RAID. (http://serverfault.com/questions/9244/how-do-i-differentiate-fake-raid-from-real-raid)
Disks bought at the same time are more likely to fail around the same time as one another. I'm not kidding! On 2 occasions I've had identical drives fail within 2 days of one another! It sucks even more when one fails while syncing to the new disk!
Most OSes can monitor the S.M.A.R.T. status of a disk. In Linux, you can use smartmontools to keep tabs on your disks and shoot you an email when something goes astray. For example, if the temp of my disk goes above a certain threshold, I'll get an email (see the smartd sketch after this list).
Check your backups! Make sure they're actually working. Just because they're working one day doesn't mean they will be a week later.
Last, but not least, on the topic of backups: keep them somewhere separate. If you have a fire in the room where all of your gear is, at least the backup will (hopefully!) be safe.
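For the smartmontools point above, a hypothetical /etc/smartd.conf entry might look like this (device, thresholds, and address are placeholders):
# Monitor /dev/sda (-a), enable automatic offline testing (-o) and attribute
# autosave (-S), track temperature (log 4C changes, warn at 40C, alert at 45C),
# and mail reports to root.
/dev/sda -a -o on -S on -W 4,40,45 -m root@localhost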
FYI, hard disks are not intended for long-term media storage: the magnetic bits will decay after a few years of inactivity, and the moving parts can seize if not operated. This is a bad plan if you intend for any of that data to be accessible when you go to look at it...
This has some good info http://serverfault.com/questions/51851/does-an-unplugged-hard-drive-used-for-data-archival-deteriorate
In class, can't type a full response, so here's a half response in the form of a Stack Exchange copy pasta:
Thanks to tomjedrz on Stack Exchange,
Link to Answer: http://serverfault.com/a/13065/
1- If you are using Exchange for email, then AD is required. You likely are not using Exchange or you would know that, but I include it for those who may be considering this.
2- AD manages a "centralized authentication" system. You control users, groups, and passwords in a single place. If you don't have AD, you will likely have to setup your users separately on each terminal server, or have a generic user on each for access and use security in the application.
3- If you have other Windows servers, AD allows for straight-forward securing of resources on those servers in a single place (AD).
4- AD includes some other services (DNS, DHCP) which otherwise have to be managed separately. I suspect you may not be using them if the only Windows servers you have are the terminal servers.
5- Although not required, there is benefit to having the workstations in the domain. This allows for some (not comprehensive) single sign-on capabilities as well as significant control and management of the workstations through "group policies". --> For instance, through GP you can control the screen saver settings, requiring that the screen saver lock the workstation after x minutes and requiring the password to unlock.
basically it makes managing hundreds of computers and thousands of users easier by orders of magnitude.
Copy Pasta from Michael Stum that I read a while ago
>No. A 500 Watt Power Supply can DELIVER 500 Watts, but it will ever use only as much as the components in your PC need (and of course that depends on Load and Activity, if Energy Savings Mechanisms like AMD's Cool'n'Quiet or Intel's SpeedStep is enabled etc.). In Theory, with a 100% efficiency rating, which is impossible. The usual Efficiency rating lies around 80%, but it can vary greatly between low quality and proper power supplies. So with 80% efficiency, your power supply will use as much power as your components need and then about 20% extra. Another caveat: Optimal efficiency is only reached at a "proper" load. If you have a 500 Watt Power Supply but then a super-low-consumption PC that only consumes 80 Watt, you're not going to reach 80% efficiency and could easily use ~120 Watt (~50% efficiency). Due to the ~80% efficiency, you can also not use 500 Watt out of a 500 Watt Power Supply. Those numbers are all estimates, as PSEs vary greatly, but a rule of thumb is that you should get a PSU with at least 80% Efficiency and get one that is not too big (but not too small either) for your PC.
Edit: format
Edit again: Link to the thread. There's a lot of info in that thread
https://www.reddit.com/r/DataHoarder/ would be a good place to ask for advices as well.
BTW: usable space would be around 37TiB, accounting for the performance decrease when the array is about 85% full. That can be worked around at the price of RAM usage.
I think it does...
There are some discussions about it here:
http://serverfault.com/questions/368066/rsync-for-windows-with-support-for-acls
and here:
http://superuser.com/questions/69620/rsync-file-permissions-on-windows
I have to admit that I rarely work with Windows ACLs, but the -A and -X flags on rsync preserve ACLs and xattrs in the filesystem. In Cygwin I think this works on NTFS. Worst case, you'll have to lab it out and give it a try.
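In other words, something along these lines from a Cygwin shell (a sketch; the paths and host are placeholders):
rsync -aAXv /cygdrive/c/data/ user@backuphost:/backups/data/
# -a = archive mode, -A = preserve ACLs, -X = preserve extended attributes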
When you query the whois server you are not asking for "microsoft.com" (equal to ^microsoft\.com$ if we talk regexp lingo) but rather "*microsoft.com*" (or in this particular case "microsoft.com*").
The proper way of asking whois is to use this syntax:
whois 'domain microsoft.com'
For more information see here http://serverfault.com/questions/122228/how-do-i-do-an-exact-whois-search
sysdig -c topprocs_net
And the rest is here: https://github.com/draios/sysdig/wiki/Sysdig%20Examples
Or with iptables: http://serverfault.com/questions/365048/measuring-cumulative-network-statistics-per-user-or-per-process
Note that there are TWO issues that comprise the "shellshock" vulnerability. The one described in OP's link is the first (and probably most critical) one. The other is a more subtle vulnerability due to the way that the bash function parser works. This post is a great explanation of that second bug:
I've always used Hyper-V because I didn't want to shell out the money for Workstation Pro. Now I have money, and I'm looking for a reason to make the switch, but I'm not finding significant advantages of going with VMware.
Could you provide me with some insight as to how Workstation Pro > Hyper-V?
EDIT: After further researching, it looks like Hyper-V is a Type 1 hypervisor no matter what.
Another link that discusses it, "When you install Hyper-V you think it's being installed on top of a host OS but it is not. The setup converts the original OS in something like a VM and puts the hypervisor below. This is what is called the root or parent partition of Hyper-V. That's why you experience the same speed in what you see as the "real machine" and the virtual machines."
Gunicorn needs a web/HTTP server in front to keep slow requests from blocking it + serving static files + SSL + other stuff: http://serverfault.com/questions/220046/why-is-setting-nginx-as-a-reverse-proxy-a-good-idea
Oh, it's definitely real; take a look at the second answer in this: http://serverfault.com/questions/322747/can-a-long-etc-hosts-file-slow-dns-lookup . The sources note that a hosts file of >100 KB could cause problems, and the linked one is 125.9 KB.
Personal anecdote: I installed the hosts file that was linked something like a year ago and forgot about it. Only a couple months ago I took note of how my antivirus notified me about a "potentially hijacked hosts file" or something and I remembered I had installed a custom one. At that point I got kinda suspicious of its usefulness since I had completely forgotten it was even installed, and took time to research the potential issues it could cause. After doing that I got rid of it and I'm really glad I did because it made a bunch of weird issues I had been having with a few online videogames go away (mostly related to the server browsers). It also seemed like it sped up web browsing but I can't say for sure, so take that with a grain of salt.
tl;dr it probably causes problems, a browser-based ad-blocker gets rid of 99.9% of ads in my computer anyways so I'm not too concerned. YMMV.
EDIT: Okay now I remember one of the specific problems I had with the hosts file: it breaks the Steam Overlay web browser for certain sites (e.g. reddit, nvidia). The pages seem to load fine but shortly after they finish loading you get an error and an empty page.
A wildcard certificate only covers a single level (i.e., one dot) of subdomains: *.example.tld matches y.example.tld but not x.y.example.tld. So if you have multi-level names such as x.y.z.example.tld, you will need a certificate for each of them.
http://serverfault.com/questions/104160/wildcard-ssl-certificate-for-second-level-subdomain
Thanks for the warm welcome,
I have been working at this over the past couple of weeks. I have done plenty of troubleshooting and debugging, and as of yet have been unable to pin down my particular issue. I have found people with similar issues to mine: here and here
Neither of which have had real solutions.
I am assuming this is a firewall issue, but, as we have made similar changes to a third site, I am unable to see what is wrong.
We have both the old and new subnets on the same VLAN and, as I said, they communicate fine. I was not asking for a hand-holding session, I was asking for some advice.
... all you have to do is add a custom CA. We could make a script that creates and adds a local CA, and then starts a fake server on localhost for lds.org to point to. we could even make it listen on a randomly-generated 127.x.x.x address, so that 127.0.0.1 and localhost get connection refused >:D
Then, we just need to get someone who is about to go on a mission to scrape the mission-confirmation page[s] and email[s], share it with the group, and modify it just slightly to say what we want.
Wait, back up. We could go even more all-out by MITMing lds.org from the local machine - that is, instead of serving static content from the local webserver, serve modified-proxied content. Then if they try to change the design on us, they can't! We'd have to go way out of our way to catch all the ways they could put text into the page ...
...actually, what happens if they put the confirmation in a PDF or PNG or other image format? Then we're really screwed. Perhaps serve dynamic content if possible, static content if the mission confirmation address isn't in the page.
But then what if they put a decoy in the page? Or for that matter even change the page URL? I guess you'd really just have to deal with the fact that we'd have to use statically grabbed stuff.
I guess it'd be good to have the escaping exmo to check the real confirmation page before checking the fake one, to verify that they look the same. That's the only real way to get around the danger.
If we're going to get serious about this, I propose we start an anonymous IRC channel about this. Perhaps over Tor?
(note: because of my Tor suggestion, DO NOT REPLY TO ME WITH YOUR REAL ACCOUNT, at least not in public. if you want to PM me, PM me.)
The Gizmodo article is terrible: it doesn't really explain anything, and it gets the atomic clocks and leap second concepts completely wrong.
Google does use clock skewing in their internal network to hide the leap second. Most software does not. Either they have a minute with 61 seconds (23:59:59, 23:59:60, 00:00:00), or they repeat the last second (23:59:59, 23:59:59, 00:00:00). Both of these trigger bugs in software that expects 1 minute = 60 seconds, and "time never goes backwards".
Well, as I mentioned on Serverfault
Sysadmins are inherently dangerous. They've got technical expertise, a high level of privilege, and they pretty much only interact with systems in anomalous states.
That's an easy route to monumental screwups - malice, ignorance or carelessness can be catastrophic.
But it's frustrating to have power 'taken away' as well sometimes, especially if it's something you're familiar and confident with. A lot of sysadmin stuff is pretty simple; it's when you get the emergent dynamics of a larger enterprise that the complications start.
Here is the original thread on ServerFault
http://serverfault.com/questions/769357/recovering-from-a-rm-rf
>I swapped if and of while doing dd. What to do now? – bleemboy
Not only did he delete everything, he doubled down.
As soon as I realized that it's sourced from ServerFault, I continued reading there instead.
Edit: Updated link as the original ServerFault topic was removed
TL;DR: Folder Redirection points to the files on a network share, Roaming Profiles copies all of the files to the local machine.
http://serverfault.com/questions/465511/roaming-profile-vs-folder-redirection
What a beginner should know/learn for sysadmin job? (ServerFault)
Three Books Every System Administrator Should Read (Linux Mag)
/r/sysadmin Recommended Readings (reddit)
A competent basic knowledge of most of the topics listed above and overall the ability to balance your technical ability with business common-sense.
Are you sure that all that memory is really used by applications? Modern operating systems implement aggressive caching mechanisms which use all available RAM to cache applications and file data to speed up disc I/O operations. Take a look at Memory tab in Resource monitor (press hotkey Win+R -> type "resmon" w/o quotes -> press Enter). How high are "In use" and "Standby" values? If "Standby" is much larger than "In use", you have no reason to worry, everything is working as it should.
Here is an example and more verbose explanation: http://serverfault.com/questions/565539/huge-amount-of-standby-memory-in-resource-monitor
However, if the situation is the opposite, there is a problem. Let us know and we'll dig a bit deeper.
Here's an old ServerFault post from a Gunicorn developer that toots the Gunicorn horn a bit. I've never actually used uwsgi - the first Django app I deployed was on a gunicorn/nginx setup and I never looked back.
> Also I think a lot of people were getting tired of the PP team managing to fuck something up in some way every episode, so to have it happen via this literally impossible circumstance is even more frustrating.
Deleting large files can take a long time on most Linux filesystems: http://serverfault.com/questions/425162/why-does-deleting-a-big-file-take-longer
You'll have to take my word on this, but it's entirely possible for network file systems like FTP to allow file operations to be processed in parallel.
Together, it's entirely feasible for someone to wedge a remote system by issuing hundreds/thousands of batched operations.
And FYI, the "middle-out" compression algorithm is literally mathematically impossible. I had to turn my brain off before the first episode was over. Still a great show and I love it!
I would expect that kind of confusion from a layman, but from someone actively working in the field where this stuff matters? Scary.
That being said, I've always just known that x86 was 32-bit, though I never knew the reason; after looking it up, it does make a lot of sense, it just doesn't have much relevance anymore.
Access violation means that the DLL is trying to access memory out of its allocated bounds. That should never happen, regardless of how IIS is configured. Sorry, I can't help you more than that - looking up what ntdll.dll actually does might give you more clues?
Have you tried posting on http://serverfault.com/ or http://stackoverflow.com/?
At least in terms of stuff like MySQL on Linux, if you specify localhost (such as connecting a PHP script running on the same box to the database housed on that server) it will connect through a Unix socket and not TCP. If you change it to 127.0.0.1 it will use TCP instead. Edge case, but a good example.
Further reading: http://serverfault.com/questions/337818/how-to-force-mysql-to-connect-by-tcp-instead-of-a-unix-socket
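You can see the difference from the mysql client itself (a sketch):
mysql -u root -p -h localhost                   # Unix socket on most Linux builds
mysql -u root -p -h 127.0.0.1                   # TCP
mysql -u root -p -h localhost --protocol=TCP    # forces TCP even for "localhost"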
I actually do something where I use nginx together with a background python process :)
I use the proxy_pass directive to forward requests matching a certain url part to a python process listening with a very lightweight http server of its own.
Part of my nginx config looks like this (the python process listens on port 2345):
location /your-sub-url/ {
    proxy_pass http://localhost:2345;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Also have a look at this, which helped me for some use cases: http://serverfault.com/questions/562756/how-to-remove-the-path-with-an-nginx-proxy-pass
Hope it helps!
Nginx is a web server and gunicorn is an application server. Usually nginx works as a proxy in front of the application server, while serving static and media files itself. You need something that executes Python, but Python isn't the best at handling all types of requests.
You can reduce the memory usage by reducing worker_processes in config/unicorn.rb and by doing this: http://serverfault.com/a/617954
But in the end... Man I loathe Ruby and Rails and especially sidekiq for stuff like this. It represents everything I utterly hate about "modern" development. Shitty practices and all. When people tell me they restart their node.js or RoR server daily or even weekly and literally don't see anything wrong with that I'm always mind-murdering them. Maybe it's just - maybe I'm in the wrong... but seriously?!
I actually have ServerName and Alias set up. I'll set up a vhost for zaieurope and redirect it.
Edit: found a good solution here: http://serverfault.com/questions/120488/redirect-url-within-apache-virtualhost
Now using this:
<VirtualHost *>
    ServerName www.example.com
    Redirect 301 / http://example.com/
</VirtualHost>
> ssh server
The most recommended Windows SSH server on ServerFault is $99.
Worth considering. Damn I wish Windows did SFTP and SSH out of the box. I mean come on.
The only time I've seen this is where the system needs random bytes for the entropy generator. Mouse movements help this.
http://serverfault.com/questions/214605/gpg-not-enough-entropy
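You can watch the kernel's entropy pool while this happens (a sketch):
cat /proc/sys/kernel/random/entropy_avail   # a low value (a few hundred or less) means gpg will block
# moving the mouse, typing, or installing something like haveged/rng-tools tops it back up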
Back in the day when I used Spybot Search & Destroy's immunise feature, some people recommended only blocking in your browser and not in the hosts file, because it did cause a slowdown in Windows XP and earlier. I have no idea if it is still a problem in Vista/7/8 or Linux.
edit: see http://www.safer-networking.org/faq/why-does-my-network-react-very-slowly-after-inserting-the-hosts-file/
and http://accs-net.com/hosts/faq.html#19
(again, no idea if it is relevant after windows XP)
an answer from 2011: http://serverfault.com/a/336525 (workarounds are suggested)
FYI: adding '.' to PATH is a security risk.
Read more here: Link to stackoverflow
At least you could have mentioned this security risk.
For those wondering what they actually do during these down times, you might be interested in reading this.
Also I would bet money that they are making a lot of server side changes lately for the upcoming free to play thing.
Best example of Critical Stupidity in an auditor I've seen.
Some excerpts:
> I have over 10 years experience in security auditing and a full understanding of the redhat security methods, so I suggest you check your facts about what is and isn't possible. You say no company could possibly have this information but I have performed hundreds of audits where this information has been readily available. All [generic credit card processing provider] clients are required to conform with our new security policies and this audit is intended to ensure those policies have been implemented* correctly.
> "Strong cryptography only means the passwords must be encrypted while the user is inputting them but then they should be moved to a recoverable format for later use."
> I see no data protection issues for these requests, data protection only applies to consumers not businesses so there should be no issues with this information.
> I read in detail through those responses and your original post, the responders all need to get their facts right. I have been in this industry longer than anyone on that site, getting a list of user account passwords is incredibly basic, it should be one of the first things you do when learning how to secure your system and is essential to the operation of any secure server. If you genuinely lack the skills to do something this simple I'm going to assume you do not have PCI installed on your servers as being able to recover this information is a basic requirement of the software. When dealing with something such as security you should not be asking these questions on a public forum if you have no basic knowledge of how it works.
> all of those links have comments on them along the lines of "this is an unacceptable work around you should not run as an administrator"
I absolutely noticed that, and I don't disagree with the sentiment. In general, I think it's correct that apps shouldn't need admin access. Nevertheless, sometimes they do, as is evidenced by my previous comment.
In those cases, it's just not feasible to put in a support ticket detailing the exact circumstance and then wait several hours or possibly even days for a support tech to action the ticket. Especially considering it's entirely likely that another admin request may need to be made within an hour of the first one being completed. It just doesn't work.
I'd also like to point out that I'm not alone here. E.g., here are some highly upvoted answers on the topic from some Stack Exchange sites.
Bash on Windows (the Windows kernel implementing Linux kernel system calls) is a distinct feature from Windows containers, although they could theoretically reuse some of it to run Linux containers on Windows. Microsoft is supporting native Windows containers in Windows Server 2016, meaning that Windows applications can run in containers, nothing to do with Linux.
Relevant (unanswered) SE: http://serverfault.com/questions/767994/can-you-run-docker-natively-on-the-new-windows-10-ubuntu-bash-userspace
A quick search brought me to Why doesn't “timedatectl list-timezones” list ALL timezones?, and a comment there pointed out that timedatectl list-timezones uses the zone.tab file, which led me to Why is zone.tab missing so many time zones?.
Pulled this from here. Assuming you're talking about ISC dhcpd, you need to define classes; then you can assign pools to certain classes.
class "kvm" { match if binary-to-ascii(16,8,":",substring(hardware, 1, 2)) = "56:11"; }
class "local" { match if binary-to-ascii(16,8,":",substring(hardware, 1, 2)) = "52:54"; }
host meme { fixed-address 10.1.0.254; }
host server247 { hardware ethernet 52:54:00:2f:ea:07; fixed-address 10.1.0.247; }
subnet 10.1.0.224 netmask 255.255.255.224 {
    option routers 10.1.0.225;
    pool {
        allow members of "kvm";
        range 10.1.0.226 10.1.0.235;
    }
    pool {
        allow members of "local";
        range 10.1.0.236 10.1.0.240;
    }
    pool {
        # Don't use this pool. It is really just a range to reserve
        # for fixed addresses defined per host, above.
        allow known-clients;
        range 10.1.0.241 10.1.0.253;
    }
}
It's a weird coincidence following the "hoax" with the Serverfault thread.
For reference - Hoax followup thread - https://www.reddit.com/r/linux/comments/4f5f3n/remember_that_guy_who_deleted_his_whole_company/?ref=share&ref_source=link
Original Reddit thread - https://www.reddit.com/r/linux/comments/4er8gk/man_accidentally_deletes_his_entire_company_with/
Original Serverfault thread - http://serverfault.com/questions/587102/monday-morning-mistake-sudo-rm-rf-no-preserve-root
Oh geez. So now HP is getting onboard the Cisco terminology train. A little late, HP.
http://serverfault.com/questions/567268/configuring-vlans-on-hp-1910-switch
Looks like hybrid may allow multiple untagged VLANs on a trunk?
Make it a trunk. Tag VLANs 2 and 3.
If the management interface is a separate physical interface, you're done. If management is on the same port, you need to look at the controller documentation to see whether you should tag or untag VLAN 1 on that same link.
It's odd to me that you're running a Cisco WCS but you've got rubbish v1900 series switching. I know this wasn't a solicited opinion so no need to respond to it. I don't hate HP, but throw me at least a 2530/2920 series switch with a proper CLI.
He knows when you've been good or bad..
More serious answer: It sends an email to the root user (or tries) and logs it. See this.
This is some info I gathered after a lively discussion with my IT Director regarding physical vs. logical network segregation:
Cisco - VLAN Security White Paper - Virtual LANs
http://www.cisco.com/en/US/products/hw/switches/ps708/products_white_paper09186a008013159f.shtml
“The simple observation that can be made at this point is that if a packet's VLAN identification cannot be altered after transmission from its source and is consistently preserved from end to end, then VLAN-based security is no less reliable than physical security.”
Hakipedia.com - VLAN Hopping - Mitigation
http://hakipedia.com/index.php/VLAN_Hopping
“The mitigation of VLAN hopping attacks requires a number of changes to the VLAN configuration. Start by using dedicated VLAN IDs for all trunking ports on a switch, and move all interfaces out of VLAN 1. In addition, it is advisable to disable any unused switch ports and move them to a VLAN that is not being used. Explicitly disable DTP on all user ports to set them to non-trunking mode and/or force it to be an access port. To do this on a cisco switch, use the switchport nonegotiate and switchport mode access interface configuration commands.”
Serverfault.com - Why do people tell me not to use VLANs for security? – mfinni’s answer
http://serverfault.com/questions/220442/why-do-people-tell-me-not-to-use-vlans-for-security
“I seem to recall that, in the past, it was easier to do VLAN hopping, so that may be why "people" are saying this. But, why don't you ask the "people" for the reasons? We can only guess why they told you that. I do know that HIPAA and PCI auditors are OK with VLANs for security.”
I was scared about the same thing, but I did a little research. I was reading that a certain level of humidity (around 48%) is optimal for electronics. Dry air is more likely to short out or ruin your electronics, while humidity can actually help them run faster. So long as you monitor the humidity level and don't perch your humidifier right next to your electronics, it would theoretically be fine.
http://serverfault.com/questions/6000/ideal-humidity-for-a-server-room
Ah wait found something interesting, gotta read this.. http://serverfault.com/questions/456090/ubuntu-12-04-hp-proliant-dl380-g4-load-maxes-out-unresponsive
Edit: So this pretty much seems like my problem. And looking through the comments it seems like his problem was solved through that.
Now, how would I go about installing this? ELI5 please, since last time I tried installing some missing firmware I tried for a month and failed brutally.
Also. FUCK YEAH. If this works, I would be terribly terribly happy. (This is an anxiety bomb for me.)
Trust me, I've been trying to google this shit day in and day out, asking you guys was my last resort and now on a whim I've found this that may work!
Though it is important to be clear that ServerFault is not really a 'forum'. It is a Q&A site, and the ServerFault community is pretty strict about the class of questions that will get accepted. Please read the SF FAQ before posting.
grep -E '[^ ]{8,}' YourWordList.txt > NewWordList.txt
That should create a wordlist from your current wordlist with only the words at least 8 chars long. (Note the -E: with plain grep, the {8,} repetition isn't interpreted unless you use extended regexes or escape the braces.)
Edit: source: http://serverfault.com/questions/107958/grep-to-find-files-that-contain-a-string-greater-than-x-characters-long
Edit Edit: Also, you can use john to feed your cracking program of choice a wordlist (usually.. I know you can for aircrack-ng at least), and you can tell john to skip the words under 8 chars, and you can also use john's word mangling to expand on your password list. I don't have a link off hand, but if you're interested; google and such.
Sorry it isn't a true blog post or howto, but it should get you started. One note not mentioned: you should use the VirtualBox download from their website, not from the PPA, which is the open source version and can cause issues.
Keep in mind you are going to be booting your windows partition as a vm so you run the risk of completely destroying said partition if you do something wrong.
That's true about btrfs. Fedora were planning on making it the default filesystem, but decided to do it in a later release due to stability issues.
Also, I don't think btrfs will support all of the features ZFS does, so feature-wise, it won't be as good.
http://serverfault.com/questions/127858/how-does-btrfs-compare-to-zfs
First, this is not something that either cmd.exe or BASH (or whatever shell you like) is doing. TAIL is a program. It's like saying that BASH can't run sfc.exe.
Second, PowerShell can "tail" a file (kinda):
Get-Content c:\logfile.log -wait
This is basically the equivalent of "tail -f".
Now granted, it's not as flexible or robust, but still pretty awesome nonetheless. And if you want to have some real TAIL style functionality in PowerShell, install BareTail.exe, and run it from PowerShell (or one of these other Win32 TAIL implementations: http://serverfault.com/questions/7263/convenient-windows-equivalent-to-tail-f-logfile).
EDIT: To be fair, the -wait parameter is not documented in the Get-Content help files. But you can find it with this:
Get-Command Get-Content | fl
Which doesn't rely on documentation, but on reflection over the cmdlet code.
There is a PowerShell function called "Copy-Acl" that will allow you to copy AD permissions from one group to another. Essentially, you can copy the permissions from group y to group x, delete group y and then add group x. See here: http://serverfault.com/questions/480802/copying-ou-permissions-from-an-existing-security-group-to-a-new-security-group.
I prefer the use of both hands in a twist-like motion.
A .0 address is a valid host and so is .255, if the netmask calls for it. For example, in 10.0.0.0/23, 10.0.0.255 and 10.0.1.0 are ordinary host addresses; only 10.0.0.0 (network) and 10.0.1.255 (broadcast) are special. Whether it works in practice depends entirely on the routing tables along the way, which can vary over time.
https://labs.ripe.net/Members/stephane_bortzmeyer/all-ip-addresses-are-equal-dot-zero-addresses-are-less-equal http://serverfault.com/questions/10985/is-x-y-z-0-a-valid-ip-address
106.13.0.0 is part of the Baidu /15 allocation. /15 is the size of the current allocation. It could be changed at any time by APNIC if the registration required it.
> inetnum: 106.12.0.0 - 106.13.255.255
Punch it in here: https://wq.apnic.net/apnic-bin/whois.pl
Let's pretend the egress traffic did get dropped. What about the ingress (return path)?
Edit: Added RIPE link for slightly more credibility vs. serverfault. And to be fair, I think /u/exaltedgod is reasonably using knowledge of /24 and probably best practices but I do believe this traffic is routable.
It seems like you only have a single instance on AWS -- so why the desire to add VPN to protect SSH? You're just replacing one authentication method with another. Just harden SSH...
In an ideal world you have a static IP and could just limit SSH to your static IP but you should still probably harden SSH anyways.
VPNs have their place; I'm not questioning that, but for a single EC2 instance it is just overkill. How will you set up/manage the VPN server? Probably over SSH, so you're back to square one.
If you aren't connecting frequently, you could selectively open/close port 22 to your current IP as needed.
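That open/close dance can even be scripted with the AWS CLI, roughly like this (a sketch; the security group ID is a placeholder):
MYIP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr ${MYIP}/32
# ...do your work, then close it again:
aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr ${MYIP}/32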
As an aside; don't forget to harden your AWS credentials.
There's a modification to TronScript that uses WSUSoffline actually. I prefer to use the online update, though.
As a side note, the scripts here are the best way to update Windows via Windows Update I've seen so far.
http://serverfault.com/questions/320750/best-way-to-fully-update-a-new-installed-windows
And a static site would have been able to push more with less.
Edit: The next time Facebook Chat or whatever the latest 'chat' fad is starts to lag, consider this old post on "How much CPU does IRC use"
> I used to run ircu on a Pentium 100 with 48MB of memory some years ago. It was running well with thousands of users.
So let's break down the numbers on your test case:
>> 500k pageviews on a medium instance on AWS,
t2.medium: 2 vCPU, 24 CPU credits/hour, 4 GiB memory, EBS-only storage
From: https://www.rootusers.com/linux-web-server-performance-benchmark-2016-results/
A well designed static website should have at most 3 hits: HTML, CSS & JS. If you have images then that's a hit each. So 10k pages per second.
You could have your 500k pages served in 50 seconds on 500 MB of RAM and a single core Xeon.
a few learning links:
https://www.reddit.com/r/sysadmin/wiki/index
explore around there a bit
installing it: http://www.thegeekstuff.com/2014/11/install-active-directory/
Looks at request...looks at username....looks at request...sighs...shakes head... :-)
I'm guessing the budget for this is somewhere in the 5 zeros range? As in £/$00000? ;-)
I assume pfSense will be running as a VM on the ESXi host? And you want to be able to directly connect to the ESXi host over the internet in case pfSense goes down and you need to troubleshoot?
Have you seen the ESXi hardening guide? It's geared towards making your internal infrastructure more secure, but doesn't hurt to see how much, if any, applies to your situation:
http://www.vmware.com/uk/security/hardening-guides
I would suggest this is not a use case that has been designed for so you may find very little official 'help' doing this, but I did find a few resources:
http://www.vladan.fr/esxi-firewall/
http://serverfault.com/questions/609747/esxi-hosted-on-public-ip-without-firewall
The last one seems promising, as it describes how to restrict ESXi access to localhost except for SSH, and then use an SSH tunnel over the internet (to the ESXi host) to manage the host.
I would also suggest an alternative, such as running a second VM just for VPNing into the internal network to connect to ESXi (so you would have two ways, pfSense and this VPN box, in case either one fails) and configure pfsense and the VPN VM to start on ESXi startup. This way, if the system gets into an unknown state (=fucked), you can get someone onsite to fat finger the power button, resetting the system to a known state (off-n-on-again).
Alternatively, a small pfSense/VPN physical box (old desktop with two nics), just for getting to the ESXi host could be a good way to go?
Good Luck.
I assume it was a Windows server. If so, the Windows event logs (viewed in "Event Viewer") would show all access to the server. I'm not a hacker, but my company does a lot of work on Windows computers at an admin level, and I'm familiar with these things.
A simple google search "modifying windows event logs" returned this and this which talked about a couple of commercial tools to modify the event log (e.g. to delete your log-ins and remove your tracks). If I can find this on a google search, I'm sure hackers have tools to delete their log-ins. In the end, the event log is just a file. If Windows can modify it, someone else can too. It looks like Windows has security to avoid that, but I'd be very surprised if it can't be circumvented.
The main point of security is to not let people get on the server in the first place, through firewalls, hiding yourself (not returning pings, for example), really difficult passwords. If someone can get to the Windows event log, you've already been hacked.
Of course all this doesn't prove that Clinton was or wasn't hacked. Proof that she was hacked would be huge of course, but the main problem for Clinton is that if she had not set up a private server, we wouldn't be looking to see if she was hacked. She mishandled gov't documents just by having a private server, whether the server was hacked or not.
Clinton: "OK, it was a mistake to take that car. But I didn't wreck it. And I washed it before I brought it back."
> Must contain . after the first @
That's not true: you can put an MX record on a TLD and it's perfectly legal, so your program would be doing it wrong.
http://serverfault.com/questions/154991/why-do-some-tld-have-an-mx-record-on-the-zone-root-e-g-ai
> Must not end in .cmo or .ocm
Such restrictions might seem good in the short run, but if those strings ever get added as TLDs (and everything seems to get added as a new gTLD these days) you'll be in trouble.
Sorry for the offtopic - but if you do that, do NOT disable the rest of ICMP. It has a million other uses like source quench, TTL expired in transit (detects routing loops, and is also needed for traceroute to work), redirect gateway, fragmentation needed (needed to sanely detect PMTU black holes), etc etc.
Drop only incoming echo requests:
iptables -I INPUT -p icmp --icmp-type 8 -j DROP
ip6tables -I INPUT -p icmpv6 --icmp-type 8 -j DROP
For more info:
http://security.stackexchange.com/questions/22711/is-it-a-bad-idea-for-a-firewall-to-block-icmp http://serverfault.com/questions/84963/why-not-block-icmp/84981#84981
sssd w/ the AD backend is probably going to be easiest. The answer on this ServerFault question worked for me for FreeBSD, and it should be much the same for Linux.
If you can get a CSV or something with the information from NIS, it should be pretty easy to import it into AD with a bit of PowerShell. Good luck!
Your EC2 instance probably doesn't have any swap set up. Your OS X machine is paging RAM out to disk, allowing you to have more data in memory than fits in physical RAM. The EC2 instance probably doesn't have swap set up since a system and a volume on EC2 are different concepts. See this Stack Overflow post for how to enable it.
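Setting up a swap file on the instance is only a few commands (a sketch; size and path are placeholders):
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048   # 2 GB swap file
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# add "/swapfile none swap sw 0 0" to /etc/fstab to make it survive reboots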
I/O Requests? As in, you're still using magnetic drives? The easy solution is to rebuild using GP2 aka SSD drives. They're more expensive per GB, but do not incur I/O charges. They're also generally faster, come with a burst allocation, and you're given more precise IO performance based on your drive size.
You could find out what is using resources with iotop. ServerFault discussion here: http://serverfault.com/questions/9428/how-can-i-monitor-hard-disk-load-on-linux
As for getting to the bottom of your requests, it could be a lot of things. Overzealous logging is often a culprit. You could be running out of RAM and using swap. If you're running PHP, turn off stat in APC or OPcache (the validate_timestamps option). I'm wildly guessing here.
Also, upgrade to T2. AWS even came out with a T2.nano which probably offers better performance than the t1 micro while costing less.
Block Windows 10
Article #1: http://serverfault.com/questions/695916/registry-key-gpo-to-disable-and-block-windows-10-upgrade
Article #2: https://techjourney.net/disable-remove-get-windows-10-upgrade-reservation-notification-system-tray-icon/
According to this article you can add the following registry entry to disable Get Windows 10:
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Gwx]
"DisableGwx"=dword:00000001
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"DisableOSUpgrade"=dword:00000001
I've tested this on several PCs and it works (for now)
>I was hoping it would be around the cost of a Dell H200 - <$80 CAD
Those cards are cheap because they're nothing. Dell manages to somehow sneak them into corporate orders; it's one of their few real profit margins (they're sort of a customer of ours, it's complicated).
The LSI cards have intelligence and actually help with performance, but if you just need the ports you can take the H200; an even crappier HighPoint might get you there too, just make sure it has the decent Marvell chipset.
The Dells should be cheap, if only because they get stripped and eBay'd so much by corporate customers.
I know this may or may not be useful to you in particular, but it is the best explanation of how VLANs work, and I finally understood them thanks to it. Someone else may benefit too, idk
> root=/dev/md1
Looks like the system was using an MD array (software RAID). I am not entirely sure what the proper method of cloning a RAID array would be (this URL gives you some ideas).
I can understand the clone failing to boot but I am not sure why your original won't boot though. The error means that Linux was not able to find the root file system to mount, which generally means either wrong root= option was specified or the hardware changed and it no longer is loading the driver for the disk controller.
You might want to boot a modern distro (Ubuntu/Fedora 21 etc.) using a live CD with the original disk in place and see what is the root file system and adjust the root= parameter in kernel command line accordingly or check if you need to load a disk driver. Or better yet once booted using live cd you can mount all the file systems in the original disk, create a filesystem copy (TAR/CPIO) of all data that is important and move the backup image to another location. Then you can install a modern distro on the original disk, recreate the partitions as before and copy over the FS image for each FS that you backed up.
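For the live-CD step, the inspection usually boils down to something like this (a sketch; device names will differ on your system):
mdadm --assemble --scan      # bring up the old software-RAID arrays
cat /proc/mdstat             # see which md devices came up
lsblk -f                     # filesystems and UUIDs on those md devices
mount /dev/md1 /mnt          # mount what you believe is root
cat /mnt/etc/fstab           # confirm which device really is "/"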
The OS seems to be Fedora Core 6, which is ancient, but you can also try your luck asking for help on Fedora forums or mailing lists.
Ha, hilarious. Users are the worst...
Slightly side tangent, not sure what your environment is, but you may find this interesting.
http://serverfault.com/questions/644741/how-to-soft-restart-windows-server-10-technical-preview
> Having the page file disabled is not necessarily a bad thing
Of course not, until you run out of physical ram. Then the OS could become terribly slow, or do worse things like kill processes or crash.
edit: also having no page file is simply foolish. There are no downsides to having one: only benefits. I suggest you do some more research before going out and telling people how to manage memory: http://serverfault.com/a/23684
Devops would probably be treated as a meta tag on serverfault.
Tags get added when people use them. Do you have a question for serverfault that needs the devops tag added? If you don't have enough rep yet, and the question is a good fit on the site, and actually needs the devops tag applied I'll add it to the question for you.
Sometimes just using dd is the best option..
I would do a low-level bad-blocks check and a filesystem integrity check to be safe before-hand.
Any moderately priced CPU will do. Any moderately priced graphics card will do. You seem to be focusing on the wrong thing. She doesn't need a gaming rig, she needs a STABLE computer.
This list is not ordered:
Edit: I'm getting old. Double 8800GT's aren't really a gamers rig any more :-\ I assume you got them cheap somewhere. They're fine.