This is how I lost 15 kg in 3.5 months without micro-managing, calorie counting, or doing anything drastic. I just lived, ate, enjoyed life, and reconnected with people from all periods of my life again. I hope it helps.
- Removed and stopped buying munchies, snacks and any calorie containing drinks.
- Implemented two 30-minute walks a day, and sometimes another hour after work.
Listened to podcasts, books, or videos, or called family/friends during these walks. It was not lost time at all.
- Stopped taking warm showers and stopped wearing a jacket outside, no matter the weather. Some days I could endure 30 mins of -2C; some days I couldn't endure an hour of 15C. It didn't matter. Now I enjoy the cold. Everything is too hot.
- Did some fasting in between, like not eating for a full day, on the days I didn't feel like eating.
- Took breaks from screens after moments of conflict, hate, anger, sadness, or happiness during work hours, to process them.
- Every time I felt hungry, I drank a cup of water first and waited 15 mins to see whether I was thirsty or really hungry. I ate if I still felt hungry.
The regimen above, and adherence to it, gave me a caloric deficit of 7,000-9,000 kcal per week, depending on the fasting days. It gave me a lot of pleasure and I felt better.
- Slept as much as I could. The more I slept, the less tired I got, and everything got better. It was the real medicine for me. Everything above was just a means to an end.
P.S.: Just find a [TDEE calculator](https://www.omnicalculator.com/health/tdee) and use Mifflin-St Jeor to calculate your average, if you want to build a deficit target.
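For reference, the Mifflin-St Jeor estimate is simple enough to run by hand; a minimal sketch (the weight/height/age values here are made up):

    # BMR = 10*weight(kg) + 6.25*height(cm) - 5*age + s   (s: +5 men, -161 women)
    # TDEE = BMR * activity factor (~1.2 sedentary ... ~1.9 very active)
    awk -v w=85 -v h=180 -v a=35 -v s=5 \
        'BEGIN { bmr = 10*w + 6.25*h - 5*a + s; printf "BMR %.0f, sedentary TDEE %.0f\n", bmr, bmr*1.2 }'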
Usually I get trainees to install a LAMP stack.
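On a Debian/Ubuntu box, for example, the core of the exercise might look something like this (a sketch; package names vary by distro and release):

    sudo apt-get update
    sudo apt-get install apache2 mariadb-server php libapache2-mod-php php-mysql
    # then wire them together: a vhost, a database, and a test page that queries it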
It's a good way to get them to understand certain concepts, utilities, services and packages. It also lets them explore how it all works together to provide a solution, and see how bash and the CLI work.
It's not too difficult, and can be quite fun.
I'm also suggesting this, as this was my very first project that got me into Linux and even into a Junior Sysadmin job!
You don't. Things that you use regularly automatically get cached by your brain; for the rest, I have an extensive set of tagged, searchable bookmarks (https://pinboard.in is great) and regularly search "Linux <thing-i-want-to-do>".
For setting the box up securely, Lynis will give you a lot of hints: https://cisofy.com/lynis/ . It scans the current config and tells you what you must do to secure the box. It might even be in the standard repositories (sudo apt-get install lynis).
Wow, this post has downvotes. Incredible.
Thanks for your efforts on this front, and I look forward to the OpenSSL TLS 1.3 audit.
Any interest in performing an audit on KeePassXC (github, site) and the latest KDBX format? It would be great to have an open audit of an open source, cross-platform password manager.
So, just a disclaimer, I work on an open source monitoring software called Prometheus. But I've been a Linux sysadmin type for 20+ years.
There are a lot of monitoring solutions out there, with many styles and many categories.
Over the last 13 years I've mostly focused on metrics-based monitoring solutions. SNMP is a style of metrics-based monitoring, as is Prometheus.
The key advantage of metrics-based solutions is that we can get both performance information, and alerting, out of the same set of data. Systems that are "check-based", like Nagios, are only able to poke at your systems with a very coarse stick by comparison.
Now, syslog, or any other log event driven monitoring, tends to be extremely useful for debugging. But it is much more difficult to use for creating alerts. So you end up having to take event streams and turn them into metrics anyway.
TL;DR:
Metrics-based systems (SNMP, Prometheus) can be used to generate alerts that tell you where, and in what timeframe, to look in your event logs (syslog, apache, etc).
EDIT: FYI, Prometheus has a very nice SNMP agent/converter that allows you to ingest SNMP so that you can visualize and write alerts for your SNMP devices.
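To make the metrics-to-alerts point concrete, here's a minimal sketch of a Prometheus alerting rule (the canonical up == 0 example; the duration is illustrative):

    groups:
      - name: example
        rules:
          - alert: InstanceDown
            expr: up == 0
            for: 5m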
I share an office with one coworker (my teammate/supervisor). We both prefer lowered lighting and dark OS themes to minimize eyestrain.
My main PC is an HP somethingorother laptop-parts-in-desktop-formfactor provided by helpdesk, with a Core i7, SSD and 16GB of RAM. Dual monitors (1 portrait, 1 landscape) with an ergotron mount.
I run CentOS 7, with a Windows 7 VM. (Our department gets to run whatever OS and software we want, but get only best-effort helpdesk for it.)
Major tools:
Good headphones are essential. My travel pair that I take to the office is a set of Jabra Move wireless headphones. I also like Audio Technica and AKG.
I was provided a Logitech wireless keyboard and mouse by helpdesk. I opted to bring in a mechanical keyboard from my collection (using a quieter type of switch such as Cherry Red or Cherry Brown).
I also have a Macbook Air 4GB for on-call and the occasional work-from-home (I haven't met an extremely lightweight, durable laptop that runs Linux well). I run the same tools (most installed via Homebrew), subbing iTerm for urxvt and adding Alfred.
I rock an Osprey Metron backpack, which pulls triple duty as a work laptop bag, a motorcycling backpack and a 24 hour emergency bag. (/r/edc, /r/ultralight). You'll usually find that and my motorcycle helmet on top of one of the guest chairs :)
EDIT: Oh, I forgot that you're never more than 15 feet from a mini-fridge on this floor. My workplace keeps them stocked with energy drinks, soda and flavored water, and I stock mine with sandwich materials as well.
>I know how to use iptables to block ip addresses, but I want something more automated.
Fail2Ban will automatically ban IPs after repeated failures.
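A minimal sketch of enabling its SSH jail on a stock install (thresholds are illustrative):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    maxretry = 5
    findtime = 600
    bantime  = 3600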
This book is what I’ve recommended to friends of mine in the same boat. Very easy to read and reference for future use.
The state is used to prevent certain nefarious actions. For example, the first rule you have will let through an ACK packet for a connection that doesn't exist. In the second case, however, that ACK packet gets blocked because the firewall hasn't seen the SYN (and the handshake that follows) open a connection.
From a performance perspective, this means that everything EXCEPT a SYN packet is allowed through based on the first rule in your iptables filter. The fewer rules a packet has to traverse, the faster it gets in. http://serverfault.com/questions/578730/when-using-iptables-firewall-rules-why-assert-new-state-on-all-allowed-ports
From a security perspective, there are a few TCP hacks that can cause a service to fall over - things like sending TCP packets with various inappropriate control bits set. Most OSes should have this fixed by now, but if not...
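A minimal sketch of the stateful pattern being described, with SSH as the example service:

    # accept packets belonging to connections the firewall has already seen
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # only NEW connection attempts (the SYN) must match per-service rules
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
    # everything else, including stray ACKs with no known connection, is dropped
    iptables -A INPUT -j DROP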
GNU Screen is a tool which works with a terminal session to allow users to resume a session after they have disconnected.
https://www.linode.com/docs/networking/ssh/using-gnu-screen-to-manage-persistent-terminal-sessions/
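Basic usage, as a sketch:

    screen -S work    # start a named session
    # ...work, then detach with Ctrl-a d (or just lose the connection)
    screen -ls        # list sessions after reconnecting
    screen -r work    # reattach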
Keepass can use a remote database and it has a plugin for smart cards. Sounds like what you are looking for.
Amazon has a good article on setting up bastion hosts. You may also want to look into auditd. https://aws.amazon.com/blogs/security/how-to-record-ssh-sessions-established-through-a-bastion-host/
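On the auditd side, a minimal sketch of recording every command run on the box (the key name is arbitrary):

    auditctl -a always,exit -F arch=b64 -S execve -k user-cmds
    ausearch -k user-cmds | less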
I think you're thinking of Teleport which unfortunately has been neutered of all enterprise features in the OSS edition (LDAP, SSO, etc.).
These kinds of posts really piss me off. Not because it's someone asking for help, but because they don't post ANY information about what they're running.
You didn't even specify what distro, much less kernel versions, 64 vs. 32 bit OS, nothing.
Just "Hey guys, I have a problem."
So you wanted advice? Here's my advice:
Reinstall VNC. Go to google and type in "Installing VNC on CentOS" or "Installing VNC on Ubuntu" or whatever you're running.
Make sure your software is up to date. Run the latest kernel. Run the most recent LAN and video drivers.
And THEN come back here if you're still having trouble and provide the following information:
And here are links to the more popular distro instructions. Follow these:
http://wiki.centos.org/HowTos/VNC-Server
https://www.digitalocean.com/community/articles/how-to-setup-vnc-for-ubuntu-12
LFS = Linux From Scratch
Taught me a LOT, and if you want to build more than a basic system it has extended sections to build all sorts of things for Linux, from an office suite to various other packages.
Note that After= does not imply that the predecessor service is running, just the order in which they are started. If the first service is a prerequisite to the second starting in a healthy state, use both After= and Requires=.
E.g. most LAMP stack applications like WordPress require MySQL to be started before PHP can serve requests. In this case, modify apache/httpd.service to contain the following:

    [Unit]
    Requires=mysql.service
    After=mysql.service
If you are modifying vendor-supplied service files, your changes may be lost when the package is updated. Consider using systemd drop-in files to override the vendor file rather than editing it directly; your changes will not be affected by updates.
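A sketch of the drop-in approach:

    sudo systemctl edit httpd.service
    # opens an editor on /etc/systemd/system/httpd.service.d/override.conf; add:
    [Unit]
    Requires=mysql.service
    After=mysql.service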
Neither is more secure than the other.
In order to exploit your machine, people need access, either physical or through the network. Whatever you open up to the internet determines your risk. Since it's a desktop/workstation, I see absolutely no reason to open any incoming ports at all.
I don’t know much about Fedora, but Debian has a long-standing reputation for being very secure when it comes to their choice of software versions. The downside is that the packages are somewhat old when a new release is about to hit. Debian usually also patches very fast.
Debian also has tools like the excellent debsecan, and an official manual on securing it.
As for SELinux vs AppArmor, I think SELinux is more capable, but it is very, very hard to get configured correctly, meaning you’ll probably end up with a policy with more holes than you intended. AppArmor is simpler, taking a file-based approach, but because of its simplicity, it’s actually easier to configure just right. Both are great, and either one is adequate for most use cases.
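To see which module is active and in what mode on a given box, a quick sketch:

    getenforce      # SELinux: Enforcing / Permissive / Disabled
    sudo aa-status  # AppArmor: loaded profiles and their modes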
Linux Insides gives a lot of low-level detail about the kernel - more than any other resource I have seen, except maybe the kernel documentation.
I would suggest getting a VPS from Digital Ocean ($5 a month) and running through some of the tutorials on their site. They have some pretty good ones. If you don't understand something, look it up and try to learn everything you're doing.
I found mu to be pretty good for interactive use.
eg:
    mu find --maildir=~/mail s:"important" --fields=l | xargs mv -t ~/mail/important/cur/
I've found that moving off port 22 already drops scans by 99.9%, so on a public server there's really no reason to not do that.
Then do 1) fail2ban or similar, and 2) port knocking
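Moving the port is a two-line change, sketched here (2222 is illustrative; on SELinux distros you also have to tell SELinux about the new port):

    # /etc/ssh/sshd_config
    Port 2222

    # then, on SELinux systems:
    semanage port -a -t ssh_port_t -p tcp 2222
    systemctl restart sshd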
Shameless plug for the monitoring system I work on: Prometheus
Free, open source, scales from my Raspberry Pi to as big as you could imagine.
It's also metrics-based, which means instead of checks, you can gather all the data from a service endpoint, visualize (Grafana) and alert on it.
The traditional answer is something like /u/redteamalphamale mentions.
Newer answers would be something like osquery (https://osquery.io/) or netdata, as per /u/derprondo.
The "new hotness" would be something like prometheus(https://prometheus.io/) paired with grafana.
Personally, having used all of them, Prometheus is where I've happily landed. It is a pull model, can do metrics from machines all the way up into applications, is pretty awesome as a complete metrics solution, and it does alerting etc. as well.
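As a taste of the pull model, a minimal prometheus.yml fragment scraping a node_exporter (target address illustrative):

    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ['localhost:9100']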
Postfix is not set up by default to permit relaying. Open relays are why there is so much spam on the internet, and pretty much everyone involved in good email hates them, so it's never default behavior. However, some setup scripts for Postfix (depending on your distro) may ask you questions which, if answered wrongly, would result in this behavior.
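The settings worth eyeballing live in main.cf; on a reasonably recent Postfix the safe shape looks roughly like this:

    smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, defer_unauth_destination
    mynetworks = 127.0.0.0/8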
Glad you sought help. You messed up, but we all do. You'll get over it and hopefully learn. Might be worth giving your server a good long hard look in terms of general security too. (Just going on the basis that if one thing was open, something else might be and your server might be compromised - or vulnerable)
Downside is your IP address is almost certainly listed on a whole bunch of RBLs. But that's okay, we don't want you sending email again until you've learned a bit more about how to run a mail server - if you need to. If you don't need to, then don't. Same with every other service on an internet facing server; everything that's enabled is another security weakness.
When you feel you have learned enough, and have reconfigured your server and turned it on again, use something like https://mxtoolbox.com/diagnostic.aspx to test it to see if it's exploitable. That's a fairly basic test, but it's a start.
Adminning a server well is HARD. Making one secure is also hard. None of us get it right all the time, so feel a moment of shame, but move on and keep asking questions. It's how we all improve.
have you looked at Foreman?
Generally speaking, the first term that you're looking for is 'provisioning'. Once you have that covered your next goal is config management. That's where Puppet/Chef/Ansible/etc come into play.
As a side note, how tied are you to KVM? Do you need to virtualize non-Linux OSes, or could you use containers such as the LXC-based Docker or OpenVZ-based Proxmox? In either case provisioning becomes easier because you can control the guest's network configuration from the host, which means you're cutting out a large chunk of what's probably your biggest annoyance with VMs.
The best system I've found for managing images is the Fog Project.
Basically, there's a server that outputs a menu to the target client on boot via DHCP/PXE.
That menu allows you to create an image of that specific client (or groups of similar clients), which is then uploaded back to the server.
The image can thereafter be downloaded to that client, or any similar client.
You can modify the software installed on the PC to your heart's delight but if you need to restore it to its original condition, you simply select an option from the PXE boot menu.
This allows you to create several images with different configurations for your PCs.
Restoring an image usually takes less time than making a cup of coffee.
Of course, for a Linux installation, you'd mainly be concerned with restoring the root partition (and other optional partitions).
One can get quite fancy with this tool. It's entirely possible to have different distros saved as images.
You can manage thin client images or fat client images.
This is not the only way to do this, but it does make things easy to manage.
I have an oVirt test lab here that works quite nicely. I would recommend you use CentOS 6.5 as your base OS; it's what I had the least pain with. One thing that I think you'll like is that it comes with a 'User Portal', so you can delegate control of the VM(s) to end users (in my case developers, but it could easily be customers).
I would say first priority is getting backups/DR squared away and after that getting the old machines virtualized.
Nagios is a good idea for monitoring, there are also other options out there should you go that route. I would not recommend webmin, as it's had some security issues.
Ultimately, if there are so many places where outdated hardware/software/whatever are going to cause problems for your company in terms of downtime, budget is the only thing that will fix this. I would try to put together some sort of documentation along the lines of $x for new servers/hardware/etc vs $y for downtime.
I typically use Google Reader, so if you'd like, you can download my OPML file. I have about 162 subscriptions. I hope this is what you're looking for! All you have to do is log into Google Reader -> Reader settings -> Import/Export tab, and in there you can import the OPML file.
EDIT: I just want to let you know that some of the RSS feeds cover UNIX/Linux sysadmin stuff, but a lot of it is just technology in general (Electrical/Computer Engineering projects as well!)
> Arch. Actually this is something that was "solved" by downgrading the package
you should also be posting in /r/archlinux. Especially because the package maintainers hang out there periodically.
Alternatively, the #archlinux Freenode IRC is very active.
Is this founded on your being more comfortable troubleshooting Linux mail servers, or on an assumption that it's got to be easier? Exchange isn't really a bad option for what you're using it for.
I'd go with Zimbra, though; it's the least-disappointing thing I've ever migrated anyone off Exchange onto.
https://mailinabox.email/ is great if all you want is the email bit, but if you've got a working Exchange server then chances are people are using calendars and contacts and all the other groupware good stuff.
FreeIPA can do some of the AD stuff (like auth), but not the GPOs, which you don't need. If you run a fileserver, you'll probably want some sort of centralised auth.
RHCSA as of last week :-). As others have said, I recommend Linuxacademy.com, Jang's RHEL 7 book, and this video series: https://www.safaribooksonline.com/library/view/red-hat-certified/9780134193281/
You can get a free Safari Books trial and watch it all, but I STRONGLY recommend you wait until you have finished the Linux Academy labs AND the RHCSA section of Jang's book, as the video series is not very in-depth - just a review.
Also, I have created these flashcards that may help, I set them up backwards, so choose "see term first" when you use them. They helped me immensely. Just remember, for most of them, I set all paths to the absolute paths, not relative. https://quizlet.com/pathfinder2210/folders/rhcsa-cards
It looks like you're not allowed to use mirror.centos.org unless you want to run a public mirror.
"This service is intended for the sole use of the CentOS worldwide mirror network to synchronize mirrors.
Unless you are running or intending to run a listed public CentOS mirror use a mirror listed at http://www.CentOS.org/modules/tinycontent/index.php?id=13"
You could implement freeIPA which incorporates Kerberos authentication. It might help with your situation. I have been using it in my lab environment and it is working quite well.
If this is all a bit new to you, I'd suggest going with a semi-managed service that can track Linux compliance.
These guys have the tools (OSSEC) to monitor compliance, as well as enact the security controls you are looking for: https://wazuh.com/
You can get a certificate issued for the new server by doing a DNS challenge instead of an HTTP challenge. You can do this manually instead of automatically if you don’t want to set up the plugins and automation for your DNS provider - all you will need to do is create a TXT record in DNS. Documentation: https://certbot.eff.org/docs/using.html#manual
Doing the DNS challenge means that you’ll be able to get a certificate in place on the new server before you have moved the A record for the domain, and you can have both servers up and running at the same time for a zero downtime migration.
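A sketch of the manual flow; certbot prints the TXT record to create and waits while you do it:

    certbot certonly --manual --preferred-challenges dns -d example.com -d www.example.com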
Steps for a relatively easy migration
This way you had zero downtime and were just in read-only mode during the migration. Because the TLS certificate was pre-staged, you were able to work through any issues on the new server before actually doing the migration.
You could also throw a test migration in there while the old server is still serving the site. No read-only needed: set up a new domain or subdomain and do the migration. Work through any server configuration issues, and then re-do the migration when you’re ready. That is what I would probably do, personally.
Linux will use all the RAM you give it, if it can, for file cache. Notice that cache is 7.1GB in your first screenshot and 28.5GB in your second. You might want to play around with vm.vfs_cache_pressure to make the kernel more aggressive about dropping cache entries.
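A sketch of inspecting and raising it (the value is illustrative; anything above the default of 100 makes the kernel reclaim dentry/inode cache more aggressively):

    sysctl vm.vfs_cache_pressure
    sudo sysctl -w vm.vfs_cache_pressure=200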
Everything that has the smtp port open gets a lot of attempts to relay spam. If you don't intend to run a mail server, just disable postfix. If you do run postfix on purpose, please make sure to secure it.
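On a systemd distro, disabling it is one command:

    sudo systemctl disable --now postfix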
Seconded for PuTTY on Windows. Although I usually have multiple sessions to multiple hosts open at once (anywhere from 10-12), and switching between PuTTY windows was always a pain in the ass, which is why I ended up installing SuperPutty, which basically takes all the PuTTY windows and puts them into tabs in a single window. For me it gets around the limit in the free version of MobaXterm on how many sessions can be imported from PuTTY.
Do you have to use a Live CD?
Assuming your hardware and environment support it, this seems like a good use case for network installation with a preseed file to tell the installer what packages you need and what actions to take after installation. There are tons of tutorials on how to set up a basic DHCP + TFTP server for PXE. You can also use tools like Cobbler or Foreman to manage it. Cobbler can also create an ISO including the install media and your preseed file for one-touch installs.
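To give a flavor, a couple of lines from a preseed file might look like this (package list and command are illustrative):

    # Debian/Ubuntu installer preseed fragment
    d-i pkgsel/include string openssh-server vim
    d-i preseed/late_command string in-target systemctl enable ssh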
If the machines have remote management (i.e. IPMI, Intel AMT, or similar), you might also look into MaaS to manage PXE installations on many nodes. This is more useful if you manage the lifecycle of the machine post-imaging as well.
Seconding BackupPC; it's an old classic made of Perl, but it is not only agentless, it also dedupes files in a central store and allows user-controlled restores if you wish.
It can also handle laptops and machines that may not always be on, and dynamic schedules.
It is also easy to customize what every remote command is/does, so hacking in things like ssh force_command for passwordless authentication to a backup operator account can be done with relative ease.
It requires some elbow grease and a large measure of Knowing What You Are Doing, but the benefits are totally worth it, IMO.
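As a sketch of the force_command idea mentioned above - the wrapper script path here is hypothetical, and its job would be to allow only the exact backup command the server is expected to run:

    # client-side ~backup/.ssh/authorized_keys (all one line)
    command="/usr/local/bin/backuppc-wrapper",no-pty,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... backuppc@server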
I would, but as this complete set is combined from a Crontab/PHP/SSH/JS/CSS setup, I'd rather not. I'll give sources for the CSS/JS tweaks used for the effects, though:
The EKG effect is based on this git repo: http://evandrewry.github.io/svg-electrocardiogram/ (with embedded SVG, unlike in the repo)
The glowing CRT-like text is based on this codepen: https://codepen.io/somethingformed/pen/raWJXV (also with a little tweaking)
The background graphics and the rest of the CSS are my own handiwork, as is the PHP code. To put it simply, it's PHP pulling information from the shell - /proc/uptime, the free command, testing ports with PHP fsockopen, mysqli/PDO connects, and so on.
Lots of bubble gum and rubber bands tied together :-)
We're not using Chef (we use SaltStack), but this might be helpful as you form your workflow. Basically, we spin up infrastructure on our laptops using Vagrant virtual machines to do any testing first, before committing that code and deploying it in production. In our case, we can just run a masterless call, but if you needed to have another VM that is also a local master, you certainly could. Hypothetically, we could spin up "one of everything" and make sure nothing explodes during an apply.
This workflow saves us 99% of those mistakes we used to make testing live, be they syntax errors or just plain wrong things to do. Also, since all the work is done locally, we never step on each other's toes until merging (the code we know should already work) into production and verifying.
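In our masterless case, the test loop is essentially the following (assuming salt-minion is installed inside the VM; on older Salt the call is state.highstate):

    vagrant up
    vagrant ssh -c 'sudo salt-call --local state.apply'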
Edit: I wrote an awful blog post about it.
I think I have an answer now about whether to expect those kinds of scans - the answer is yes.
Personally, I'm not a huge fan of security through obscurity, but if you're annoyed by the scans you can also add port knocking to further hide your SSH port. DigitalOcean has a [tutorial] on the subject.
In fact, if you're able, I'd combine that with exclusively using an IPv6 address for SSH; IPv6's address space is so large that scanning it is impractical.
You can't install everything; many packages are mutually exclusive.
Anyway, I suggest you look at the mirror sizes.
Right now, all of 'amd64' across probably all supported releases (oldstable, stable, testing) is about 286GB of compressed packages. It would obviously take more space uncompressed, but then you also wouldn't have all releases.
Pulled this from here. Assuming you're talking about ISC DHCPd, you need to define classes; then you can assign pools to certain classes.
class "kvm" { match if binary-to-ascii(16,8,":",substring(hardware, 1, 2)) = "56:11"; }
class "local" { match if binary-to-ascii(16,8,":",substring(hardware, 1, 2)) = "52:54"; }
host meme { fixed-address 10.1.0.254; }
host server247 { hardware ethernet 52:54:00:2f:ea:07; fixed-address 10.1.0.247; }
subnet 10.1.0.224 netmask 255.255.255.224 { option routers 10.1.0.225; pool { allow members of "kvm"; range 10.1.0.226 10.1.0.235; } pool { allow members of "local"; range 10.1.0.236 10.1.0.240; } pool { # Don't use this pool. It is really just a range to reserve # for fixed addresses defined per host, above. allow known-clients; range 10.1.0.241 10.1.0.253; } }
On top of the CTRL-R suggestions, you can also try https://github.com/junegunn/fzf (fuzzy search for reverse history). That plus the vim plugin makes it a breeze to switch files or open in new window/tab
Red Hat Enterprise Linux Cookbook looks pretty good. RHCSA/RHCE Exam Guide by Michael Jang is also available on Safari Books Online now as well.
Certs? :/ I highly recommend pursuing a bachelors degree from RTFM university.
Linux From Scratch is a great way to learn about how Linux works and is put together. http://www.linuxfromscratch.org/
This will show the git status, along with changing the prompt based on the user (root vs pleb). It also checks the status of the previous command to let you know how it feels.
One of the odd things I've had to do is add `gpgconf --launch gpg-agent` to the prompt so that when I kill gpg-agent it'll respawn. I sometimes have to kill it because my gpg card is sad (already bugged upstream).
https://git-scm.com/book/en/v2/Git-in-Other-Environments-Git-in-Bash for the git stuff
    # git command prompt
    GIT_PS1_SHOWCOLORHINTS="enabled"
    GIT_PS1_SHOWDIRTYSTATE="enabled"
    GIT_PS1_SHOWSTASHSTATE="enabled"
    GIT_PS1_SHOWUPSTREAM="enabled"
    GIT_PS1_SHOWUNTRACKEDFILES="enabled"
    source /home/"${USER}"/.bin/git-prompt.sh

    ps1_prompt() {
        local ps1_exit=$?
        local my_user=$(id -u)

        local time="\[\033[m\] \[\033[1;35m\]\t\[\033[m\] "
        if [ $ps1_exit -eq 0 ]; then
            local ps1_status='\[\e[0;32m\]:)\[\e[0m\]'
        else
            local ps1_status='\[\033[1;31m\]:(\[\033[0m\]'
        fi
        if [ ${my_user} -eq 0 ]; then
            local userhost="\[\e[1;31m\]\u\[\e[0m\]@\[\e[0;36m\]\h\[\033[01;34m\] "
        else
            local userhost="\[\e[1;32m\]\u\[\e[0m\]@\[\e[0;36m\]\h\[\033[01;34m\] "
        fi

        PS1="${time}${userhost}\w\[\033[00m\]\[\033[1;31m\] $(__git_ps1 "(%s)")\[\033[00m\]${ps1_status} "
    }
    # rebuild the prompt before every command
    PROMPT_COMMAND=ps1_prompt
have a pic.
Update: Thanks to a little help, the flow of illegitimate email has been halted. I was able to use https://mxtoolbox.com/diagnostic.aspx to verify that it's set up correctly now.
Unfortunately, I now have to personally answer two questions: 1. How did this happen after two years of running smoothly? 2. What's the long-term fallout?
I'm not too confident that I'm going to like the answers to either of those, but I should be able to figure it out on my own from here.
Thanks for all the help!
Yes, basically. From their FAQ:

>We provide not only that but also pro-active support with installation and security issues, we reach out when we do feature planning to ensure your needs are served and we support Nextcloud long after you would otherwise be forced to upgrade for security, performance and stability reasons.
You can boot into recovery mode yourself via the control panel in DO.
Also, this is why I put ALL of the app-specific stuff (including the db) on block volumes now.
The first step is to move the mysql datadir, /var/log, and any of the vhosts and SSL certs off to /mnt/vol-blah-01. That lets me essentially unmount and remount the volume to a new "droplet" and be up and running quickly.
I symlink the following dirs:
I also move the mysql datadir to /mnt/volume-sfo2-01/mysql; you can use this guide on how to do the mysql datadir move. All of the web code lives at /mnt/volume-sfo2-01/domains/<domain>/www, and all of the nginx-specific logs (error.log, access.log) are at /mnt/volume-sfo2-01/domains/<domain>/logs.
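The datadir move itself is short. A sketch (the config file location varies by distro, and on Ubuntu the mysqld AppArmor profile needs the new path too):

    systemctl stop mysql
    rsync -a /var/lib/mysql/ /mnt/volume-sfo2-01/mysql/
    # edit the server config so it reads:  datadir = /mnt/volume-sfo2-01/mysql
    systemctl start mysql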
The end goal is basically I should be able to build a new droplet from an image, mount a vol and it starts working nearly automatically (once you restart services, etc).
Load average is one of the most bunk, useless metrics you can look at.
If you really want to know about load on the system, take a look at Pressure Stall Information from /proc/pressure.
(Hint: The node_exporter collects this by default)
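The files are trivially readable (values here are illustrative):

    $ cat /proc/pressure/cpu
    some avg10=2.04 avg60=0.75 avg300=0.40 total=157622151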
If your problem is that CUPS can't connect to the printer - specialized USB driver or something - you can use rawprintserver to have the Windows box present the printer as a JetDirect on port 9100.
Unless you're planning on expanding in the future, I'd suggest opting for one of the big-name mail hosting services for a single account (Google, Microsoft, etc).
Beyond that, I'm a huge fan of mailcow.email - it's probably a bit more resource-heavy than you're looking for, but it makes everything an absolute breeze to manage.
I used to maintain a mail stack (non-mailcow) for my own single account/domain uses and it eventually became too much of a chore for me to continue with.
Edit: I see you're looking for hosting suggestions more specifically (I can't read) - On that front, any of the larger name VPS/Cloud server providers would likely meet your requirements at a reasonable cost.
Correct! I haven't set one up myself, unfortunately. There seems to be a decent guide at the link below. I do a lot of small business consulting and have set up some Samba 4 Active Directory Domain Controllers with great success, but still haven't had the chance to try a Linux LDAP server.
The best I've found for such forensics: atop. Not only can you look into the past like with sar, but you can see it per process, and then the cherry on top: it's not a simple snapshot - it also logs processes which exited between snapshots, so you get a perfect picture of which, when, and how much. I have it on all my servers. A good article about it: http://lwn.net/Articles/387202/
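Replaying history from its logs looks like this (the filename pattern is atop's default; the date and time window are illustrative):

    atop -r /var/log/atop/atop_20190101 -b 10:00 -e 11:00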
I really like the idea of CopperheadOS, but I really don't want to buy a new device now that they'll only support for ~1yr (2yr for security updates). That comes across as being incredibly wasteful.
I currently use CyanogenMod on my 2 year old MotoX gen2, and will likely be able to keep using this device for at least another year (or more!) since the CM folks aren't dropping support for it yet.
Compile your own from source - http://www.linuxfromscratch.org/blfs/view/svn/server/sqlite.html - being careful not to install it anywhere CentOS expects things.
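E.g., keeping it out of the system paths with a prefix (the path is illustrative):

    ./configure --prefix=/opt/sqlite3
    make && sudo make install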
Or bring up a quick Debian Vagrant VM, since recent Debian seems to have 3.8.x there....
I'm starting Linux From Scratch as well as watching videos for the RHCA/RHCE exam. I'd do LFS, then once you learn how to build/manage a single machine I'd start working on systems and services like Chef/Puppet/Ansible, Samba, DNS, DHCP, LDAP, Apache/Nginx, etc...
After you learn how to build/manage a single system, refer to this comment on becoming a Linux sysadmin: http://www.reddit.com/r/linuxadmin/comments/2s924h/how_did_you_get_your_start/cnnw1ma
Have you looked into GLPI? It integrates with OCS. http://www.ocsinventory-ng.org/en/about/features/ocsng-glpi.html
I have a buddy who runs IT for a county sheriff office and uses it to track everything.
Use one of the publicly available mirrors that also support rsync.
The one that I used to run for the VT Computer Science Department supports rsync, for instance.
Don't use wget. Rsync is the proper tool for this. It handles updates, changed files, atomic writes, etc perfectly.
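The pull itself is a one-liner; the mirror hostname and module path here are made up:

    rsync -avSH --delete rsync://mirror.example.org/centos/7/ /srv/mirror/centos/7/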
Of all of the terminals I've tried in Windows, MobaXTerm is the one that I was the least unhappy with.
It has X11 built in (not that I use it). Sftp. Transparency. Tabs. All those goodies.
Definitely worth a shot if Putty, kitty, etc don't match up to your normal "linux/mac terminal" standards.
I think it's the only one that supports mosh, too.
Normally when I run multiple services on the same machine I would consider adding a secondary IP on the primary interface instead of using two interfaces. When I have two interfaces it is specifically to provide hardware redundancy in case one link, adapter, port, or switch goes down (or someone accidentally unplugs it).
So what I'm getting at is that if you only want to consolidate services it would make sense to just shut down eth1 and add the secondary IP to eth0.
Now... if you really want to use both interfaces on your LAN, I think the right solution here is to use the Linux bonding driver. You create a bond interface and add both eth0 and eth1 as slaves -- remove the IP addresses from both and add them to the bond interface. I don't know how familiar you are with networking, but the premise is that you can have your server appear to your unmanaged switch as just another switch with the host on the other side, if that makes sense. You won't get a true 2Gbit from two 1Gbit interfaces without a managed switch with 802.3ad support (since a single stream can only go over one interface), but you can for most purposes achieve what you want. The only trick here is selecting the correct bond mode.
Based on my limited understanding of your setup, you probably want balance-rr, balance-xor, balance-tlb, or balance-alb.
IIRC the setup for this is to add a few lines to /etc/modprobe.conf, remove the IP addressing from /etc/sysconfig/network-scripts/ifcfg-eth{0,1}, add "MASTER=bond0" and "SLAVE=yes" to the aforementioned eth interfaces, then create /etc/sysconfig/network-scripts/ifcfg-bond0 with the appropriate IPs and settings. I can probably provide examples if you can't find them yourself and this is how you want to go.
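A minimal sketch of what those files might look like (addresses and bond mode are illustrative):

    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=balance-alb miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (eth1 identical apart from DEVICE)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none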
Look into Open Broadcaster Software - they have a Linux client as well. From there, if you want to host all the parts yourself, you would set up nginx as an RTMP relay/buffer proxy. After that, you need to set up a Flash/Java/HTML5 player pointed at the nginx RTMP server. I personally never went that far (I just collect all my different streams on one central server and have another OBS switch between those for multi-computer streaming).
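The nginx side is small, assuming a build that includes the nginx-rtmp module; a minimal sketch:

    rtmp {
        server {
            listen 1935;
            application live {
                live on;
                record off;
            }
        }
    }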
> Is there a particular one that you see as being better?
I just use puppet to set everything up so doing stuff by hand doesn't really enter into it for me anymore.
I've referenced this one in the past, though. It looks decent.
That'll get you to a functional state. You can tune more after that.
Torture the hell out of nginx using this:
https://stackoverflow.com/questions/12732182/ab-load-testing
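For example (request count and concurrency are illustrative):

    # 10,000 requests, 100 at a time
    ab -n 10000 -c 100 http://localhost/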
This may not be an issue with the server's security, but with the e-mail server/network setup. Your server can be completely secure, but if you're running an open SMTP server on it, people can send whatever they want.
MX Toolbox tests may be able to tell you if your SMTP server is open to the world.
You can also try to connect to it from a computer outside your network via telnet and attempt to send e-mail.
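A manual relay test looks roughly like this (hostnames are made up); if the RCPT for a domain you don't host is accepted, you're an open relay:

    telnet mail.example.com 25
    HELO probe.example.org
    MAIL FROM:<probe@example.org>
    RCPT TO:<victim@some-other-domain.example>
    QUIT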
Also, there is always the possibility that the e-mails are originating from another server and there's some type of phishing/spoofing going on.
It's probably not possible to preserve passwords, unless you had "store in reversible format" (or whatever it's called) checked. Check out their migration page; you will be able to do it via LDAP: http://www.freeipa.org/page/Howto/Migration
>we don't use any directory services ? is that bad ?
It's not necessarily "bad" but using one would only benefit you. Especially as you grow. Doing it now on 145 machines is going to be significantly easier than 500+ later.
I highly recommend FreeIPA.
KVM and LXD. Give some time to learning LXD, because it is going to have really nice integration with Juju and OpenStack.
Some nice features around LXD:
- It uses images instead of the templates LXC uses.
- Unprivileged containers by default (USERNS)!!!!!! <---- this is super important.
- Live migration.
- A REST API to manage the containers and the server.
- Integration with OpenStack.
- The mailing list/IRC are really helpful.
- Consumes fewer resources than KVM for deploying Linux servers.
- Support for different storage backends.
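Day-to-day it feels like this (image and container names are illustrative):

    lxc launch ubuntu:16.04 web01   # create and start a container from an image
    lxc list
    lxc exec web01 -- bash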
You might look into using a service like CloudFlare. I've never used it personally, but it should be able to cache your static file at hops close to all your users. The devil is in the details as usual.
Like your friend, I really recommend learning a flavor of Lisp. The intro to computer science courses at my uni were all taught in Racket, an evolution of Scheme which came from Lisp. I loved using Racket and the textbook we used is available completely online for free[0]. That said, if you want to learn a more widely used Lisp, I'd recommend Clojure which runs on the JVM; there's many great books on it available.
You likely won't use Lisp in your day-to-day life but the ideas and ways of thinking you get from learning it will influence how you think about and design programs in other languages. For example, I use mostly Python and Ruby and feel learning Racket/Lisp has made me a better programmer.
[0] http://htdp.org/2003-09-26/Book/ -- it uses "beginner languages" in Racket in order to teach good practices. If you just want to dive straight into Racket, the documentation can be found here https://racket-lang.org/
Vagrant has a thing called Synced Folders. It's actually enabled by default for the main project folder. If you just need to move files between your Windows host and your Linux guest VMs, this is how you do it. If you need other folders synced, it's as easy as editing the Vagrantfile. The Vagrantfile provided to you might already have all the synced folders you need set up. Bonus: no network connections needed.
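Adding one is a single line in the Vagrantfile (the host path is illustrative):

    # Vagrantfile fragment
    Vagrant.configure("2") do |config|
      config.vm.synced_folder "./shared", "/home/vagrant/shared"
    end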
Why not just a Bastion server? I've spun up many of these and there are quite a few awesome ways to do this, here is a link to one of the best setups I've seen and used in a while. https://aws.amazon.com/blogs/security/how-to-record-ssh-sessions-established-through-a-bastion-host/
Give FreeIPA a shot, take a look at Digitalocean's docs: https://www.digitalocean.com/community/tutorials/how-to-set-up-centralized-linux-authentication-with-freeipa-on-centos-7
If you have any questions, shoot them my way.
Yep. Probably got smashed by HTTP requests at some point. You could reduce the number of connections and add fail2ban. https://www.digitalocean.com/community/tutorials/how-to-optimize-nginx-configuration
Yup, and then I would realize that "load average" is a useless metric at best, and misleading at worst. Then stick to CPU saturation and kernel PSI.
Like /u/Enoxice said, using vboxmanage storageattach is how you attach an ISO to the VM. See here for an example. You can then enable VRDE for the VM to get a GUI console by connecting over RDP.
You could also use ssh -X or Xming to redirect the display over ssh and use the VirtualBox GUI as you would normally.
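Roughly, the two commands look like this (VM and controller names are illustrative):

    # attach an ISO to the VM's storage controller
    VBoxManage storageattach "myvm" --storagectl "IDE" --port 1 --device 0 \
        --type dvddrive --medium /path/to/install.iso
    # enable the remote display server, then connect with any RDP client
    VBoxManage modifyvm "myvm" --vrde on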
I'm all-in on the RHEL/CentOS train... especially when dealing with anything that needs to run baremetal or have any interaction with hardware. It's a huge consideration for support.
One of my largest clients runs Debian, though, and it's eye-opening to see how many small things are missing compared to the engineering decisions that come out of the RHEL ecosystem. Luckily, the environment is completely virtual, so the hardware interactions are abstracted away.
But this is Linux! Anything can work with the right type of effort and sane practices. But given a choice, it's going to be RHEL/CentOS for me.
We always just add the new disk, rescan scsi, pvcreate, vgextend, and lvextend. Not sure if I've ever done it live like you want to, but I found this while googling:
>The other answers provided do not address your question; I've identified the correct command to rescan an already connected disk.

>First, identify which disk you want to rescan:

>    ls /sys/class/scsi_disk/

>In my example, I see a symlink named 0:0:0:0, so we rescan this scsi-disk:

>    echo '1' > /sys/class/scsi_disk/0:0:0:0/device/rescan

>I just extended my VMware disk as well, and had to scour other answers to find the correct command. Hopefully this will save future searchers from futile attempts.
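Once the kernel sees the new size, growing things live is roughly (device and VG/LV names illustrative):

    pvresize /dev/sdb                   # let LVM see the grown disk
    lvextend -r -l +100%FREE vg0/data   # grow the LV; -r grows the filesystem too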
Best of luck and please update if you get it!
Not Minecraft - Call of Duty 4. I made my first Call of Duty 4 server when I was 12. It's still up and running. It's currently the 3rd most popular cod4 game server on GameTracker: https://www.gametracker.com/server_info/128.199.145.90:28960/
It's for outbound mail - to get past almost all basic spam filters. That's why they point out that the hostname matters. That said, serv1.server.com is fine. If you want it to be something different, you can edit the config of your MTA.
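With Postfix, for example, that's a single setting (the hostname here is illustrative):

    postconf -e 'myhostname = mail.example.com'
    systemctl reload postfix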
Apache/nginx have nothing to do with sending or receiving mail.
You'll just want to familiarize yourself with postfix for the RHCE; don't ever waste your time on sendmail unless you hate yourself or inherit an environment that's already using it. Check out the docs in /usr/share/doc/postfix-*/ or at http://www.postfix.org/postfix-manuals.html. Read the architecture overview for what's going on in master.cf (which you shouldn't need to touch unless you're doing advanced submission or filtering), and the basic configuration doc for main.cf, which has various things you might need to toggle (trying not to give away what's on the test). Presumably you already know about /etc/aliases and how that works, but if not, it's a good thing to know for redirecting things like cron mail.
You mention sudo. That's what should be used. There are numerous tutorials online, and this is a good example.
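The shape of a sudoers entry, as a sketch (the username is hypothetical; always edit with visudo):

    # /etc/sudoers.d/alice  (edit with: visudo -f /etc/sudoers.d/alice)
    alice ALL=(ALL) ALL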
This sounds like a perfect use case for Nagios. Nagios is a monitoring framework and dashboard that has long been a de facto standard for keeping an eye on Linux systems. The majority of plugins are written in Perl, with a handful in C and Python.
There are a lot of resources online for getting Nagios setup, and DigitalOcean has some solid guides for getting up and running.
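A service definition, to give a flavor (the host name is illustrative; generic-service is the stock sample template):

    define service {
        use                 generic-service
        host_name           web01
        service_description HTTP
        check_command       check_http
    }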
Can't you just use something like LiLi or UNetbootin? They'll even download the ISO for you. I usually use one of those for USB installs; it's pretty easy and somewhat foolproof.
It always helps to state your actual problem and not hypotheticals you think are related or how to implement what you think the solution is.
Given your actual problem, you should check the sysctls ip_local_port_range and ip_local_reserved_ports...
https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
Configure those appropriately for your use case, or avoid trying to listen to ports within that range on your systems.
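A sketch (the reserved port is illustrative):

    # inspect the current ephemeral range and reserved ports
    sysctl net.ipv4.ip_local_port_range net.ipv4.ip_local_reserved_ports
    # keep the kernel from handing 8080 out as an ephemeral source port
    sysctl -w net.ipv4.ip_local_reserved_ports=8080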
You mean this one? If so, then maybe I wasn't clear. I know how to upgrade a Debian system (have done this countless times), but I have zero experience with systemd. I want to learn what changed, what's new, the systemd way of doing stuff vs the sysvinit way, etc.
At least with Debian/Ubuntu, most of /var/log/* is associated with the 'adm' group. The 'adm' group is a convenient way of letting users read these logs without su privileges.
Note that the 'adm' group is different from the 'admin' group, which grants sudo privileges.
Source:
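Granting that access is one command (the username is hypothetical; the user has to log in again to pick up the group):

    sudo usermod -aG adm alice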
Disclaimer: I use realmd+sssd, so YMMV if you don't use realmd.
Credential caching should happen out of the box.
In /etc/pam.d/common-session you want to append
    session required pam_mkhomedir.so
In /etc/ssh/sshd_config you want (newer OpenSSH also needs AuthorizedKeysCommandUser set for the command to run):

    AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
    AuthorizedKeysCommandUser nobody
You may need to tell SSSD what LDAP object to get the SSH keys from.
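That's one line in the domain section of sssd.conf; the attribute name depends on your schema (sshPublicKey is the common openssh-lpk one):

    # /etc/sssd/sssd.conf, under [domain/...]
    ldap_user_ssh_public_key = sshPublicKey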
You can optionally set up a FreeIPA server with AD trust integration
Generally my steps to configure things are:
If using sssd+realmd
    kinit
    realm --verbose join domain.org --user-principal=<hostname>/ --unattended
If using IPA client:
    ipa-client-install --mkhomedir --ssh-trust-dns --request-cert --hostname "$(hostname -s).domain.org" --enable-dns-updates
In both cases you need to remember to configure /etc/pam.d/common-session. Depending on the distro, you may need to configure the AuthorizedKeysCommand in sshd_config.
> I think some Oracle systems like having access to raw block storage as well - which I don't know is possible in a container situation... Certainly not without a lot of messing around.
See the `lxc-create(1)` -B option, which lets you pick the container's backing store: lvm, btrfs, zfs, loop, or a plain dir, among others.
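E.g., backing a container with an LVM volume (the container name, VG, and size are illustrative):

    lxc-create -n oracle-db -t download -B lvm --vgname vg0 --fssize 20G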
I'm not sure what permissions you have, because you seem to keep saying you're locked down. However, you're going to need permissions to save anything, pretty much.
However, you can dd the disk over SSH. You can then dd it onto another physical disk, or mount the image on another host and browse the files that way.
https://www.linode.com/docs/platform/disk-images/copying-a-disk-image-over-ssh/
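The basic shape of the copy (device and paths are illustrative):

    ssh root@locked-down-host "dd if=/dev/sda bs=4M" | dd of=/srv/images/host-sda.img bs=4M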
I recently setup netdata on a couple servers in my homelab to test it out. I haven't spent much time with it yet but just did a scroll through and it has CPU usage breakdowns for process, applications, systemd processes, etc. This may be what you are looking for.
It's possible that they're just blocking all HTTP proxies and reporting the error as "out of country". It's also possible they're looking at the X-Forwarded-For: header, which reports a 192.168.x.x IP, which they're considering out of country.
You should try removing that header and see if that helps.
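If the proxy in question happens to be nginx, for example, clearing the header is a single directive:

    proxy_set_header X-Forwarded-For "";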
It's best to have offsite backups if you can, rather than relying on a USB hard drive. We successfully use duplicity to back everything up to S3, but it could get quite expensive for large amounts of data.
Syncthing. It's open source, a single-file executable, has encrypted transport, and is multi-platform. You can set up direct URLs to the servers and not use a discovery server (or host your own) to increase privacy if desired.
You're looking for VM Import / Export, check it out at:
Thunderbird is also amazingly slow at bulk email deletions. I don't know what it's doing, as I've not had time to look at our server logs, but deleting a couple of hundred emails in one hit takes forever. By comparison, Roundcube, which I also have pointed at our IMAP server for when I'm on someone else's PC, takes a second or two and it's done, and that seems pretty normal.