https://www.loggly.com/blog/how-to-detect-and-analyze-ddos-attacks-using-log-analysis/
In your command prompt, netstat -an (-a for all connections and listening ports, -n for numeric addresses instead of resolved names) will show all active network connections on your computer. And yes, it's actually very easy to suss out IP addresses: Company of Heroes 2 uses peer-to-peer connections instead of a dedicated server, and there are plenty of free tools that let you tie a user to an address.
You may want to find a log analysis tool to actually make sense of the data. If you want to pursue legal action (it is illegal), you will almost certainly need those logs. If you talk to your ISP, Relic, or Valve, you'll also want them; I doubt they want someone using their services to commit a federal offense.
This may be systemd-journald log files growing in size.
Here's a reasonably brief guide (or the Arch Linux wiki's guide) on using journalctl, as well as reviewing log file size and configuring journald to restrict overall log file sizes.
Typically I find the following config, located at /etc/systemd/journald.conf, ideal for limiting log file size and preventing journald from printing logs to vt1, which mitigates system information exposure:
[Journal]
SystemMaxUse=500M
SystemMaxFiles=10
ForwardToSyslog=no
ForwardToKMsg=no
ForwardToConsole=no
ForwardToWall=no
Specifically, running journalctl --disk-usage will display overall journal disk usage:
$ journalctl --disk-usage
Archived and active journals take up 920.1M in the file system.
Without this config enforced, I've had journald consume several gigabytes just for system logs.
Now, if you happen to have disagreeable hardware and systemd-journald is having a tantrum, and you haven't restricted the log file size, the result is likely to be predetermined :)
If you're going to be running a blanket catch like that, you should also be logging the exception in some manner; otherwise other exceptions will be caught and masked from your attention.
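To make that concrete, here's a minimal sketch of a blanket catch that still records the traceback via logging.exception (the process function and its failure mode are made up for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("worker")

def process(item):
    # Hypothetical work that can fail (here, dividing by zero).
    return 10 / item

def run(items):
    results = []
    for item in items:
        try:
            results.append(process(item))
        except Exception:
            # logging.exception records the full traceback at ERROR level,
            # so nothing is silently swallowed.
            log.exception("failed to process %r", item)
    return results
```

Calling run([1, 0, 2]) returns [10.0, 5.0] and logs a traceback for the failed item instead of hiding it.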
Here are a few useful things for you too:
A useful article on logging in Python.
An example script where I except RequestException
Hopefully these are helpful.
Loggly has some rather embarrassing blog posts (e.g., https://www.loggly.com/blog/why-aws-route-53-over-elastic-load-balancing/).
This regex should raise a red flag for anybody who knows something about regexes:
field1=(.*) field2=(.*) field3=(.*) field4=(.*).*
A much more efficient version is this:
field1=(.*?) field2=(.*?) field3=(.*?) field4=(.*?).*
...which runs about 5x faster than the greedy version, and about 10% faster than the character class version (on the JVM at least).
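For concreteness, here's what the character-class variant looks like in Python's re module; the sample log line and field values are my own invention:

```python
import re

# Character classes ([^ ]*) stop at the first space, so the engine never has
# to backtrack the way the greedy .* version does.
pattern = re.compile(
    r"field1=([^ ]*) field2=([^ ]*) field3=([^ ]*) field4=([^ ]*).*"
)

line = "field1=alpha field2=beta field3=gamma field4=delta status=ok"
m = pattern.match(line)
print(m.groups())  # ('alpha', 'beta', 'gamma', 'delta')
```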
There are a couple of posts on Loggly's blog that may be of interest:
• Logging best practices: https://www.loggly.com/blog/log-log-proven-best-practices-instrumentation/
• Logging in JSON: https://www.loggly.com/blog/what-60000-customer-searches-taught-us-about-logging-in-json/
Hope that helps.
Logstash, free and awesome, http://logstash.net/
Use NXLog on the Windows endpoints to ship to your Logstash server. Make sure you consider encryption, since this will be going over the internet.
Also take a look at the new Graylog2.
Loggly is the only one I remember right now; I haven't tried it. https://www.loggly.com/
A few resources to get you started with regards to Linux, firewall logs, and figuring out "what's doing what". Sorry I'm not of more help with regards to this:
https://www.tecmint.com/best-linux-log-monitoring-and-management-tools/
https://www.loggly.com/ultimate-guide/analyzing-linux-logs/
https://www.linuxquestions.org/questions/linux-security-4/iptables-log-analyzer-334473/
I'll flair this "Linux Firewall Expert Needed" as you could use someone to help guide you through configuring the firewall, logging connection information, and searching it for the relevant data here!
I'm assuming you are talking about WMI. What version of Windows are you working with?
You can forward your events if you have the right Windows OS:
https://msdn.microsoft.com/en-us/library/cc748890(v=ws.11).aspx
https://www.loggly.com/ultimate-guide/centralizing-windows-logs/
What the other guys said about haproxy. I used it, briefly, for a project before we realized ELB would (it's complicated) be a better choice.
Have you considered Route53? https://www.loggly.com/blog/why-aws-route-53-over-elastic-load-balancing/
If your system is compromised, reset it and rebuild. The attacker could have planted any number of backdoors, which will still be there after shutting off RDP.
If you want to use RDP
I don't think you'll find a perfect AWS fit. Usually logs are written to S3 as a backup. AWS has a lot of solutions for moving logs from CloudWatch to S3 and Elasticsearch/Kibana, but not from S3 to other places.
There are a bunch of solutions, though. You can schedule a daily EMR job to read logs from S3 and write them to Elasticsearch (which Kibana sits on top of). You can use S3 events to trigger a Lambda to load the data (maybe with Kinesis), as NCFlying suggested. You can try Logstash. You can also try Loggly or ChaosSearch.
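As a rough sketch of the Lambda approach: the function receives an S3 event notification, locates the new objects, and indexes their lines. The helper names, index name, and payload shape below are my own assumptions, not an official recipe.

```python
import json

def s3_objects(event):
    """Yield (bucket, key) for every record in an S3 event notification."""
    for record in event.get("Records", []):
        s3 = record["s3"]
        yield s3["bucket"]["name"], s3["object"]["key"]

def bulk_payload(lines, index="logs"):
    """Build an Elasticsearch-style _bulk body: one action line plus one
    document line per log line, newline-terminated."""
    out = []
    for line in lines:
        out.append(json.dumps({"index": {"_index": index}}))
        out.append(json.dumps({"message": line}))
    return "\n".join(out) + "\n"
```

In a real Lambda you'd fetch each object with boto3, split it into lines, and POST the payload to your cluster's _bulk endpoint.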
Summary about Python logging:
https://www.loggly.com/ultimate-guide/python-logging-basics/
Handlers:
https://docs.python.org/3/library/logging.handlers.html
You can make your own handler that filters data.
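For example, here's a minimal sketch of attaching a logging.Filter to a handler so it scrubs data before records are emitted; the password pattern is just an illustration:

```python
import logging
import re

class RedactFilter(logging.Filter):
    """Scrub sensitive-looking data from records before they are emitted."""
    SECRET = re.compile(r"password=\S+")

    def filter(self, record):
        # Rewrite the message in place; returning True keeps the record.
        record.msg = self.SECRET.sub("password=[REDACTED]", str(record.msg))
        return True

handler = logging.StreamHandler()
handler.addFilter(RedactFilter())

logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.warning("login attempt password=hunter2 from 10.0.0.5")
# emits: login attempt password=[REDACTED] from 10.0.0.5
```

Returning False from filter() would drop the record entirely instead of scrubbing it.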
>Like I said, the amount of calculations PoE needs to deal with things like proliferation is absurd. Logging all those events would be equally absurd.
??????????????
What the actual fuck are you talking about? A crash/error reporter exists for them that does essentially the same thing.
>Ask yourself if the observer really needs to know all the conditional stuff about the Hero or Skill, which couples it to those classes.
Really to-the-point quote from an actual game dev site.
https://www.loggly.com/blog/nine-tips-for-implementing-logging-in-games/
https://www.gamedev.net/forums/topic/646647-creating-a-combat-log-system/
It depends on systemd journal settings and the size of your disk. By default, systemd will limit the size of your logs depending on the size of the disk it's on.
See here for more insight into systemd journaling.
Couple things to check.
If you go into Settings->About and scroll to the bottom, is there an IP address listed (while not being connected to USB, and not the 10.11.99.1 address)?
You could also try the 'reconnect my device' option on the my.remarkable.com website.
Have you fully restarted your device? Hold down the power button, count to 15. Let go, count to 15 again. Turn on.
I won't go over how to SSH into your device, as there are other guides out there. But once you're in, you can use the journalctl command to view logs and see if there's some hint of what's causing your problem.
Run a 'check-sync' from your device first, then SSH into your device and try:
journalctl --since "1 hour ago"
You should see the latest logs for your device. Look for errors.
Some basics on the journalctl command.
Good luck!
This is pushing the limit of my Linux knowledge, but I found a couple things that might help: https://geek-university.com/linux/syslog-protocol-explained/ and https://www.loggly.com/ultimate-guide/linux-logging-basics/. If you ask in /r/linuxquestions about this you might get some better help, lots of geeks there. Hope that's enough to get you started.
So if you're willing to waste a ton of time and money on servers, then Apache or ELK are fine, but just pay the small fee and use Loggly, an Elasticsearch alternative.
> Debian
For me, CentOS, mostly. Some FreeBSD, some other less popular unix-ish operating systems. But mostly that is employer dependent.
Having 2 simple config files and 2 daemons is not any more complicated than having systemd with its many configs and multiple processes. syslog/logrotate is probably less complex and easier to debug than systemd, in my experience.
As for what systemd doesn't do that makes life hard... centralized logging.
https://www.loggly.com/blog/why-journald/
Read the second half of that.
The systems I'm responsible for all centrally log to one (set of) machines so that all logs are aggregated.
systemd doesn't currently fit well into centralized logging/monitoring systems, and getting it to do so correctly... is non-trivial.
Just started using Loggly; you'll get 30 days of Enterprise for free. After that, you won't get any alerts or fancy things, but 200MB/day of logging for free ain't bad.
It was pretty easy to set up too: https://www.loggly.com/docs/python-http/
You don't even need to install their library, just send the logs to the right url.
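Roughly, sending a log over Loggly's HTTP input looks like this; the token and tag below are placeholders, and the endpoint format is my reading of their docs, so double-check it against your account:

```python
import json
import urllib.request

def build_loggly_request(token, event, tag="python"):
    """Build a POST request for Loggly's HTTP/S event endpoint.

    token and tag are placeholders you'd replace with your own values.
    """
    url = f"https://logs-01.loggly.com/inputs/{token}/tag/{tag}/"
    data = json.dumps(event).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )

# To actually send it:
# urllib.request.urlopen(build_loggly_request("YOUR-TOKEN", {"msg": "hello"}))
```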
There are several options:
You can look at cloud logging services like Loggly. I'm not really familiar with the logging landscape on Linux/Unix, but they do have a guide/script to configure syslog.
AWS also has CloudWatch which might do what you want.
You're probably going to want to use a 3rd party logging software.
Loggly is free for less than 200mb of logs a day, with a 7 day retention period. So if you're fine with 7 days, that should work.
Here's a guide on sending s3 logs to them.
You can do similar with logentries (also hosted and free for low data/retention), or splunk (you can get longer retention, but you have to host the thing).
Unfortunately all of them require a certain amount of setup knowledge to get working.
Graylog seems like a nice solution for that. It's open source and it has alerting built in: https://www.graylog.org/
If you have some money to spend, then maybe using Loggly (https://www.loggly.com/) would be simpler
Or if you want to offload the chore of log management completely, use a service like loggly. Agentless, with alerts and reporting plus a free trial. Loggly + free shirt!