Graylog CTO here. The timestamp in GELF is a required field and SHOULD be set to when the log message was generated and not when the GELF message was sent.
This is most likely incorrect behaviour in the nxlog agent. You could alternatively take a look at fluentd, which reads Windows event logs and supports GELF/Graylog as an output, too.
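For what it's worth, a minimal sketch of that kind of pipeline, assuming the fluent-plugin-windows-eventlog and fluent-plugin-gelf plugins are installed (the hostname, tag, and exact option names here are illustrative and vary between plugin versions, so check the plugin docs):

    # Sketch: read Windows event logs with fluentd and forward them as GELF.
    # Assumes fluent-plugin-windows-eventlog and fluent-plugin-gelf; option
    # names differ slightly between plugin versions (e.g. windows_eventlog2).
    <source>
      @type windows_eventlog
      channels application,system
      tag winevt.raw
    </source>

    <match winevt.**>
      @type gelf
      host graylog.example.com
      port 12201
    </match>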
>Am I on the right track with this
For starters, I would get everything coming into one spot via a unified layer.
This would replace Logstash in your ELK solution but would be used across the board for everything (logs, metrics, flows, whatever).
From there you pipe down into the desired endpoints: logs to Elasticsearch, metrics to InfluxDB.
The cool thing about this is you're no longer tied to anything... sick of Elasticsearch and want Splunk? You don't have to edit tons of shit, just a single location.
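A rough sketch of that layout with fluentd as the unified layer (hostnames, tags, and the specific output plugins are assumptions, not a drop-in config):

    # One collection tier: everything arrives over the forward protocol,
    # then gets routed by tag to whichever backend you currently prefer.
    <source>
      @type forward
      port 24224
    </source>

    # Logs go to Elasticsearch (fluent-plugin-elasticsearch)
    <match logs.**>
      @type elasticsearch
      host es.example.com
      port 9200
      logstash_format true
    </match>

    # Metrics go to InfluxDB (fluent-plugin-influxdb)
    <match metrics.**>
      @type influxdb
      host influx.example.com
      port 8086
      dbname metrics
    </match>

Dropping Elasticsearch for Splunk (or anything else) then only means swapping out that one match block; the agents shipping to port 24224 never change.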
Generally, "production" applications tend to use fluentd - one agent to collect all logs. Then you would install a plugin to export them wherever you prefer: http://www.fluentd.org/plugins. I have a setup where my docker containers log to stdout, and in production fluentd picks up those logs and routes them to a cloud log aggregation service.
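For the docker side that usually means running containers with the fluentd logging driver (docker run --log-driver=fluentd --log-opt tag=docker.{{.Name}} ...) and letting a local fluentd do the routing. A hedged sketch, with the real cloud output plugin left as a placeholder:

    # Receive container stdout/stderr from docker's fluentd logging driver.
    <source>
      @type forward
      port 24224
      bind 127.0.0.1
    </source>

    <match docker.**>
      # In production this would be the output plugin for your cloud log
      # aggregation service (Stackdriver, CloudWatch, etc.); stdout is just
      # a stand-in so the sketch runs as-is.
      @type stdout
    </match>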
Source: I wrote my own logger as well (https://github.com/tensorflight/stackdriver_python_logger), but now I use fluentd :).
Disclaimer: I wrote that post.
It's written in a mix of C (for the performance-critical parts) and Ruby. Fluentd is widely used as an enterprise solution; we at Treasure Data use it to collect around 800k events per second, among many other users.
Maybe check out fluentd and its list of "data outputs". The outputs could be useful to you even if you end up not using fluentd.
Graylog2. You can use Fluentd for log gathering and the Graylog2 forwarder (http://www.fluentd.org/guides/recipes/graylog2). Another option is Fluentd + Elasticsearch + Kibana/Grafana. If you don't have enough RAM for Elasticsearch, try Fluentd + InfluxDB + Grafana, but it's less informative.