Are you still looking for advice?
SEO is a long-term strategy, and there are two kinds of optimization you can do: off-site and on-site. Here are a few things to check:
You can check if you have properly set up your domain name redirection using Hexometer.com
https://hexometer.com/broken-links
For example, if your domain name is not redirecting properly, you will most likely see an indication of that there.
Later on, you can share those results with GoDaddy support to address the issue.
Thanks for reading the article!
We started out with Apollo's in-memory PubSub, but it was eating a lot of memory, and its synchronization loops over PubSub topics and subscribers ended up blocking a lot of things for the entire Node.js process.
We tried to move to Redis, BUT I personally ran into a lot of issues with it when it came to scaling. That's why, after the first proof of concept, we replaced Redis with Nats.io, which ended up working amazingly! It scales seamlessly and has all the PubSub functionality we needed built in.
If you navigate to one of our tools, such as https://hexometer.com/ssl-certificate-checker, and open the Network tab in dev tools, you will see a WebSocket connection to our API instance. In this case, our Node.js API is just a middleware layer: API -> Nats -> Website Scan Tool.
This allowed us to scale without keeping any session-based logic anywhere; we just use JWT to transfer context-based information between services, and Nats is the main communication hub for all of our services.
Thanks for reading the article.
We haven't done any specific benchmarking across multiple cloud providers, but we started with dedicated servers and then quickly moved to Google Cloud Kubernetes. The performance difference was significantly worse :/ But within the scope of this article (GraphQL subscriptions), our main load lands on the Nginx containers/pods, because they are the single point that keeps connections alive and transfers all the data across the infrastructure.
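For reference, keeping WebSocket connections alive through Nginx comes down to a proxy config along these lines (a generic sketch — the upstream name, address, path, and timeout are placeholders, not our actual production config):

```nginx
# Generic WebSocket proxy sketch; names and values are placeholders.
upstream graphql_api {
    server 10.0.0.1:4000;  # Node.js GraphQL API pod
}

server {
    listen 443 ssl;

    location /graphql {
        proxy_pass http://graphql_api;
        proxy_http_version 1.1;                  # required for the Upgrade handshake
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;                # keep long-lived connections open
    }
}
```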
Based on our experience, a single Node.js GraphQL API container/pod can handle around 10K live connections with 1 CPU core and about 4GB of RAM, which averages out to roughly 300KB - 400KB per WebSocket connection. Ideally you would use a lot less per connection when you have fewer connections, but because of Node.js garbage collection, intensive network IO usually doesn't leave room to reclaim memory very often.
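That per-connection figure is just the pod's RAM divided by the connection count:

```javascript
// Back-of-the-envelope check of the per-connection memory figure.
const ramBytes = 4 * 1024 ** 3; // 4GB per pod
const connections = 10_000;     // live WebSocket connections
const kbPerConnection = ramBytes / connections / 1024;
console.log(Math.round(kbPerConnection)); // 419
```

Around 419KB is the raw ceiling; actual per-connection usage sits a bit below it, hence the 300KB - 400KB range.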
So we ended up with 6 instances in our Kubernetes cluster (scaling up if needed), each with 4 CPU cores and 50GB of memory. You will probably say that 4 CPUs is way too low, BUT it turns out that managing CPU resources with an automatic Kubernetes configuration freed us from thinking about how many CPU cores we need: if we get an alert that CPU load is too high, we just add another instance.
Costs are about $3K to $5K per month, based on our load. BUT most likely we will move off the cloud next year, because getting 10X more resources on rented dedicated servers costs just about $2K-$3K a month, and performance is much better. From our experience, Google Cloud feels very slow on CPU-bound tasks; we were getting more out of a dedicated server's CPU core than a Google Cloud one.
Anyway, we are still optimizing things, because by the nature of our product, hexometer.com, we have a lot of CPU- and network-intensive operations.