Popular, well-designed standard library, fast compiles, lightweight binary-to-deploy, Google-backed...
I always go to Go when writing small projects, at the same time sadly remembering the time, 10 years ago, when I thought OCaml was going to be this popular.
A big product from Xooglers: https://www.cockroachlabs.com/blog/cockroachdb-1-0-release/
So in some ways Go is crippled, yet so many cool projects are written in it and not Python/Java/C#/Haskell...
These implementations don't quite work, as described in the issue. This will round 0.5 to 1 (and -0.5 to -1) which probably isn't what's intended.
This is probably why the proposal was accepted. There are lots of edge cases and it's easy to have mostly working versions which are still subtly broken.
Edit: See https://www.cockroachlabs.com/blog/rouding-implementations-in-go/ for more info. My explanation wasn't completely right. This implementation will also round values like 0.49999999999999994 (the largest float64 below 0.5) up to 1.
It's explained right in the article under the section The Motivation. It's a play off of the title of a similar article by CockroachDB called Why we built CockroachDB on top of RocksDB.
>The Motivation
>
>Recently, our friends at CockroachDB wrote about why they use RocksDB.
Cloud Spanner has a high entry point. A single node will cost you $650 per month, and Google recommends at least three, so your project has to be big enough to justify that price tag. If you don't mind self-hosting, there is an open-source alternative to Spanner: https://www.cockroachlabs.com My team and I faced exactly this problem. We wanted a SQL database that is scalable and resilient, but the entry point, and the fact that it would alter our local development flow, drove us away from Spanner and we went with CockroachDB (pretty happy so far).
I looked at using it for storing session data a while back because it appears to be ACID compliant [1]. The replication works quite well, but it never forgets nodes so you can't delete a dead node without recreating the cluster which wouldn't work in our environment (I can't create a bunch of DNS entries in advance, and not being able to reuse existing ones was a showstopper).
> I see no problem with this.
That tends to make distribution way more complex, and for some targets it can be a terrible idea. For example, FFI in Go with the default toolchain requires cgo and has a huge per-call overhead, larger than the overhead of calling a function in CPython, which is known not to be fast.
I don't wanna be an ignorant fck, but couldn't this be achieved with CockroachDB? AWS doesn't support it as a managed service; the only real problem is that you'd have to manage it yourself. It's achievable with EKS (for Kubernetes): deploy multiple clusters across the globe and put AWS Global Accelerator on top to route traffic to the closest location by latency.
Speaking of moving master and replicas closer, CockroachDB achieves that. It's called multi-active: https://www.cockroachlabs.com/docs/stable/multi-active-availability.html
It was built with some code borrowed from Postgres.
Edit: in terms of cost, idk 乁( •_• )ㄏ when you want global availability and low latency, expect to pay some good money.
For Jarrett Farnitano
How has JS in space worked out on Crew Dragon? Were there any changes to the UI/UX the team has made due to in-flight experience?
For Kristine Huang. [CockroachDB](https://www.cockroachlabs.com/customers/) lists Starlink as a customer. What made you choose a distributed SQL DB over other Big Data datastores?
For Jeanette Miranda. How much of the challenge of laser satellite interlinks is a software problem vs a hardware problem? Is there a lot of crossover between terrestrial telecom fiber experience and space-based laser comms?
For Asher Dunn
What software/hardware design lessons from Dragon are being brought over for Starship? Is there anything particularly exciting your team gets to implement on Starship that wasn't possible for Crew Dragon?
For Natalie Morris
How does regression testing work on Starlink? Are most on-orbit failures able to be diagnosed and added to the testing regime?
I'm hoping for something like CockroachDB, except with usage-based pricing. Google's got Cloud Spanner, but it has node-based pricing. I'd definitely like to see AWS leapfrog them on that.
If I understand this post correctly, they sacrifice speed when in doubt. The more accurate clocks you throw at the problem, the better it gets.
To provide some perspective: they are trying to solve a problem that is incredibly difficult.
They are trying to create a datastore that can expand horizontally, keeps ACID guarantees (atomicity, consistency, isolation, durability), and does it quickly. The quick part is the real kicker: you have to be quick in order to support twitch reactions in a game with PvP capabilities.
They seem to be solving this with a caching layer, but keeping everything on the same page in a distributed system and keeping it quick is still hard, and I do not envy them their task.
CockroachDB (a database that is designed to be distributed) has written about this kind of stuff before. It's an interesting read if you've a comp-sci background.
https://www.cockroachlabs.com/blog/transaction-pipelining/
To clarify, when I say quick, I mean on the order of milliseconds. 20ms is ideal, but playability drops off quickly; 100ms is poor, and MMO games become unplayable as latency climbs. This is what most people call ping.
There actually is a need for this, and it's generally most important for serious distributed systems that want high performance but also a reasonable level of consistency. I'd try to explain my understanding of it but I'd probably get it wrong, so here's a blog article:
https://www.cockroachlabs.com/blog/living-without-atomic-clocks/
I've considered Postgres a few times, but when I dig in, it's only really cheaper if you've got some existing VMs you can host it on. Otherwise, the SQL Server PaaS instances end up being more or less the same price as finding hosting for your Postgres instance anyway.
I'm sure there are breakpoints where Postgres gets cheaper as your project gets larger, I just haven't really run into them where it matters.
I've got Cockroach DB on my list to check out: they're Postgres-compatible, designed for distributed massive scale (probably more than you need, but hey, why not), and offering a free-forever tier to get people interested. I'm hopeful this could turn into a nice free option for starter projects. (I still have not actually tried it, so I can't comment on its quality.)
That's part of the appeal of the PostgreSQL-compatible SQL API. https://www.cockroachlabs.com/docs/v21.1/postgresql-compatibility "CockroachDB is wire-compatible with PostgreSQL 13 and works with the majority of PostgreSQL database tools such as Dbeaver, Intellij, pgdump and so on. Consult this link https://www.cockroachlabs.com/docs/v21.1/third-party-database-tools for a full list of supported third-party database tools. CockroachDB also works with most PostgreSQL drivers and ORMs."
Hi! I work on CockroachDB. That's a great question! The quick answer is that we use distributed consensus algorithms (specifically Raft) to keep the data consistent.
There is some more info in our FAQ pages. See the questions about the CAP theorem and distributed transactions. There's also a lot of really deep dives in our tech blog posts, some of which are outside the area that I work on, but I'm happy to talk about it more with you!
^ here's the blog write-up on that project: https://www.cockroachlabs.com/blog/how-we-built-a-vectorized-execution-engine/
It's true that people avoid placing their data (state) in Kubernetes, but that's changing. The general rule of Kubernetes is: as long as you can survive sudden pod crashes without losing data, you should be ok.
Anyway, have you looked into Vitess? It's a project that aims to bring MySQL to this new cloud native world:
Another project to check is CockroachDB, for people who prefer a PostgreSQL-like solutions:
Here are some products worth looking into for your use case:
Hope that helps.
First, you are not currently aware of all of the concerns you will have, and real life pushes things to be less symmetrical.
E.g. you need to have an "X" that acts as a repository, service, and handler at the same time; where would you put such a thing? Trying to make it symmetrical (i.e. breaking it up) could require writing an additional 3-6 interfaces between those pieces, but at the same time you lose comprehension of "X" as a whole.
To give an example: a DB management front-end. The DB management is deeply tied to the particular DB, and the front-end needs to deeply reflect what the DB is doing, e.g. which knobs you have to control DB cache sizes. By introducing an interface between the handler and the repo implementation and placing them in different packages, you hide this deep interlock... and also introduce (probably) unnecessary types in the process.
Of course, I wouldn't expect to start seeing such problems until you have at least 5 big ideas interacting, e.g. todo list, whiteboard, goals, administration, batch renaming.
You can also see something similar here https://www.cockroachlabs.com/blog/outsmarting-go-dependencies-testing-code/... i.e. the testing required to break the import DAG.
Of course, my point isn't that you won't have service/handler/repo, but rather that by placing them in the same package you can break the symmetry, if necessary. Essentially you would have package todo and types todo.Service, todo.Server, todo.Repo, todo.Item, etc.
Btw, this can also help with the bloating of main: e.g. the "todo" package can set up all the "todo/list" and "todo/admin" endpoints rather than main.
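As a minimal sketch of that single-package layout (the Item/Repo/Service names follow the comment above; every method signature here is hypothetical):

```go
// Package todo would hold all of these together; collapsed into one
// file here so it runs standalone. Service talks to Repo directly,
// with no interface indirection between layers.
package main

import "fmt"

// Item is the core domain type.
type Item struct {
	ID   int
	Text string
	Done bool
}

// Repo stores items; an in-memory map stands in for a database.
type Repo struct{ items map[int]Item }

func NewRepo() *Repo { return &Repo{items: make(map[int]Item)} }

func (r *Repo) Save(it Item) { r.items[it.ID] = it }

func (r *Repo) Get(id int) (Item, bool) {
	it, ok := r.items[id]
	return it, ok
}

// Service holds the business logic. Same package as Repo, so no
// extra interface is needed just to break an import cycle.
type Service struct{ repo *Repo }

func (s *Service) Add(id int, text string) {
	s.repo.Save(Item{ID: id, Text: text})
}

func (s *Service) Complete(id int) error {
	it, ok := s.repo.Get(id)
	if !ok {
		return fmt.Errorf("todo %d not found", id)
	}
	it.Done = true
	s.repo.Save(it)
	return nil
}

func main() {
	svc := &Service{repo: NewRepo()}
	svc.Add(1, "write docs")
	_ = svc.Complete(1)
	it, _ := svc.repo.Get(1)
	fmt.Println(it.Done) // true
}
```

If an interface ever becomes necessary (say, to swap the repo in tests), it can be introduced later at the consumer side rather than up front.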
I think you are thinking of RocksDB, which was the storage engine for CockroachDB for a long time. IIRC it is written in C++ by Facebook.
https://www.cockroachlabs.com/blog/cockroachdb-on-rocksd/
It seems like you are mostly interested in learning as much as possible as efficiently as possible, so I'd pick Go. C++ would make sense if you were trying to fine-tune the storage medium or integrate with the underlying OS with minimal overhead, but it seems like you are mostly interested in replication and ACID compliance, which are things I feel Go excels at.
Also if you are looking for more of a guided tour of making your own database, CMU has a great class and associated project: https://15445.courses.cs.cmu.edu/fall2020/assignments.html
Not direct access, but there are some concessions made for OLAP workloads: https://www.cockroachlabs.com/docs/stable/vectorized-execution.html.
From the most pragmatic point of view possible: Go's interop with C-based libraries might be the death knell of this. Calls into C functions are orders of magnitude slower than the equivalent calls from C or C++. For most things, say calling a database, that kind of overhead is a minimal fraction of the overall elapsed time, but in scenarios like this, where you are building really complex objects by making tons of calls several times per second... it might be enough to make it impractical.
From a learning perspective, you should totally do it. Go is like a much saner C++, with very strong opinions, some cool concepts and one fifth of the features.
This is a surprising result. I mean, Go is calling the system library, which is in C, so that high overhead would be visible.
Edit: apparently the overhead is confirmed. But before I learn more about it, let me say that the overhead should not be noticeable for a GUI application. Here is a blog post about this issue: https://www.cockroachlabs.com/blog/the-cost-and-complexity-of-cgo/.
Depends a lot on the needs of the application, how you manage your storage, etc.
In Kubernetes land, there are volume concepts that allow you to connect storage to containers: NFS, filesystem-formatted block devices, local volumes.
In some applications, storage is a database, so there's nothing local there.
But the database usually requires some kind of local filesystem. In this case we usually map in a volume mount, either locally on disk, or some kind of RBD device with a filesystem formatted.
Other applications need a shared storage space, so we map an NFS/CIFS mount into the container.
Sometimes we build the storage into the container ecosystem, like with Rook, you can build your own virtual SAN over your Kubernetes cluster.
Or, if you're doing large scale databases, you might have something like Cassandra, or CockroachDB. Those get local filesystem volumes from the host mounted into the container.
CockroachDB won't currently remove a node for being slow; it's up to the operator to monitor this and decommission the node if appropriate. Nodes are only removed automatically if they are completely down. See the FAQ for more on our failure recovery processes.
Hey! I work at Cockroach and I think we would fit exactly what you're looking for.
> * to have a distributed database solution in place which is horizontally scalable
That's exactly what we do.
> * Easy to add nodes, remove nodes without any downtime (sure I can accomodate some write-locks for setup)
No write locks needed. Nodes can be added and removed easily. We even make sure that all upgrades can be rolling upgrades. More than that, we have online schema changes.
> * Have the ability to tweat replication factor. Ideally I would love to have replication = number of nodes i.e. each node has complete database. So that when there are simple queries, it doesnt have to do distributed queries (which make system slow) and when the query is a bit complex, it does distributed since data is available in each node.
We do both local and distributed queries, but it does depend on how you set up your tables. We offer interleaved tables to ensure that child tables are in the same range and don't require a distributed query. And you can set the replication factor (and replica locations) per database, per table, and even per row.
> * Best case: Some GIS based plugin / extension on the solution would be icing on the cake.
Sadly not yet. But what exactly are you looking for? We've had a few requests for geospatial support.
> * SQL compatible, so that least of the application rewrite is required.
Cockroach speaks the Postgres wire protocol, so the SQL you already know should mostly just work.
If you have any questions, I'd be happy to answer them.
Rebuilding sharding from first principles
/uj Just throw more hardware at it. And if you can't scale vertically anymore - migrate to dbms that manages shards out of the box. Like Cockroachdb
Probably because it was only publicized recently (this fall). Cockroach needed a non-cgo replacement for RocksDB.
They have a really in-depth blog post about it: https://www.cockroachlabs.com/blog/pebble-rocksdb-kv-store/
You clearly don't know anything about CockroachDB. Their requirement is only 5K/s; now take a look at CockroachDB:
https://www.cockroachlabs.com/docs/stable/performance.html
That is way, way more performance than they'll ever need.
Elastic is definitely good for search queries, especially full-text search, and it also scales well.
But if you're looking for a highly scalable relational DB, then YugabyteDB or CockroachDB (both PostgreSQL-compatible) would also do the job well.
Assuming that the only state that needs to be replicated is in Postgres, you could use something like flux to ensure that the services and their configs are synced up between the two clusters.
No matter how you cut it, my guess is that the most difficult aspect will be syncing up your data. You could look into whether something like CockroachDB works with WireGuard; that'd help you simplify a good bit.
Depending on the features needed from PostgreSQL, one can use CockroachDB Serverless (compatibility) instead of Cloud SQL.
FYI / Maybe interesting. CockroachDB has only SERIALIZABLE isolation level and has SELECT FOR UPDATE. Worth reading their docs on that: https://www.cockroachlabs.com/docs/stable/select-for-update.html
TLDR: Helps by ordering transactions avoiding retries and 'thrashing'
>Why would they do that? For load balancing?
There are a lot of reasons, but this is probably the biggest one. It also lets you restart a container without losing uptime.
Some applications also need redundant storage. CockroachDB (https://www.cockroachlabs.com/blog/running-cockroachdb-on-kubernetes/) for example.
I believe it. old coworker who formerly worked at AWS told me for example that DynamoDB is MySQL at the very bottom layer, but being used as a key-value store instead of full SQL, and then a bunch of Java on top of that to handle the API details and replication.
If you look at the architecture of an open-source project like CockroachDB, it's pretty similar: they use RocksDB as the low-level storage layer and build a higher-level abstraction on top of it.
For me, if you look back to when Redis was designed, 11 years ago, it was before the Cloud was a thing. Since then, Cloud alternatives have appeared that are mostly proprietary. The idea of RedisLess is not to compete against a product that has existed for 11 years, but to show a new path for how we can build a system on top of an existing one. You can see RedisLess as experimental. How do you build Cloud-native databases by taking advantage of existing solutions? TiDB, YugabyteDB, and CockroachDB are great examples: they stay wire-protocol compatible (MySQL or PostgreSQL) while providing a Cloud-native way of managing data.
>Starting a TCP server like this is an enormous security vulnerability. Anybody that uses it is exposing their application's memory store to the world at large. Anyone with a Redis client can modify your application's memory at will. All they need to do is find out the IP of your machine. I really can't stress enough how bad that is.
Exposing the server to the local network is not exposing it to the internet. The same precautionary principle applies to Redis Server and any network-exposed service, so it's not specific to RedisLess.
Postgres async notification with LISTEN/NOTIFY sounds much closer to what you want. I would recommend checking that out.
I would take a close look at 'core changefeeds' though; they may not require a message bus and instead stream directly to the client.
There are a couple of ways. Either you include a last-updated column in the DB that's updated on change and just select everything since the last time you queried, so it's more eventually consistent, with a lag of the poll interval...
Or capture change streaming from the DB with something like https://www.cockroachlabs.com/docs/v20.2/stream-data-out-of-cockroachdb-using-changefeeds.html
Or you make an event stream/message queue the source of the update, consumed from both sides.
Normally, I would recommend looking into Postgres triggers and LISTEN/NOTIFY, which serve exactly this use case and which I've integrated successfully into a number of Go projects. Unfortunately, even though cockroachdb uses the Postgres protocol, it doesn't support those specific features. If switching to Postgres is out of the question, it looks like CockroachDB has a similar feature. Looking up "changefeeds" or "change data capture" are helpful keywords to search for when looking up this kind of feature.
This blog has links to the big three's sustainability statements https://www.cockroachlabs.com/blog/the-ethical-cloud/
From my reading, Microsoft's is far and away the most ambitious. They pledge to be carbon negative by 2030 and, by 2050, to have removed all the carbon they have emitted since 1975, using a combination of tree planting and technology. They're also investing $1bn in developing that technology.
I think it's interesting, and when you say highly available I think of CockroachDB https://www.cockroachlabs.com/ or Citus https://www.citusdata.com/solutions/infrastructure/high-availability-postgresql, both of which should be drop-in for Postgres.
What I think is missing from your solution compared to the other two is some kind of automatic failover; it seems like an interactive backup that can serve GET requests but, I assume, not POST or PATCH.
I assume you could do automatic failover with a load balancer or proxy with health checks, but your site would still be in this semi-failed state where it is read-only.
The real strength of your proposal is its simplicity. To use either of the other two in actual high-availability mode, I seem to recall that you need at least 3 instances (preferably at different sites so you don't die during a power failure), but this is expensive and only worth it if you really need it.
Your suggestion does not work for Kubernetes, I will copy the answer of a person far smarter than me:
>You might have noticed that so far I've been using the terms "multi-region" and "multi-cluster" essentially interchangeably. Kubernetes is not designed to support a single cluster that spans multiple regions on the wide area network. For quite a while, it wasn't even recommended to have a single cluster span multiple availability zones within a region. The community fought for that capability, and now it is a recommended configuration called a "multi-zone cluster".
>
>But running a single Kubernetes cluster that spans regions is definitely done at your own risk. I don't know anyone who would recommend it. So I'm going to keep using these terms - “multi-cluster” and “multi-region” - mostly interchangeably. If you want to run something like CockroachDB across multiple regions, you are necessarily going to have multiple Kubernetes clusters, at least one in each region.
Source: https://www.cockroachlabs.com/blog/experience-report-running-across-multiple-kubernetes-clusters/
I don't know about nowadays, but I used Gentoo a long time ago and I managed to break glibc, so essentially no dynamically linked program worked.
While I was quite unhappy, learning how to fix it was a very educational experience!
So I do recommend bleeding-edge distributions for that reason: things will break, and you have to at least look for a solution or make one yourself. Great experience! Also a great productivity killer, which is why I stopped using Gentoo.
Debian is definitely more of a boring-but-works environment. It lets me look at more cutting-edge software (e.g. CockroachDB) and learn about that without having to worry too much about the OS layer.
The list at https://www.cockroachlabs.com/docs/stable/known-limitations.html is a good starting point for anyone considering it. As /u/badtux99 points out it is definitely not a zero effort process to move to it.
The number of PostgreSQL wire-compatible DBs seems to be growing, but one needs to take a detailed look at the compatibility issues one will face, the level of effort to convert existing systems (if any), and what one would be giving up vs. just using PostgreSQL.
CockroachLabs is looking for a Senior Writer, and it's fully remote, so you could live wherever you want in the US/Canada. https://www.cockroachlabs.com/careers/job/?gh_jid=2180996
The answer will be different for each database, since different architectures will have different tradeoffs.
CockroachDB (disclaimer: I work there) is more like the second pattern you asked about -- one node would receive the insert query.
A good place to look is the docs for the database you're interested in. Here are the docs for CockroachDB.
Dude. Even Oracle doesn't handle writes to the same record in multiple instances at the same time without deadlocking and aborting multiple transactions. You'll need to serialize your read-modify-write transactions with either internal or external locking to make that puppy cook right.
But yes, CockroachDB does full ACID transactions including detecting and serializing attempts to write the same row from multiple transactions. https://www.cockroachlabs.com/blog/serializable-lockless-distributed-isolation-cockroachdb/
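To illustrate why read-modify-write needs serializing at all (a hedged, in-process sketch; a database achieves the same effect with row locks or serializable-transaction retries rather than a mutex):

```go
package main

import (
	"fmt"
	"sync"
)

// Counter demonstrates the lost-update problem: without locking,
// two concurrent increments can both read the same old value and
// one update is silently lost.
type Counter struct {
	mu sync.Mutex
	n  int
}

// Incr performs the read-modify-write as one critical section,
// playing the role a serialized transaction plays in a database.
func (c *Counter) Incr() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func main() {
	c := &Counter{}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Incr()
		}()
	}
	wg.Wait()
	fmt.Println(c.n) // 100: no lost updates
}
```

In CockroachDB the equivalent pattern is a transaction retry loop: conflicting transactions are detected and one is restarted rather than allowed to clobber the other's write.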
There are two ways to use it:
Both are available here: https://www.cockroachlabs.com/get-cockroachdb/
This is pretty cool, I'll definitely try it. The docs cover Django on Python 3: https://www.cockroachlabs.com/docs/stable/build-a-python-app-with-cockroachdb-django.html
What advantages does your package have over the documented version?
How do you feel about this?
https://www.cockroachlabs.com/blog/oss-relicensing-cockroachdb/
We're aware that GPL is not a perfect fit so we've been watching stuff like that. I'd happily go BSD or MIT if we had enough other revenue in place and if I knew we weren't just donating free labor to Amazon. It really is unfair for cloud SaaS mega-corp vendors to just monetize OSS without contributing anything back.
I'm interested in centralizing certificate management where possible. I have a typical setup, with applications that use client certificates to access clustered servers, and those clustered servers use TLS to communicate amongst themselves. I've seen some applications use Kubernetes' built-in CA and CSR tooling (see https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html#step-2-start-cockroachdb for an example). Is this considered a best practice or is this something done out of convenience?
Hey /u/TheProffalken,
The best place to ask this would be on the official forums. You'll get a response quickly there: https://forum.cockroachlabs.com/
And you're very right that one should never use root for anything. I'd suggest going with a secure deployment and using certs instead of passwords. But that's up to you.
Here's our basic documentation on user creation: https://www.cockroachlabs.com/docs/v19.1/create-and-manage-users.html
Let me know if that helps or if you don't get a reply in the forums.
If you're at such a scale that it is required to shard your data, I would not attempt to do this in code, at all. Take a look at Cockroach DB. https://www.cockroachlabs.com/
If you need 100% Postgres compatibility, CockroachDB does not seem to be an option:
From: https://www.cockroachlabs.com/docs/stable/porting-postgres.html
>Although CockroachDB supports PostgreSQL syntax and drivers, it does not offer exact compatibility
https://www.cockroachlabs.com/blog/multi-cloud-deployment/ https://www.datastax.com/dev/blog/multi-datacenter-replication
In both you'd use service discovery: DNS plus application environment variables in each datacenter to discover the local DB servers. In Cassandra you also tell it to limit connections to the local DC by setting a policy in the driver.
For option 3, you typically have an abstraction layer over two connections, one for writes and one for reads. Depending on which request is made, your application code routes the query to the correct DB connection.
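A hedged Go sketch of that abstraction layer (the Querier interface and fakeDB type are invented for illustration; in practice both would be *sql.DB handles, one pointed at the primary and one at a replica):

```go
package main

import "fmt"

// Querier is the minimal interface both connections satisfy.
type Querier interface {
	Exec(query string) string
}

// fakeDB stands in for a real connection and just reports which
// backend handled the query.
type fakeDB struct{ name string }

func (d fakeDB) Exec(query string) string { return d.name }

// Router is the abstraction layer: callers declare whether a
// statement is a read or a write, and it picks the connection.
type Router struct {
	writer Querier
	reader Querier
}

func (r Router) Write(q string) string { return r.writer.Exec(q) }
func (r Router) Read(q string) string  { return r.reader.Exec(q) }

func main() {
	r := Router{writer: fakeDB{"primary"}, reader: fakeDB{"replica"}}
	fmt.Println(r.Write("INSERT ...")) // primary
	fmt.Println(r.Read("SELECT ..."))  // replica
}
```

The main design cost is that every call site must correctly classify its query; some teams instead parse the statement or route by transaction.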
Yes, using --advertise-port=26258 should make the third case work, although this is not widely used or well tested.
However, I don't think this is how bridge networks are supposed to be used. In a bridge network, each container gets its own IP. You should connect to those IPs instead of going through the docker daemon's port remapping on the host. You might remap one port to make it accessible to the host, but you don't need to map all of them.
Personally I feel like bridge networks add a lot of complexity for little value. I'd recommend either using host networking for simplicity or overlay networks when you need more complex routing (and use kubernetes or docker swarm to manage that overlay network). We have docs on kubernetes and docker swarm which I'd highly recommend instead of trying to set up a cluster with docker by hand (if you want to do it by hand, I'd stay away from docker).
Please link to the original code when you post. Also note that there are problems with this code, and worse, it's not what you really want here. You should read this.
I mentioned this at the bottom of the article, but the Cockroach Labs folks did a nice performance analysis that's worth reading if you are thinking about Cgo performance: https://www.cockroachlabs.com/blog/the-cost-and-complexity-of-cgo/
My comment wasn't completely accurate... This post goes over some of the specifics (and how this algorithm falls short, along with most others mentioned)
https://www.cockroachlabs.com/blog/rouding-implementations-in-go/
For those interested: CockroachDB, whose stable 1.0 was just released, takes the same approach as Spanner and is open source. In fact, it was started by a few folks from Google who worked on Spanner and other related tech.
I know Spanner uses atomic clocks to get tight time bounds on transactions. Since CockroachDB doesn't use atomic clocks they have to use a slightly different approach:
"While Spanner provides linearizability, CockroachDB’s external consistency guarantee is by default only serializability, though with some features that can help bridge the gap in practice."
"A simple statement of the contrast between Spanner and CockroachDB would be: Spanner always waits on writes for a short interval, whereas CockroachDB sometimes waits on reads for a longer interval."
https://www.cockroachlabs.com/blog/living-without-atomic-clocks/
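The "sometimes waits on reads" idea can be sketched as follows. This is a hedged toy model, not CockroachDB's actual code: if a read observes a value committed within the clock-uncertainty window (maxOffset), it cannot tell whether that write truly happened before or after it, so it must restart at a higher timestamp.

```go
package main

import "fmt"

// maxOffset is an assumed maximum clock skew between nodes, in ms.
const maxOffset = 500

// readResult reports whether a read at readTS can safely observe a
// value committed at commitTS, or must restart above the commit.
func readResult(readTS, commitTS int64) (visible bool, restartAt int64) {
	switch {
	case commitTS <= readTS:
		// Definitely in the past: the value is visible.
		return true, 0
	case commitTS <= readTS+maxOffset:
		// Inside the uncertainty window: ambiguous ordering,
		// so the read restarts at the commit timestamp.
		return false, commitTS
	default:
		// Definitely in the future: invisible, no restart.
		return false, 0
	}
}

func main() {
	_, restart := readResult(1000, 1200) // 1200 is within 1000+500
	fmt.Println(restart)                 // 1200: retry the read at a later timestamp
}
```

This is why the post says CockroachDB "sometimes waits on reads": the cost of uncertainty is paid only when a read actually collides with a recent write, instead of on every write as in Spanner.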
Try reading about CockroachDB; it's working great for my company so far: https://www.cockroachlabs.com/ . It's pretty much SQL (no joins yet, but they're working on it) with NoSQL-style scaling. Awesome!