Several points in the SQLite docs suggest that it may not be the best fit:
> If you have many client programs accessing a common database over a network, you should consider using a client/server database engine instead of SQLite
Check. I have about 10 boxes that would need access, and the number continues to grow.
> But if your website is so busy that you are thinking of splitting the database component off onto a separate machine, then you should definitely consider using an enterprise-class client/server database engine instead of SQLite.
My databases are already on dedicated servers.
> there are some applications that require more concurrency, and those applications may need to seek a different solution.
Definitely a little scary.
There's not enough information in your question. What are your constraints? Do you already have some NoSQL database you need to use? Which one? How accurate does the count have to be (not really accurate / eventually consistent / even better)? How large is your large scale? Are you writing server-side software that you control, or is it running e.g. on a mobile device? Why one document? Would reading 10 documents be OK?
If you don't have a specific database but really want it to be NoSQL, then just use Redis: call INCR on a key and you have a counter.
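A minimal sketch with the redis-py client, assuming a Redis server on localhost; the key name is just an example:

```python
# Minimal counter sketch using the redis-py client.
# Assumes a Redis server on localhost:6379; the key name is arbitrary.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# INCR is atomic, so concurrent clients can bump the counter safely.
new_value = r.incr("page:views:home")

# INCRBY adds an arbitrary amount in one round trip.
r.incrby("page:views:home", 10)

print(new_value, r.get("page:views:home"))
```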
In the case of Firebase: https://firebase.google.com/docs/firestore/solutions/counters
Another interesting option is the G-Counter CRDT: https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type#G-Counter_(Grow-only_Counter)
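Roughly, a G-Counter keeps one slot per replica: each replica increments only its own slot, the value is the sum of all slots, and merging takes the element-wise maximum. A toy sketch, not tied to any particular database:

```python
# Toy G-Counter (grow-only counter) sketch; replica IDs are arbitrary strings.
class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count

    def increment(self, amount=1):
        # Each replica only ever increments its own slot.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + amount

    def value(self):
        # The counter's value is the sum over all replicas' slots.
        return sum(self.counts.values())

    def merge(self, other):
        # Merging takes the element-wise max, which is commutative,
        # associative, and idempotent, so replicas converge in any order.
        for rid, count in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), count)

# Two replicas counting independently, then syncing:
a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
print(a.value())  # 5
```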
Here's a comparison of the two written by the Couchbase people.
And here is a comparison of the two written by the MongoDB people.
Somewhere between the two is the truth.
But this is a good case study in computer architecture, isn't it? Two teams have implemented document-based data stores. That means something -- that they associate a parseable document with a key -- and it implies certain access patterns. (Couchbase is also kind of multi-model, so ...)
At the next layer down, though, they diverge because they decided to implement different high-level features (or not). Most notably: the way that sharding is handled, and the way that ACID is implemented.
And at a layer below that, they're wildly different in implementation-level tradeoffs. Different implementation languages, different file formats, different distributed systems patterns.
What did those differing decisions really mean in terms of things that users can perceive? What scenarios are blocked or enabled? That's really what you're asking, and it's a pretty difficult question to answer concretely.
They are also marketing this: https://www.mongodb.com/presentations/leading-quantitative-investment-firm-man-ahl-improves-throughput-25x-with-mongodb?utm_campaign=T5_V3_DEV_IT_E2_AHL_Case_Study_B&utm_medium=email&utm_source=Eloqua
There isn't much context in it, but it's worth investigating, IMO.
Postgres supports asynchronous commits, which don't wait for the data to be flushed to disk, and the setting can be changed on a per-transaction basis. The WAL is still fsynced at the normal interval, but your transactions won't wait for their writes to hit the disk. It's even possible to disable fsync entirely (at the cost of risking corruption after a crash).
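As a rough sketch of the per-transaction form using psycopg2 (the connection string and the events table are placeholders):

```python
# Sketch of per-transaction asynchronous commit with psycopg2.
# Connection parameters and the "events" table are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")

with conn:  # commits the transaction on exit
    with conn.cursor() as cur:
        # Only this transaction skips waiting for the WAL flush;
        # durability of already-committed transactions is unaffected.
        cur.execute("SET LOCAL synchronous_commit TO OFF")
        cur.execute(
            "INSERT INTO events (payload) VALUES (%s)",
            ("example",),
        )
# The commit returns without waiting for fsync; a crash in the small
# window before the next WAL flush can lose this transaction, but it
# will not corrupt the database.
```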
Adding a queuing solution (e.g., RabbitMQ) as a layer between the app and the database is a good idea if you want to absorb bursts of writes, as long as it's not important that writes are immediately visible.
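On the producer side, a sketch with the pika client might look like this; the queue name and message format are made up, and a separate consumer would drain the queue and perform the actual database writes, possibly in batches:

```python
# Producer-side sketch with pika: the app publishes writes to a queue
# instead of hitting the database directly. Queue name and message
# shape are illustrative.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="writes", durable=True)

def enqueue_write(record: dict) -> None:
    channel.basic_publish(
        exchange="",
        routing_key="writes",
        body=json.dumps(record),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )

enqueue_write({"event": "page_view", "page": "/home"})
connection.close()
```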