Cloud Spanner has a high entry point. A single node will cost you $650 per month, and Google recommends at least three, so your project has to be big enough to justify this price tag. If you don't mind self-hosting, there is an open-source alternative to Spanner: https://www.cockroachlabs.com. My team and I faced exactly this problem. We wanted a SQL database that is scalable and resilient, but the entry point, plus the fact that it would alter our local development flow, drove us away from Spanner, and we went with CockroachDB (pretty happy so far).
Add a scheduled Cloud Function. This function checks the price every X minutes and sends it to you.
It's basically free, since the first two million invocations are free every month. If you are not experienced with JavaScript and want to stick to Python, then the same logic applies to Google Cloud Run.
The easiest start with Functions is via Firebase Functions.
There are tutorials that cover basically exactly the business logic you need.
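Whatever tutorial you follow, the core is only a few lines. Here's a minimal sketch with the Firebase Functions SDK; the price endpoint, threshold, and notification step are placeholders, not anything from a real API:

```js
// Hypothetical price checker: URL, response shape, and threshold are made up.
const functions = require("firebase-functions");
const fetch = require("node-fetch");

exports.priceCheck = functions.pubsub
  .schedule("every 30 minutes")
  .onRun(async () => {
    const res = await fetch("https://example.com/api/price"); // placeholder endpoint
    const { price } = await res.json();
    if (price < 100) {
      // Notify yourself here: email via an SMTP client, a Telegram bot, etc.
      console.log(`Price dropped to ${price}`);
    }
    return null;
  });
```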
You're going to want a Firestore trigger (onCreate) that watches for new users (represented as documents). Then in the code handling the trigger (i.e., the Cloud Function) you'll want something that works with the GKE API to spin up a new pod. A new pod per user sounds expensive to me, but I have no idea what your use case is, so maybe it's reasonable.
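Here's a rough sketch of that shape using the Firebase Functions SDK and the Node Kubernetes client; the image name is a placeholder, and getting the function credentials wired up to reach the GKE control plane is hand-waved:

```js
// Sketch only: assumes the function runs with credentials that can reach the cluster.
const functions = require("firebase-functions");
const k8s = require("@kubernetes/client-node");

exports.onNewUser = functions.firestore
  .document("users/{userId}")
  .onCreate(async (snap, ctx) => {
    const kc = new k8s.KubeConfig();
    kc.loadFromDefault(); // or load explicit GKE credentials here
    const api = kc.makeApiClient(k8s.CoreV1Api);
    // One pod per user, named after the new document's ID.
    await api.createNamespacedPod("default", {
      metadata: { name: `user-${ctx.params.userId}` },
      spec: {
        containers: [{ name: "app", image: "gcr.io/my-project/user-app" }], // placeholder image
      },
    });
  });
```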
It simply won't work... there's no persistent storage.
I've used a free Mongodb atlas cloud instance with Cloud run: https://www.mongodb.com/cloud/atlas and that has worked fine (for a hobbyist thing).
Google Cloud Platform is a development tool and hosting platform for software developers. The documentation and tutorials explain how to use GCP, but they assume that you already have experience as a developer and generally know what you're looking to build.
If you've gone through dozens of hours of tutorials, but you're getting frustrated because you're not any closer to achieving your objective, that's a sign that you're not the intended audience for GCP and that you should fundamentally rethink your approach.
Based on your description:
> All I am trying to do is set up a web app that I can use as an interface for my Google spreadsheets data, because I have a lot of data on there as well as button and functions and graphs and I wanted to set up a console that I can use to easily click through different categories of information and do certain operations like sending generated emails.... I am coming to my wits end with this service but I really am determined to make this work so I can make running my business much easier.
... you should probably look for a "no-code" tool, rather than trying to build something in GCP directly.
Google's no-code solution is called AppSheet. While I have very limited experience in this space, my colleagues who have used these types of tools have also spoken highly of Retool, although feedback that I've heard from them and others is that it can be quite expensive. Reddit also has an /r/nocode sub that you might find helpful.
In any case, based on your post and what you're looking to accomplish, I don't think working further with lower-level GCP components is going to be productive.
You should look at Firestore Security Rules: https://firebase.google.com/docs/firestore/security/get-started
It uses Firebase Auth instead of IAM. I don't think it's possible to lock down specific collections via IAM.
This also means you need to use the client libraries vs the server-side admin libraries.
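As a hedged illustration of what those rules look like (the collection name is made up), here's a minimal ruleset that locks reads down to signed-in users and writes down to the owning user:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // "profiles" is a placeholder collection name.
    match /profiles/{userId} {
      allow read: if request.auth != null;
      allow write: if request.auth != null && request.auth.uid == userId;
    }
  }
}
```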
I recommend using the Free Trial after you graduate. For classwork, ask your professor to go to http://cloud.google.com/edu and apply for teaching credits so all students in the class (and the professor and TAs) can use GCP for free. Note that the Free Trial requires a credit card, while the teaching credits do not.
> Now I want to sftp/transfer files from/to the pvc volume.
Are you trying to access/transfer files that are used/created by a specific container? I think you would need to set up a dedicated pod with sftp to access the mounted files.
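If that's the case, here's a hedged sketch of such a helper pod (the claim name is a placeholder; swap busybox for an sftp server image if you want sftp proper). Once it's running, you can also just pull files with `kubectl cp` or poke around via `kubectl exec`:

```yaml
# Throwaway pod that mounts the existing PVC so its files are reachable.
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
  - name: inspector
    image: busybox
    command: ["sleep", "86400"]   # keep the pod alive for a day
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-wordpress-pvc   # placeholder claim name
```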
> How can I ensure wordpress always gets the correct machine?
Take a look at node affinity https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
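For the simple case, a nodeSelector fragment in the Deployment's pod template is enough. This sketch assumes a GKE node pool named "small-pool" (GKE labels nodes with their pool name automatically):

```yaml
# Fragment of a Deployment spec: pin pods to a specific GKE node pool.
spec:
  template:
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: small-pool
```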
> So I have manually set the pod to small
Are you using Deployments, and you've manually set a Pod of that Deployment to "small"? If so, when that Deployment's pods get recreated, they will revert back to their original definition.
Just SSH into the VM and install certbot: https://certbot.eff.org/lets-encrypt/ubuntuxenial-nginx.html. For beginners, I recommend using Cloudflare, which makes this easy.
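Assuming Ubuntu with nginx, the certbot route is roughly this (package names vary a bit by release; the linked guide has the exact steps):

```
sudo apt-get update
sudo apt-get install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com   # replace with your domain
```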
Nice one!
Here are a couple of additional suggestions that some folks might find useful to add onto this...
- Specify size or max-height/width from URL params
- Add a CDN front-end that caches repeated requests to provide virtually instant responses and potentially improved cost efficiency at scale
Okay, got it. Mobile Vision also has the face recognition thing. Thanks a lot. Here's my question on SO: https://stackoverflow.com/questions/44091577/what-is-the-difference-between-google-cloud-vision-api-and-mobile-vision
AFAIK that isn't possible when using the classic load balancer with backend cloud storage buckets. I don't think it is possible with the new one either.
What you could do is host your static files with Google App Engine standard environment and you can then use the app.yaml configuration to do some redirects. [0]
If you don't need the full blown capabilities of a load balancer you could also look into Firebase Hosting which supports Cloud Run backends for specific URL paths. [1]
[0]: https://cloud.google.com/appengine/docs/standard/python3/serving-static-files#serving_from_your_application [1]: https://firebase.google.com/docs/hosting/cloud-run
Yeah, port forwarding is a kind of NAT. Try searching for iptables port forwarding.
Also, your hypervisor might have some specific port forwarding setup of its own. For example, here is VirtualBox's guide.
If you've enabled the Firebase Management API before, this is what is causing this behavior. You can disable that API, then go to the Firebase Console and it will appear there. The reason is that once this API is enabled, it is assumed a Firebase project has already been added for that GCP project. Alternatively, instead of disabling the API, you can add your project using https://firebase.google.com/docs/projects/api/reference/rest/v1beta1/projects/addFirebase
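A quick sketch of that REST call from the command line (MY_PROJECT is a placeholder project ID):

```
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://firebase.googleapis.com/v1beta1/projects/MY_PROJECT:addFirebase"
```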
Unsure of if this works for any of the GCP products you are using, but Firebase App Check attempts to ensure requests are only coming from your own clients and not outside sources.
Check out Firestore live queries. Use a document per delivery; updating the doc will auto-push updates to the app client, with built-in authz for app users: https://firebase.google.com/docs/firestore/query-data/listen. Note that Firestore is a joint GCP and Firebase DB; the docs are a little better on the Firebase side.
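A minimal sketch with the web SDK (v8-style API); the collection name and UI callback are made up:

```js
// Subscribe to live updates on a single delivery document.
function watchDelivery(db, deliveryId, updateUi) {
  return db.collection("deliveries").doc(deliveryId)
    .onSnapshot((doc) => {
      // Fires once with current data, then again on every server-side update.
      updateUi(doc.data());
    });
}
```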
The small bit that you're missing here is that you can mount those credentials as a file inside the running docker container, without having to bake them in the image. If you use GKE/Kubernetes, take a look at https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod
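A sketch of that pattern, assuming a Secret named "service-account" holding a key.json (the image and names are placeholders); GOOGLE_APPLICATION_CREDENTIALS is the standard variable the Google client libraries look at:

```yaml
# Mount a service-account key from a Secret instead of baking it into the image.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: gcr.io/my-project/app   # placeholder image
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/key.json
    volumeMounts:
    - name: sa-key
      mountPath: /var/secrets
      readOnly: true
  volumes:
  - name: sa-key
    secret:
      secretName: service-account
```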
3 is basically following this. See also the kubernetes docs on persistent volumes.
> Along the way GCP bought Firebase which had it's own nosql store, Firestore which did not have the limitations above.
FWIW my understanding of the story is that Firestore was created with Google resources, and the acquired database was "Firebase Realtime Database", which is now also being deprecated in favor of Firestore. https://firebase.google.com/docs/database/rtdb-vs-firestore
I'm not exactly familiar with App Engine -- I normally work with Compute Engine or Kubernetes Engine. Here's my best guess:
Assuming Flex environment (since you're using Node.js and the standard environment is in beta), your frontend and backend are considered 2 separate apps, so at least 1 instance each.
I'm not sure what you do on the backend and whether it can be replaced with Cloud Storage. Cloud Storage is mostly used for file storage (think of it as an infinite Google Drive). Maybe Firestore from Firebase might fit what you're looking for? You would probably still need Cloud Storage to store your content images, though.
By my estimation, your cost is $88.44 per month:
IMO, 10k-20k pageviews/mo is very small. I used to serve 127k pageviews/mo on a shared VPS instance without breaking a sweat. If you're looking for something in the $5-15 range, try the g1-small or f1-micro instance size. You can easily upgrade your instance size later.
You COULD use OAuth, but service accounts are definitely the right way to do it.
You ask your users to create a new service account and give it the minimum permissions you need to get your work done.
For example, here is how Segment gets access to write into BigQuery: https://segment.com/docs/connections/storage/catalog/bigquery/
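A hedged sketch of the commands you'd ask the user to run (the project ID, account name, and role here are examples, not anything a particular vendor requires):

```
# Create a dedicated service account for the vendor...
gcloud iam service-accounts create vendor-access

# ...and grant it only the role the vendor actually needs.
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member="serviceAccount:vendor-access@MY_PROJECT.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataEditor"
```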
Hey Wessxx!
This book is outdated, but still an amazing source to get you started. You can easily go over a chapter a day (there are 12 chapters), and each chapter provides around ten questions to test your new knowledge: https://www.amazon.com/Official-Google-Certified-Professional-Engineer/dp/1119618436
Another great resource would be the Data Engineering course from A Cloud Guru. The hands-on labs especially will help get you started.
Generally, I would request a small budget from your employer to set up a Google Cloud environment where you could experiment. In my experience, your bread and butter will be: BigQuery, Dataflow, Pub/Sub, Cloud Run, Cloud Functions, Cloud Storage, ...
Enjoy and have fun!
Do you use Firebase Auth? In that case, Cloud Storage for Firebase could help. Use "Group private" conditions at the storage security rules with a custom claim on the ID token, e.g. "role=premium".
https://firebase.google.com/docs/storage/security/rules-conditions#group_private
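A hedged sketch of such a rule (the path and claim name are examples):

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // "Group private": only users whose ID token carries role=premium may read.
    match /premiumContent/{fileName} {
      allow read: if request.auth != null
                  && request.auth.token.role == "premium";
    }
  }
}
```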
You can use a temporary (burner) credit card via privacy.com, which draws on your bank account, and set the limit to $1. You can delete the card whenever. I usually use this method to sign up for trial programs that I know I don't want to convert, so even if I forget to cancel, it won't auto-renew :)
A load balancer is used to do exactly that: expose a public HTTPS endpoint and balance the incoming traffic across multiple web servers on the backend for scalability (which you don't really need). The application load balancer also terminates the HTTPS connection, decrypting the HTTPS traffic and then passing HTTP traffic to your backend Node application listening on port 3333. You will need to provide an SSL certificate to the application load balancer, which can be generated for free using Let's Encrypt as long as you own a domain name. https://letsencrypt.org/
Depending on the features needed from PostgreSQL, one can use CockroachDB Serverless (compatibility) instead of Cloud SQL.
Ah no. Yeah, check it out; it's super easy and quick to set up. Also, I think you'd pay almost nothing, only storage and network. Check out the pricing, but I doubt you'll go over the free tier:
You should be able to, but you can also just start playing with it using the real thing - Firestore even has a fairly generous free use tier, see here:
https://firebase.google.com/docs/firestore/quotas#free-quota
This is one of the reasons I suggested using it, since other storage options (Big Table, Cloud SQL, etc.) don't have a comparable offer. Firestore is also a schemaless document store, so you have a lot of flexibility in how you want to develop your application further.
It looks like there is an API to perform a Firestore database import. See the following:
https://firebase.google.com/docs/firestore/reference/rest/v1beta1/projects.databases/importDocuments
When we have an API, we can then invoke that from a Cloud Function or other environment.
https://firebase.google.com/pricing
Looks like 12c per GB of bandwidth used. I don't know about Firebase, but as it is theoretically the same service underneath, you could set a Budget Alert in GCP.
You'll get charged peanuts. Worst case if you approach your quota, just start a new project.
There's an official Firebase extension that makes the whole process easier.
You can install it to keep Firestore Collections automatically synced to a BigQuery dataset, or just use the backfill script for one off imports.
https://firebase.google.com/products/extensions/firestore-bigquery-export
I've done this with VS Code, but it should be approximately the same.
The two major gotchas I know about are:
1) PuTTY generates keys in a format that isn't compatible with anything except PuTTY (see the conversion option in PuTTYgen)
2) Windows (generally) expects the private keys to be in a file called id_rsa (no extension) in C:\Users\yourname\.ssh
When you join and get the Google Calendar link to add to your calendar, let me know if the event doesn't give you the correct local time.
I like the Google Calendar feature for viewing multiple time zones, but I also use worldtimebuddy.com to view up to four zones at a time.
Central is UTC-05:00
BST is UTC+01:00
https://www.cloudflare.com/teams/access/
It lets you set up a login page with a whitelist or whatever you like in front of a domain, subdomain or path, so users have to log in to access it. And it looks like they made it free for up to 50 users now rather than just 5.
Google Auth can be set up as one of the providers.
There are record types called NS, or nameservers. These basically tell the world where to look for the rest of your DNS. Cloudflare has some great material on this.
https://www.namecheap.com/support/knowledgebase/article.aspx/766/10/what-is-dns-server-name-server
Does this help - the waiting doesn't look good:
If a pod is stuck in the Waiting state, then it has been scheduled to a worker node, but it can't run on that machine. Again, the information from kubectl describe ... should be informative. The most common cause of Waiting pods is a failure to pull the image. There are three things to check:
From <https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/>
I would write an operator (kopf, Operator Framework SDK, etc.): https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
Here you want exactly 1 pod, so you have 2 choices: 1. use a Deployment with replicas=1 (the advantage here is that the autoscaler, cordon/drain, etc. still work; they have a hard time with pods that lack a replication controller), or 2. be the replication controller yourself in the operator.
In either case, your operator will have an API connection open to Firebase. When a customer joins, it will talk to the Kubernetes API to create the pod. In normal operation, if the pod dies or is deleted somehow, the operator will re-create it. When the user leaves, the pod would be deleted.
This is quite simple to do in kopf, in my experience.
I would recommend using a Deployment with replicas=1 so you can handle migration as your cluster scales.
I presume the pods aren't being scheduled on the right nodes. Have a look at this article. Try explicitly setting the node selector how you want it distributed and see if it resolves. https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
I believe you would want to run a Kubernetes job to do the tasks
https://cloud.google.com/kubernetes-engine/docs/how-to/jobs
Here is an example of using jobs with a queue
https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/
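For reference, a minimal Job manifest looks roughly like this (image and counts are placeholders):

```yaml
# Run 5 task completions, at most 2 pods in parallel.
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-task
spec:
  completions: 5
  parallelism: 2
  template:
    spec:
      containers:
      - name: worker
        image: gcr.io/my-project/worker   # placeholder image
      restartPolicy: Never
```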
We have very low usage of Grafana for now; it's very, very cheap on Cloud Run. It's behind an LB/IAP, and authentication is done with Google in the app: https://grafana.com/docs/grafana/latest/auth/google/
We configure it with Terraform, with all the bits: Secret Manager, VPC connector, etc. It's quite a pain to configure, as not everything is in Terraform, like the OAuth consent screen.
We use it with bigquery with https://grafana.com/grafana/plugins/doitintl-bigquery-datasource/
You have https://grafana.com/docs/loki/latest/clients/promtail/gcplog-cloud/ or you could use a bigquery sink for logs.
And there is a mysql source for grafana, so querying a mysql on a vm should not be a problem.
All of your sites can run off a single instance.
GCP does have functionality to set up your own SSL certificate, which can be obtained for free from services such as https://letsencrypt.org
As for the price, it's really hard to say. I run a NodeJS bot on an instance, and the cost varies from day to day depending on how much it's used. I've currently got my instance configured so that it's running on the bare minimum (cheapest option), and it'll automatically scale up if it needs more resources.
Set up a small GCP compute cluster with cloud scheduler to sync stuff through using rclone.
Edit: If you're savvy, you can set up an app engine instance to cut down on costs as rclone works as a container - pay what you use!
Yes, that is the solution I would propose. Way faster and cheaper to serve static files from cloud storage than query Firestore once the number of objects is large enough.
Distributed counters is the go-to solution for that if you're going to be doing them in Firestore. Alternately, keep a collection of "plays" for each level, and then roll them up into a count and periodically delete them once counted.
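A sketch of the sharded-counter pattern from the Firestore docs, in the v8 web SDK (collection names are made up to match the level/plays example):

```js
// Spread writes across N shard docs, then sum the shards to read the total.
const NUM_SHARDS = 10;

function incrementPlays(db, levelId) {
  const shardId = Math.floor(Math.random() * NUM_SHARDS);
  return db.collection("levels").doc(levelId)
    .collection("shards").doc(String(shardId))
    .set({ count: firebase.firestore.FieldValue.increment(1) }, { merge: true });
}

async function getPlays(db, levelId) {
  const shards = await db.collection("levels").doc(levelId)
    .collection("shards").get();
  return shards.docs.reduce((sum, doc) => sum + (doc.data().count || 0), 0);
}
```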
On line 8 you are listing the documents and then you are getting them, you don't need to do that, you can just use the forEach like in this example. Yes, it does one read per each object.
As a side note, is this real-time updated or shared state? Or are you just loading a level and then the state is managed on the player's computer? If it's the latter, I'd build a static file based on the Firestore data that can be loaded by the client via some other means. This would eliminate loading from Firestore at all in the nominal case.
As you mentioned you can indeed use firebase hosting as an API gateway to Cloud Run
https://firebase.google.com/docs/hosting/cloud-run
As you also mentioned you can configure endpoints to work with cloud run
https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-run
The difference here is that to get Cloud Endpoints to work, you have to deploy a container to Cloud Run that acts as the API gateway proxy to your service. Cloud Endpoints is based on NGINX.
If you go the Cloud Endpoints route, the config is more involved, but you would be able to track API activity as well as create a developer portal for the API if you liked.
If this is simply a use case where you want a unified URL that routes to multiple Cloud Run instances based on path, I would go with Firebase Hosting.
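For reference, a firebase.json rewrite sketch for that setup (service IDs, region, and paths are placeholders):

```json
{
  "hosting": {
    "public": "public",
    "rewrites": [
      {
        "source": "/api/**",
        "run": { "serviceId": "my-api", "region": "us-central1" }
      },
      {
        "source": "/admin/**",
        "run": { "serviceId": "my-admin", "region": "us-central1" }
      }
    ]
  }
}
```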
You can run init again in Firebase Hosting and change the static folder. https://firebase.google.com/docs/hosting/deploying
You mentioned "data logging", and GCP provides several managed logs services (Stackdriver Logging, BigQuery). But I don't think this is what you want.
It sounds like you want a database (not a log sink), since you plan on reading that data and making it part of your application. If you want the database to manage permissions by itself (without developing your own application), then I suggest checking out Cloud Firestore. I haven't used it myself, but my understanding is that Firebase and its components (including Firestore) were originally intended to be used directly from untrusted client applications (like web apps or mobile apps), and so you can implement per-user access control in the database and give clients access to write directly to your Firestore database.
No, you're listening to too much FUD. You're also confusing their free consumer products (and they're not selling your data from those to begin with) with the business products that customers pay for. Two completely separate business pillars.
To be more specific, the privacy page you linked to deals with Google's consumer products, which does not cover Firebase (and other Google Cloud Platform related products). Firebase is COMPLETELY different, and falls under their business cloud services. Here is Firebase's privacy policy: https://firebase.google.com/support/privacy. Notice the list of ISO and SOC certifications, which is short for "they're not selling anything".
TL;DR - Google is not selling your Firebase data for profit, as that would be a completely idiotic thing for them to do, as every business that uses their cloud platform would drop them instantly. Period.
For Firebase Functions, I'd recommend using the config settings. I'm not sure that they can access the environment variables from env.yaml; it might be another thing that's duplicated between GCP and Firebase without good documentation.
The config variables are documented here: https://firebase.google.com/docs/functions/config-env
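A quick sketch of that config API ("someservice.key" is a made-up name):

```js
// Set from the CLI first:
//   firebase functions:config:set someservice.key="THE API KEY"
// Then read it back inside a function:
const functions = require("firebase-functions");

exports.example = functions.https.onRequest((req, res) => {
  const key = functions.config().someservice.key;
  res.send(`key is ${key ? "set" : "missing"}`);
});
```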
That is extremely helpful. Thank you SO much. I really like the idea of making the writes very specific to make it difficult to "reverse engineer" them. Here's another question for you, if you don't mind.
When I make an HTTP function "callable" in Firebase... https://firebase.google.com/docs/functions/callable
It allows you to disable writes from users that are not authenticated. Does requiring the user be authenticated give my app a lot of extra security? Because authenticating can be as straightforward as logging in- and once they're in they can do malicious things, right? Or, requiring authentication would make programmatic attacks very complex?
You should not have a production app db rules set to allow all reads and writes.
Firebase allows you to fix this, it just requires some thought. And it is important to know that if any rule allows an action, that action is allowed. You can use that fact to selectively allow certain operations.
For example, set your global rule to allow read/ write if user is authenticated.
Then, on you products collection, allow all reads.
To make secure writes to those product documents, allow writes IF the write has exactly the properties you expect, no more no less.
So, let's say the write is expected to have a quantity, a user ID, and a date. You allow the write if the request length is exactly 3, and it contains exactly one date, one number, and one string, and if that number is less than X (whatever your max purchase qty might be), and if the length of the string is 20 or less (or whatever the max length of a user ID is).
With all of those rules in place, you have allowed writes but made it very difficult for your DB to be hijacked, because the writes that are allowed are so specific.
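As a hedged sketch of what that looks like in the rules language (collection name, field names, and limits are made up):

```
// Allow a create only when it has exactly the expected shape.
match /purchases/{purchaseId} {
  allow create: if request.auth != null
    && request.resource.data.keys().hasOnly(['userId', 'qty', 'date'])
    && request.resource.data.size() == 3
    && request.resource.data.qty is int
    && request.resource.data.qty < 10
    && request.resource.data.userId is string
    && request.resource.data.userId.size() <= 20
    && request.resource.data.date is timestamp;
}
```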
The examples above leverage the request.resource.data property of the incoming write request. Learn more here:
https://firebase.google.com/docs/reference/rules/rules.firestore.Resource.html
Egress to other google cloud services within the same region is free. FCM is also free.
You will get charged bandwidth when the user accessed the app after receiving the notification and that data is egressed to them.
https://cloud.google.com/compute/network-pricing#internet_egress
Firebase makes auth pretty easy, I never used Shopify, but if you can put in some javascript, you have 3 steps:
1) import firebase pack
2) init firebase
3) make the authentication
1) Firebase is really flexible on this: you can use the npm package or a CDN, and if you combine it with Firebase Hosting, you can also have the hosting server inject the SDK when serving the page
2) you will find in the console a snippet to do it for your project
3) based on the type of auth you want there are different snippets in the docs in different languages
The docs are really well made and full of examples and snippets
you can find the auth snippets https://firebase.google.com/docs/auth
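To give a feel for those three steps, here's a v8-style sketch; the config values are placeholders for the snippet the console gives you, and Google sign-in is just one of the available providers:

```js
// 1) + 2) Initialize with your project's config (placeholder values).
firebase.initializeApp({
  apiKey: "YOUR_API_KEY",
  authDomain: "your-app.firebaseapp.com",
  projectId: "your-app",
});

// 3) Authenticate, here with the Google provider via a popup.
const provider = new firebase.auth.GoogleAuthProvider();
firebase.auth().signInWithPopup(provider)
  .then((result) => console.log("Signed in as", result.user.uid))
  .catch((err) => console.error("Sign-in failed", err));
```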
Ok. I'm going to read up on what Dataflow can do and how it works with Cloud Composer. Eventually, we would want everything in BigQuery, although maybe not for an MVP.
I also came across a Firebase extension, "Export collections to BigQuery": https://firebase.google.com/products/extensions/firestore-bigquery-export/
I'm not sure whether it'll replace a more complex ETL, as it's limited to listening to document changes. Although, perhaps I can get the nested documents at a couple levels by setting up a couple listeners and using wildcards:
> Note that this extension only listens for document changes in the collection, but not changes in any subcollection. You can, though, install additional instances of this extension to specifically listen to a subcollection or other collections in your database. Or if you have the same subcollection across documents in a given collection, you can use {wildcard} notation to listen to all those subcollections (for example: chats/{chatid}/posts).
Nice! I actually just started doing something similar a few weeks ago, but used Firebase Hosting instead. I'm new to Hugo and pretty clumsy at wiring it into Cloud Build, but I think I'll be able to learn some things from this blog post and repo :)
In the documentation, they recommend Firestore for new projects: https://firebase.google.com/docs/database/rtdb-vs-firestore
>> We recommend Cloud Firestore for most developers starting a new project
The smallest form of Compute Engine (which is just VM hosting) is completely free, but with not much processing power.
I'd suggest App Engine (google-managed containers) in conjunction with Cloud SQL (MySQL), both of which I believe have a tier of free usage that gets applied before billing.
Oh, you're right. I knew they'd added per second billing in response to GCP - I didn't notice that caveat in the bullet points on this page - https://aws.amazon.com/ec2/pricing/ - not totally obvious. I don't run Windows workloads so I hadn't really dug deeply.
Interesting that GCP does per second with Windows, as my understanding is that AWS is just passing on the licensing cost from Microsoft here - I'd have thought GCP would be the same. Maybe they're absorbing it as they figure that people don't typically spin up and down a windows VM in the same way they do with Linux.
Vercel is using dynamic IP addresses, which prevents you from connecting using Authorized Networks... You would have to explore alternative connection options, perhaps authorizing with certificates...
If you were willing to go with a multi-vendor cloud, then Oracle offers APEX, which is perfect for things like this. If it's just a single small database, it would probably fit within their free tier as well.
Have you considered AppSheet which is now part of GCP?
You may want to use the terraform-ct-provider. You need to install it manually.
Also, CoreOS Container Linux reached end of life: https://coreos.com/os/eol/
You may want to try the alternative there, or Flatcar Linux: https://www.flatcar-linux.org/
But don't go with CoreOS Container Linux :)
Nailed it. OP, the tool you're asking about is called a histogram; they have a decent description of the idea here.
For metrics like load average it might provide a bit of savings in terms of reducing the number of discrete counters you consume. For example instead of recording 60 values per minute you record say 10 numbers. How many seconds was the load avg between 0 and 10 percent, 11 and 20 percent, 21 and 30, etc. etc. So the final values reported to your monitoring system for a given minute might look something like [2, 3, 5, 15, 25, 25, 15, 5, 3, 2].
Okay you now have a 6x reduction in metrics reported, not bad.
But say you're recording something like request latency, and your application is taking maybe a hundred requests per second. If you can turn every one of those 6,000 latency values per minute into a counter in one of 10 buckets, and maybe throw in a min/max/avg value for good measure (pun intended), now you've got a 600x savings in the amount of data you need to record to get visibility into your app.
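To make the bucketing concrete, here's a toy sketch; the bucket count and widths are arbitrary choices for illustration, not anything a monitoring system mandates:

```js
// Fold one minute of per-second load samples (0-100 percent) into
// 10 fixed-width buckets, so you report 10 counters instead of 60 values.
function bucketize(samples) {
  const buckets = new Array(10).fill(0);
  for (const pct of samples) {
    const i = Math.min(9, Math.floor(pct / 10)); // 0-10% -> bucket 0, 11-20% -> bucket 1, ...
    buckets[i] += 1;
  }
  return buckets; // e.g. [2, 3, 5, 15, 25, 25, 15, 5, 3, 2]
}
```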
Hi u/SardaarG
I found a website some time ago when I wanted to make a personalized GCP tee shirt for myself. Although they don't look like official Google icons, the quality after downloading is very good. You can download the icons as .SVG, .PNG, or .JPG.
Check out the website here: https://vecta.io/symbols/4/google-cloud-platform
Yeah, SendGrid is a bit touchy. If you don't configure your application to correctly use the from-address it will likely result in a suspension. I've seen thousands of emails be sent successfully over a month and then suddenly one day: poof. Suspended. They're not really keen on telling you why, either. It took a solid two days of escalation and throwing weight around to get any sort of feedback from them. I would definitely recommend switching unless they've changed how they operate...
Mailgun and AWS SES are both pretty solid. And, of course..
Depending on the use case, BigQuery can be a great time series DB. Firestore and Firebase are a bad fit for time series data in my opinion.
Also using a third party InfluxDB provider like this one: https://www.influxdata.com/products/influxdb-cloud/gcp/ might be a good choice
I asked this same question in other subreddits, and it seems I found two solutions; we are starting to use the Apache Guacamole service. It's a free service which allows both SSH and RDP.
The cost is for a full load balancer (not the cost of the IP) that is being created probably due to using an Ingress. In GKE the behavior for an Ingress is to spin up a load balancer.
If you want to continue using an Ingress you will need to use a custom Ingress Controller like HAProxy or Traefik that might be able to avoid creating the load balancer resource.
Another possibility is to drop the Ingress type and use the ExternalName service type.
This is probably what you want.
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#request
There are example request and response bodies, both of which should be AdmissionReview objects.
Here is the referenced k8s API type; given that it's a public Go module, you probably could just json.Unmarshal() onto a var of the type admissionv1.AdmissionReview (via import admissionv1 "k8s.io/api/admission/v1").
I'm not aware of a way to do this through private networking only. There are other ways to achieve secure communication however:
For generic Kubernetes services of type LoadBalancer, just create a public LB (by omitting the internal annotation) and set loadBalancerSourceRanges to the public egress IPs of your remote site, and GKE will automatically configure the FW rules appropriately.
You can alternatively use any Ingress controller that is fronted by a Kubernetes LB service (e.g. Nginx, Istio) by creating the front-end LB with the same loadBalancerSourceRanges setting.
Another alternative, for HTTP(s) traffic only, use GCE native load balancing and restrict access to it via Cloud Armor.
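For the first option, a minimal Service sketch looks roughly like this (the CIDR is a placeholder for your remote site's egress range):

```yaml
# Public LB, but firewalled down to the listed source ranges.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 203.0.113.0/24   # placeholder: your site's egress IPs
  ports:
  - port: 443
  selector:
    app: my-app
```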
From the error you're seeing, it would appear that a selector is required on the metric you've specified in the HPA:
> kubectl get pods --all-namespaces
Ok that's definitely helping. Getting some output here.
I ran:
kubectl get pods --all-namespaces
I see the name of this application and its namespace in the other column.
It's named "account-app-6759c67b85-xhk86" in namespace "dev".
So, googling for further commands to see this YAML you're referring to, I'm coming up with this, recommended from the kubectl cheat sheet:
kubectl get pod my-pod -o yaml
Which to get to work in my command console I have to run as:
kubectl get pod account-app-6759c67b85-xhk86 -o yaml --namespace "dev"
I see this section in output:
spec:
  containers:
  - env:
    - name: CONNECTION_STRING
    image: account/app:latest
    imagePullPolicy: Always
    name: account-app
gcloud and kubectl are two different things. You should probably understand that before you plan on building a CI/CD pipeline that deploys containerized applications on GKE.
Read up on configmaps: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
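A sketch of what that looks like for the CONNECTION_STRING seen above (the ConfigMap name and value are placeholders):

```yaml
# Move the value into a ConfigMap in the same namespace as the pod...
apiVersion: v1
kind: ConfigMap
metadata:
  name: account-app-config
  namespace: dev
data:
  CONNECTION_STRING: "Server=db;Database=accounts"   # placeholder value
---
# ...then reference it from the container spec instead of hard-coding it:
# env:
# - name: CONNECTION_STRING
#   valueFrom:
#     configMapKeyRef:
#       name: account-app-config
#       key: CONNECTION_STRING
```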
> Take a look at node affinity https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
Thanks, I will try out nodeSelector.
> Are you trying to access/transfer files that are used/created by a specific container?
I'm not sure. I cite you the command I ran from:
https://github.com/helm/charts/blob/master/stable/wordpress/README.md
helm install stable/nfs-server-provisioner --set persistence.enabled=true,persistence.size=10Gi
From my understanding this created a disk that can be mounted via nfs. I can even see the disk using
gcloud compute disks list
How can I access that disk from my local machine? I thought there would be a command like "gcloud disks mount", but there isn't.
Oh, yes. All I've seen has been heavy recommendations against HostPort, but that may be moot for cheap little setups.
Found it:
> Limitation: Due to #31307, HostPort won’t work with CNI networking plugin at the moment. That means all hostPort attribute in pod would be simply ignored.
https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#cni
My use case is to set up the DB for Grafana HA (https://grafana.com/docs/grafana/latest/tutorials/ha_setup/), nothing fancy.
What's your idea for read replicas or horizontal scaling?
Try TightVNC if you can. https://www.tightvnc.com/
It's weaker security than RDP or TV. At a minimum, lock the ports down to only your IPs and consider using a VPN since TightVNC does not encrypt the screen feed. https://www.tightvnc.com/faq.php
There may be a way to make it work with TeamViewer, for example by disabling locking the computer on disconnect, but I'm not able to test it atm.
I set up a conceptual overview in an Excalidraw session; you're more than welcome to give it a look over. I can look at getting a repo set up this week if that's something you think the community would find useful.
Happy Monday.
You can set it up for external collaboration, enabling clients to upload to your instance while you control the content. Here's a video.
If you have additional questions just let me know.
Another option is Apps Script. (https://script.google.com)
AFAIK this supports both HTML parsing and extracting text from PDFs, and has built-in cron.
You could also use it just for the cron part, to kick off a Cloud Function.
This has a free quota for personal use.
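A small Apps Script sketch of that cron-plus-fetch idea; the URLs and the "ready" check are placeholders, and the schedule comes from a time-driven trigger you create under Triggers in the script editor:

```js
// Fetch a page on a schedule and optionally hand off to a Cloud Function.
function checkPage() {
  const html = UrlFetchApp.fetch("https://example.com/report").getContentText();
  if (html.indexOf("ready") !== -1) {
    // Kick off a Cloud Function via its HTTP trigger (placeholder URL).
    UrlFetchApp.fetch("https://REGION-PROJECT.cloudfunctions.net/process");
  }
}
```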
Use FastAPI instead of Flask. Will be faster and easier. Super easy to use on AppEngine too. Also, I'd recommend using orjson for the JSON parsing. It'll save you a lot of headaches with formatting and again be faster.
Cloud Firestore is available in us-central1, but as multi-region: https://firebase.google.com/docs/firestore/locations
You'll probably be fine with having the app and DB spread out; it really depends on the app. I guess see how it goes.
I guess Firebase Realtime Database would be the best choice for the messaging part of the app. For other stuff, like user profiles, blog, settings etc, Firestore would do.
I see Firebase pricing is quite affordable: https://firebase.google.com/pricing/. $25/month can get you 100k simultaneous connections with the Realtime Database.
They have free services, with which you "might" be able to run a low-end web server on the free Google Compute Engine specifications. I currently use it (with swap space on the disk to supplement the RAM) for a small Ubuntu server and never receive any charges. https://cloud.google.com/free/
You could also maybe upload them into Firebase products and see if their Free plan works. https://firebase.google.com/pricing/
However, there is not a straight LightSail equivalent. It is something I like with AWS as it is very simplified and cost controlled.
Just install R and RStudio on a VM:
https://www.rstudio.com/products/rstudio/download-server/
Then start/stop it whenever you need; it can be costly to keep a persistent machine running. Coursera has one month free on any of the Google Cloud courses, Google Cloud has built-in training in the console, and there is also a separate page for tutorials and community tutorials.
Answering your question, I'd say MongoDB Stitch: https://www.mongodb.com/cloud/stitch. It is the same idea as the Firebase ecosystem (a serverless platform), but the free quotas are vastly more generous, and it also avoids vendor lock-in: after all, it is just MongoDB, so you can easily migrate anywhere you want (although it's quite cheap to keep scaling there).
Nodejs in Debian/Ubuntu repos is out of date... https://nodejs.org/en/download/package-manager/#debian-and-ubuntu-based-linux-distributions
EDIT: probably need to install nodejs package as well...
https://workspace.google.com/products/cloud-search
This is Google search as a service, available for 3rd-party data and standalone (i.e., without needing to have Google Workspace).
It has a REST API and connectors to index content, as well as a REST API for searching.
Unfortunately, it's not pay as you go.
I used Linux Academy (I guess now A Cloud Guru), Coursera, and Dan Sullivan's official study guides for Google Certified Professional Architect, Data Engineer, and the Associate Engineer exam. The three different sources of training emphasize different areas and I found that useful to get different perspectives. While I took the course work for each of the exams, I also built apps, projects, solutions, etc. on the Google Cloud platform in my personal account. I passed each exam on the first try.
You can buy Dan Sullivan's books on Amazon and they come with sample exams.
Here's the link to the Data Engineer book. You can search for other books from him, as well:
https://www.amazon.com/gp/product/1119618436/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1
I used these resources to help me pass
This is one of the best exam guides and is very recent.
Dan Sullivan's book is the best resource, and it comes with practice tests hosted by Sybex. There are some aspects that are dated, but if you pair this book with Linux Academy or Udemy classes, you should be ready. It'll likely take more than a week to finish the material.
https://www.amazon.com/Official-Google-Certified-Associate-Engineer-ebook/dp/B07Q8BXDST
You want a VPN provider, not GCP. Look at NordVPN or Private Internet Access (PIA). Or ask someone (young) from China, they all use VPNs to access blocked sites.
This is also much cheaper than a VM in the long run, and saves you the maintenance - plus, you get VPN endpoints all over the world. I think NordVPN is $100-120 for three years.
There's an official study guide book you can buy. It comes with practice questions in the book (around 100), and you can register your book online to use a randomized bank of around 400 questions.
Book is here. https://smile.amazon.com/gp/product/1119564417/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
So.... after lots of playing and tweaking here is where I am at...
I tried out Private Tunnel and it worked just fine from the command line.
I am now trying out PureVPN and it did that whole blocking thing. So I think VerizonFIOS (or my router) is blocking UDP or the port for the VPN. Using PureVPN with UDP on port 53 doesn't work. Using PureVPN with TCP on port 80 (their other possibility) works just fine.
I’m currently reading Google Cloud Platform for Architects by Packt. Check it out.
Google Cloud Platform for Architects: Design and manage powerful cloud solutions https://www.amazon.com/dp/1788834305/ref=cm_sw_r_cp_api_cPJ5Bb62V5C3Z