Check out the post by Google developer Gregory D'Alesandre in this thread. He says they're working on a new scheduler that should help: for one user currently paying $1.40, he said the new scheduler should keep the new cost to $3.50-$5.00. On top of that, for Java users they just launched multithreading support, which would reduce costs further.
They also made the high-availability datastore cheaper per gigabyte. But the datastore will now charge per 10K operations, and they haven't yet nailed down what counts as an operation: they couldn't answer someone who asked in another thread whether a multiget is one op or many.
What I've seen people complaining about most is the per-instance charge. They're looking at the number of instances running according to their dashboards and calculating how much that will cost compared to their current CPU fees, and it's looking like the new price will be a lot higher.
A Google developer said (in the above-linked thread) that they're implementing a new scheduler that will reduce the number of instances, but he still estimated that the new price would be several times higher. For Java apps they're adding multithreading, which will help if you build your app to use it. I'm guessing they'll do that for Go before long. Python might be a challenge.
What I'm wondering is how they'll handle various APIs like XMPP, which so far has charged only for CPU usage. Now there's a charge per stanza, but no CPU charge. Will there be instance fees associated with that?
The datastore API is charging by the operation too; that's another that might be a wash, since there were CPU charges there as well. They haven't quite settled how they'll charge for e.g. multigets, but said on another thread that most likely each get in the multi will be charged as a separate operation.
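If that per-get interpretation holds, the arithmetic is easy to sketch. A minimal Python example (the 10K-operation billing unit is from the announcement; the request mix is made up):

```python
import math

def billable_units(multiget_sizes, ops_per_unit=10_000):
    """If each get in a multiget is billed as its own operation,
    total ops = sum of keys fetched; billing rounds up to 10K units."""
    total_ops = sum(multiget_sizes)
    return total_ops, math.ceil(total_ops / ops_per_unit)

# e.g. 1,000 requests each doing a 25-key multiget:
ops, units = billable_units([25] * 1000)  # 25,000 ops -> 3 billable units
```

So a multiget-heavy app could rack up operation units far faster than its request count suggests.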
Compared to AWS RDS, it's pretty cheap. The smallest/cheapest option on AWS is $0.105/hour = $2.52/day, plus $0.10/GB/month for storage, I/O, etc.
The cheapest option on Cloud SQL (D1: 512MB RAM, 1GB storage, and 850k ops) costs $1.46/day. It's not apples to apples, as the small instance on AWS has 1.7GB RAM and probably faster CPUs (hard to say from the docs), but it's still an option. You could also install your own DB (MySQL/Postgres) on an EC2 Micro instance for about $0.48/day.
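For what it's worth, the per-day figures above are just hourly rates times 24; a quick sketch (the RDS rate is from the comparison above, and the EC2 Micro hourly rate is inferred from its quoted daily price):

```python
def daily_cost(hourly_rate):
    """Convert an hourly instance rate to dollars per day."""
    return round(hourly_rate * 24, 2)

aws_small_rds = daily_cost(0.105)  # $2.52/day, smallest RDS option
ec2_micro = daily_cost(0.02)       # $0.48/day, run-your-own MySQL/Postgres
cloudsql_d1 = 1.46                 # flat per-day price quoted above
```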
I also believe, from Cloud SQL's docs, that backup storage is free of charge, whereas it costs extra on AWS; if that's true, there's some added value for what you're paying.
Still not as cheap as I would have liked (wish there was a small free instance to play with, not counting local development environment).
Btw, I've set up a Google Doc comparing App Engine to EC2 and Heroku (I'll probably add Azure later) that helps compare things. It's often hard to compare GAE and AWS directly, as GAE is purely a PaaS while AWS is more infrastructure-as-a-service with some PaaS pieces (RDS, SES, SQS, etc.).
https://docs.google.com/spreadsheet/ccc?key=0At9xIZA2GNQYdDRpSzczQ0hJX3YzeExrMnNZbVNjaXc
You're thinking of this thread, I believe. The reply there links a help center page which explains it a little more. This is an EU tax matter. I'm neither an EU citizen nor a tax expert, so I can't offer specific advice for the OP, but many users have chosen to register their applications as "business use" and handle the tax payments themselves.
I can't find it now, but there was a thread about this on the AppEngine mailing list a few months back. It has something to do with taxes for certain countries in the EU. Calculating/collecting for a business account is different (and apparently easier) than for an individual.
One thing to note about GAE is that resources scale with traffic. The quoted resources are per frontend instance; once you start seeing latency from concurrent requests, additional instances are spun up. When you're renting a virtual server, you're capped at the size of the server itself.
Don't forget about the free GAE quotas. You only end up paying when you exceed the free quotas.
With regards to platform fit, it can take some time to get larger applications running on GAE because of some of the platform restrictions. If you have anything that runs longer than 60 seconds, you may need to use a backend or a task queue, each of which has its own learning curve.
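The usual workaround for the 60-second limit is to slice the job into pieces small enough to finish within the deadline and enqueue each piece as its own task. A minimal sketch of just the slicing (the actual taskqueue.add call is elided, and the chunk size is arbitrary):

```python
def chunk_work(items, chunk_size=100):
    """Yield slices small enough to finish inside the request deadline;
    on GAE each slice would then be enqueued as a separate task."""
    for i in range(0, len(items), chunk_size):
        yield items[i:i + chunk_size]

# 1,050 records become 11 tasks of at most 100 records each
tasks = list(chunk_work(list(range(1050))))
```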
It's really sad; App Engine used to be SUPER powerful. The deferred queue system was my favorite feature, along with the ndb datastore, and having everything baked into the Python 2 runtime environment was a treat. Now they expect you to provision and manage separate services, and they're deprecating everything else.
The previous App Engine developer experience was one of our inspirations for building Supabase; we don't have all the features yet, but it's the direction we're heading.
I did a genetic algorithm project in Clojure a while ago; you can see the tests I used here. I tested functions like the fitness function in the obvious way, and made a non-random version of random functions like breed. It looked something like this:
(defn breed [str1 str2 position] ...)

(defn breed-randomly [str1 str2]
  (breed str1 str2 (rand-int (count str1))))
That way, I could easily write tests for (breed ...).
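The same split works in any language: keep the crossover deterministic and push the randomness into a thin wrapper. A Python sketch of the pattern (the single-point crossover body is my guess at what breed does):

```python
import random

def breed(s1, s2, position):
    """Deterministic single-point crossover -- trivial to unit test."""
    return s1[:position] + s2[position:]

def breed_randomly(s1, s2):
    """Thin random wrapper; only this part needs stubbing in tests."""
    return breed(s1, s2, random.randrange(len(s1)))
```

Tests then exercise breed("AAAA", "BBBB", 2) and never have to touch the random module.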
I spent a while trying to answer this exact question and in the end I settled with the implementation by Bill Katz that you mentioned. It is very simple to use and does a great job at keyword search. The stemming is a very nice addition that I am definitely using. Porter2 is a very good English stemmer.
One thing to note: if you want to filter models by keyword AND other properties, then you need to add some functionality to Bill's solution. Someone posted a start on that in the issues section of the GitHub page (https://github.com/DocSavage/appengine-search/issues#issue/4). If you go adding features to his solution, always be careful you don't end up creating exploding indexes.
As for the official solution from Google: as of the last SDK release they finally put it on the list of "Features on Deck" on the product roadmap (http://code.google.com/appengine/docs/roadmap.html), which hopefully means they are actively working on it. Personally, I hope they get SSL on third-party domains done first.
Good luck.
The link to the specific incident was already posted by another, but there is a Google Group specifically for the purpose of outage notifications: google-appengine-downtime-notify
I'm pretty sure App Engine is not considering adopting another implementation. It's already hard for them to keep both the Python and Java virtual machines up to date and synched on the latest features... I would imagine that if they chose another implementation it would be PHP, given the number of requests for it. So I don't see Node.js on App Engine in the foreseeable future.
But I mean, who really cares about the implementation? The JVM supports tons of runtimes, and ApeJS itself is using Rhino, which is extremely solid... why bother trying to use another implementation?
If it were up to me, I would also ditch native Python support in favor of Jython :) ... so the App Engine team could concentrate on one single implementation, avoiding out-of-sync features across different implementations.
Sorry for repeating implementations that much :P
If you don't mind having <string>@<appid>.appspotmail.com as your email address, all you need is to listen for incoming emails and forward them to your primary one (not as scary as it sounds: 5-10 lines of code).
See more: https://developers.google.com/appengine/docs/python/mail/
If you do need to keep your "someFakeDomain.com" domain (which you own, or whose DNS settings you control), then you can sign up with a service like SendGrid and, again, listen for incoming emails and forward them to your primary one.
See more: https://developers.google.com/appengine/docs/python/mail/sendgrid
I suggest you use a scheduled task with cron to do the caching. This will simplify the code and separate concerns: one script caches, and cron runs it every 10 minutes; another delivers the cached RSS to the client. https://developers.google.com/appengine/docs/php/config/cron
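The split looks roughly like this (sketched in Python rather than PHP; the names and the stale-fallback are my own, not from the GAE docs):

```python
import time

class CachedFeed:
    """Cron calls refresh() every 10 minutes; the request handler
    only calls get(), so clients never wait on the upstream fetch."""

    def __init__(self, fetch, ttl=600):
        self._fetch = fetch        # e.g. fetches the upstream RSS
        self._ttl = ttl
        self._value, self._at = None, 0.0

    def refresh(self):
        self._value, self._at = self._fetch(), time.time()

    def get(self):
        if time.time() - self._at > self._ttl:
            self.refresh()         # fallback in case cron missed a run
        return self._value
```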
> what kind of costs can i expect for something like this?
Without seeing the app, it looks like you'll mostly be using the data-transfer quota for sending the cached RSS out. You're not remotely touching the fetch quota limit. As long as one instance can handle the number of requests for the cached RSS, you won't be paying for extra instance hours either. You'll likely need quite a bit of user growth before hitting the free quota limits, assuming the app is written not to devour resources needlessly, of course.
https://chrome.google.com/webstore/detail/cngpndgifehgejmkemnmmiknpafnhpec
You can also share from Reader to Google+ via Chrome extension by Sebastián Ventura (https://github.com/lomegor/google-plus-reader).
Additionally, it can display Reader folders in the Google+ left menu, which looks fancy but isn't nearly as usable as Reader when it comes to quickly scanning some of the 2k+ RSS feeds inside 100 folders. This can be turned off in the options, but it may actually come in handy if you have fewer feeds subscribed and want to stick to the Google+ interface.
Some suggestions for writing the logs to disk are given in this Stack Overflow thread. Once they're on disk, you can use any other logging tool (e.g. tail, Elasticsearch, ...).
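Without the App Engine specifics from that thread, the stdlib version of "write the logs to a file that other tools can tail" looks something like this (the path and format are arbitrary):

```python
import logging
import os
import tempfile

# Send log records to a file on disk; tail, an Elasticsearch shipper,
# or any other external tool can then pick them up from there.
log_path = os.path.join(tempfile.gettempdir(), "app.log")
logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
handler = logging.FileHandler(log_path, mode="w")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("request handled in %d ms", 42)
handler.flush()
```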
It depends on what framework you're using; assuming Flask, you set it on the response.
Flask documentation with an example: http://flask.pocoo.org/docs/1.0/api/#flask.make_response
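If Flask isn't in the picture, the same idea in bare-bones WSGI (stdlib only) shows what Flask's make_response wraps for you; the header values here are made up:

```python
def app(environ, start_response):
    # In raw WSGI the headers travel alongside the status line;
    # frameworks like Flask just give you a nicer object around this.
    headers = [("Content-Type", "application/json"),
               ("Cache-Control", "max-age=600")]
    start_response("200 OK", headers)
    return [b'{"ok": true}']
```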
Hi PanosJee! Theoretically, yes. See https://github.com/rodaebel/instantlist/blob/master/src/instantlist/js/instantlist.js#L96 for a sample. Currently, we have to construct an "empty" client-side entity with a distinct key and then just sync it. As you might notice, this only works if we already know the exact key name. However, I'm planning to implement much handier API methods for loading (importing) server-side entities. I'd propose something like this:
storage.load([key1, key2, key3]);
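A hypothetical sketch of how that proposed call could behave, following the "construct an empty entity with a known key, then sync it" description above (everything here is illustrative, not the actual instantlist API):

```javascript
const storage = {
  entities: {},
  sync(entity) {
    // a real implementation would round-trip to the server here
    this.entities[entity.key] = entity;
    return entity;
  },
  load(keys) {
    // build an "empty" placeholder per known key, then sync each one
    return keys.map((key) => this.sync({ key }));
  },
};
```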
"With this new pricing, developments will be driven by the costs. I like to optimize my apps to make them better or faster but optimize them just to make them cheaper is a waste of time." - https://groups.google.com/forum/#!msg/google-appengine/obfGjbIkOTI/pnbxP3PFbK8J
No matter how good the results are from tuning, it's pretty much a failure if a developer has to spend time tuning, not for performance, but for cost reduction.
I just switched a Tipfy 1.0b project to Flask using this template as a start, and so far I really like it. The docs on Flask's site are top-notch, and there are tons of extensions available.
I really like having the choice of function- or class-based views (aka request handlers / controllers). The extensions for some other libraries I use (WTForms / caching / a Creole parser / etc.) are also nice. I've noticed my LOC has decreased and readability has improved from making the change. You'll also have a nice web framework for non-App Engine projects.
The only thing I liked better in Tipfy was how authentication was implemented, with support for OpenID providers or Google accounts without code changes, but I can get most of that for free straight from the App Engine API (if you trust their experimental OpenID support not to change much). I also read on Tipfy's mailing list that Rodrigo plans to remove the auth package before 1.0 gets released (if/when).
Thanks for your reply.
I had tried both approaches, OAuth tokens and username/password directly, but neither worked.
I think it is not possible to connect to Google's servers from GAE using PHP, as per:
https://developers.google.com/appengine/docs/php/sockets/
Under limitations it says:
"Private, broadcast, multicast, and Google IP ranges (except those whitelisted below), are blocked"
I gave up on GAE and used a regular hosting service instead.
Thanks
Hi there, I have solved the issue in the meantime.
First I used the ext.api.files API (which is deprecated now), then I moved on to GoogleCloudStorageClient (https://developers.google.com/appengine/docs/python/googlecloudstorageclient/), which has a lot more utility.
It also provides open() context managers and retry handling, and for tests everything needed is a urlfetch stub.