Price comparisons:
Amazon S3 (USA): $8.5/mo for 100GB, $85/mo for 1TB
Amazon Glacier (USA): $1/mo for 100GB, $10/mo for 1TB
Azure (USA GRS): $9.5/mo for 100GB, $95/mo for 1TB
I'm not sure how Google can offer these prices, except that most users will pay for way more than they use. They may have some clever cold storage trickery too, but I haven't heard of it.
Edit: downvotes? Did this really not contribute to the conversation?
I think the sensationalism of this article is getting to some people. I see a lot of comments that this is "cloud storage" and should never have failed. That simply isn't true -- EBS is a high-performance volume for use with other AWS services, where failure should be tolerated. EBS is only replicated inside the same facility, and it says that all over the specs for the EBS product. That's because EBS is designed to be fast, efficient storage for your AWS machines to use while they're running. We NEED an option that trades durability for speed, and that's what EBS is. Anyone implementing AWS with half a brain knows that critical data can't be stored on EBS for long periods of time. A tornado, earthquake, or hurricane could destroy your data.
I think some of you may be confusing EBS with S3 -- S3 is the true "cloud storage" that is designed for extreme data durability. The data is redundantly stored across multiple facilities (and multiple devices within each facility) in a region. In fact, Amazon touts "99.999999999% durability" for S3.
Oh, and guess what? You can snapshot EBS volumes to S3 as often as you want. We do it hourly. Losing an hour of data in the event of a disaster isn't the end of the world.
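For anyone who wants to automate that, here's a minimal sketch of an hourly snapshot job using boto3 (the volume ID is a placeholder, and this is just one way to do it -- not necessarily how we run ours):

```python
import boto3
from datetime import datetime, timezone

# Placeholder volume ID -- substitute your own.
VOLUME_ID = "vol-0123456789abcdef0"

def snapshot_volume():
    ec2 = boto3.client("ec2")
    # EBS snapshots land in S3 behind the scenes; after the first one,
    # only changed blocks are stored, so hourly runs stay cheap.
    resp = ec2.create_snapshot(
        VolumeId=VOLUME_ID,
        Description="hourly backup "
        + datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M"),
    )
    print("started snapshot", resp["SnapshotId"])

if __name__ == "__main__":
    snapshot_volume()  # schedule from cron: 0 * * * *
```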
More reading: http://aws.amazon.com/ebs/ http://aws.amazon.com/s3/
$2.12
I don't know what I was expecting, but I wasn't expecting that.
I use AWS for all the images on my website, because after doing some testing there was a notable difference in speed. I used the free tier for one year, then on November 1st, I started having to pay. I didn't really pay attention to the billing amounts in the past because I wasn't paying but I was expecting a much higher price.
Also notable, this is the biggest month I've had in the history of my website: I hit the front page of Hacker News 3 or 4 times, and the front page of Lifehacker and the Adafruit Industries blog. Most of my days were around 5k uniques, with spikes up to 20k with the plugs from the big sites.
All I can say about this is if you're thinking of using S3 for your CDN to make your website faster and save hard drive space, don't let monetary cost be an obstacle. This is a hell of a bargain, which is why I'm sharing it today.
Estimations used:
500MB update
1.5 million Homestucks trying to watch within 12 hours
My numbers for the cost of hosting on the amazon cloud hold for any length of time up to a month with the same loads.
All numbers crunched in javascript in my browser console
Amazon pricing from this page
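For anyone who wants to sanity-check the math (mine was in a browser console), here's the same back-of-envelope arithmetic in Python. The flat $0.05/GB transfer rate is a simplifying assumption; real S3 transfer pricing is tiered:

```python
# Back-of-envelope cost of serving the update, under assumed inputs.
viewers = 1_500_000      # Homestucks downloading within 12 hours
update_gb = 0.5          # 500 MB update
rate_per_gb = 0.05       # assumed flat S3 outbound transfer rate

total_gb = viewers * update_gb        # 750,000 GB, i.e. ~732 TB out
cost = total_gb * rate_per_gb         # ~$37,500
print(f"{total_gb:,.0f} GB transferred -> about ${cost:,.0f}")
```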
This, actually, is kind of true. Large CDNs do charge more for distribution in Australia and New Zealand. I've worked with Akamai, and Amazon S3 is very up-front about pricing differences based on where you are. The fact is that it costs more to distribute digital goods there, and those costs are then passed on to the consumer (with a little taken out for daddy, of course).
Quick anecdote: one job I had, we did the math and it was cheaper to buy hard drives and ship them to Australia and then throw the hard drive away than it was to send a digital good.
EDIT: Fixing typo cause I'm a dummy.
The storage will cost you almost nothing, but transferring 2 TB out of S3 will cost you about $180. You get 2 GB free with Dropbox, so why not just put the file there and point a bit.ly link to it?
peppy uses amazon s3 to store every accessible file
http://aws.amazon.com/s3/details/
i don't think it's a limitation caused by hardware
edit: whoever downvoted me, fuck you. why do i even contribute to this subreddit?!
From the perspective of a computer engineer:
- 1bil is way more than the number of Google Drive users - there were about 250mil who used the service monthly as of last fall.
- When estimating 'lite' vs 'mid' vs 'power' users - usually the distribution is not a bell curve but more like a 'reverse exponential' - i.e. most users will be 'lite' users, then the next biggest group will be 'mid' users, and after that maybe 1% would be 'power' users. Not that it matters because...
- Bandwidth is cheap. Orders of magnitude cheaper than storage space - for example, the cost of storing one gigabyte for a month might be 10-100 times the cost of transferring that gigabyte.
- How much does storage cost then? That's also pretty cheap - maybe a tenth of a penny or less / GB / month. Google charges for additional storage, but it's hard to tell whether that covers the cost of the additional space, because they could also cover the cost from ad revenue. A better indicator of storage costs would be Amazon. So assuming $0.001 / month / GB, 250mil users at the free tier (15GB space, less than 1% upgrade so paid tiers are negligible):
250,000,000 * 15 * 0.001 * 12 = $45mil / year, or $3.6bil over eighty years (assuming that storage costs stay the same and number of users stay the same).
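In code form (same inputs, same assumptions as above):

```python
# The post's own assumptions: 250M users, 15 GB each, $0.001/GB/month.
users = 250_000_000
gb_per_user = 15
cost_per_gb_month = 0.001

per_year = users * gb_per_user * cost_per_gb_month * 12
print(f"${per_year/1e6:.0f}M per year")            # $45M per year
print(f"${per_year*80/1e9:.1f}B over 80 years")    # $3.6B over 80 years
```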
Also a quick note: Moore's law is about transistor density, and the "cost halves every eighteen months" version of it doesn't necessarily apply to storage.
The issue with that is, bandwidth caps will become increasingly and quickly outgrown as the internet continues to grow and expand. This post does a pretty good job of explaining it.
Even if they decide to waive the caps and go to a purely usage-based model like other utilities, they're currently allowed to charge up to $2.50 per gig used. You really think they're going to be willing to give that up? Especially when research estimates put actual bandwidth costs at approximately $0.02/GB? Hell, I get charged less by Amazon for their service than my ISP charges me for the same bandwidth.
If we convince them to impose "only" a 200 or 250 GB cap, then when we start to need more as the internet grows, the exact same issue will arise. If we convince them to lower per-GB charges, we will also run across the same issue down the road when their costs begin to lower, but ours don't.
Except the next time it happens, they'll be telling us that this was what we wanted and it's not their fault.
Amazon S3 is your best bet. It's fast and the cheapest solution. I've used it in a few projects that needed it.
Glacier storage is going to cost you $0.0100 per GB per month. Requests to files (GET/POST, for example) run $0.004/$0.005 per 1,000 requests.
You can check out the pricing.
Depends on what you intend to do with the data. From your description, it sounds like the context you have in mind is game media. If so, digital downloads via the Internet are probably the cheapest. (Amazon S3 is $0.03 or less per GB.)
The next cheapest is CD or DVD. Pennies per disc when purchased in quantities of one thousand or more.
For local storage, internal hard drives are the cheapest way to go. However, mechanical hard drives can and do fail, so mirroring the data across two or more drives is advisable. (average cost per GB in 2013 was about $0.05)
Tapes are economical too, but you also need a tape drive and tape performance is quite slow.
Lastly, you have USB hard drives and flash drives. Prices are coming down on these every day. However USB hard drives tend to live for only a few years and flash media has limits on how many times you can write to it before the capacity begins to degrade. 2GB and 4GB flash drives currently retail for about $2 each. That price can probably be cut to less than half when purchased in large quantities direct from the manufacturer.
TL;DR: Backblaze will store data in 1 physical location for 0.5 cents a gig.
S3 stores copies in 3 separate physical locations for 0.7 cents (Glacier) or 1.25 cents (Standard-Infrequent Access) per gig.
S3 stores copies in 2 separate physical locations for 2.4 cents per gig (Reduced Redundancy tier).
Reasonable pay-per-usage is fine. "Reasonable" would be about $0.09/GB, so for 300 GB (Comcast's cap), the price would be $27. We could generously double it to $0.18/GB, for $54--still less than a typical Comcast bill today.
That's not quite right - it varies by region.
> Amazon S3 buckets in the US Standard region provide eventual consistency. Amazon S3 buckets in all other regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES.
Here is the pricing list from Amazon. Granted, that is just bare storage with no interface, but there is no way Evidance's interface is worth the price. Google has a somewhat more robust interface, and their price reflects that, but they are still ridiculously below Evidance. Here is a list of some various prices, in which Dropbox's $0.10/GB is referred to as "gratuitously overpriced".
My guess is that either there are some evidentiary rules, regulations governing digital evidence, or some such nonsense that Evidance is charging a premium for being able to satisfy. That, or they are banking on LEAs not doing proper due diligence on vendors.
I was thinking about this, too.
According to Amazon's pricing page, http://aws.amazon.com/s3/#pricing, if they want to store 10 PB (10,485,760 GB), that'll cost them $576,716.80 per month (at $0.055 per GB per month).
That's assuming they want to serve all of it. Putting it all in Glacier would still be $104,857.60 per month (at $0.010 per GB per month).
Without making too many assumptions about their operating expenses, I'll just say it's not at all obvious to me that s3/glacier would be a good deal.
1 GB of transfer from S3 currently costs $0.09. How much do you think a developer hour costs? How much does it cost per minute if your service is down? If saving bandwidth is your greatest concern, you have messed up priorities.
For static content you could try Amazon S3.
Free tier is up to 5GB and you shouldn't have to worry about traffic since it will scale on demand to fit the traffic.
There's also a checkbox to enable a static website hosting mode, so it will serve a static site like a Jekyll build.
If you need your own domain you can also use a CNAME with S3, though it's a bit tricky to set up SSL with it.
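If you'd rather script it than click the checkbox, here's a minimal sketch using boto3 (the bucket name is a placeholder; for custom-domain setups the bucket name generally has to match the hostname):

```python
import boto3

# Placeholder bucket; for a CNAME setup, name it after your hostname.
BUCKET = "www.example.com"

s3 = boto3.client("s3")
# Equivalent of ticking the static website hosting checkbox.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "404.html"},
    },
)
```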
Those 11 9s on S3 are actually for durability, meaning even if the service is unavailable it's incredibly unlikely that they've actually lost any of your data. The actual uptime SLA is 99.9% monthly.
If you go by the GOG model of "just serve the installer" with no DRM or other crap, about $0.10 per gigabyte.
But with DRM it will be higher, as you actually need to authorize every request to play a game. Still, processing power/bandwidth over the lifetime of a game will add just a few percent to the actual game cost, especially if you authorize "all at once" like Steam.
Come to think of it, DRM actually increases the cost of game development, as you need to spend money to develop it (or integrate it into your game, in the case of Steam) and pay for servers.
I'm just ballparking based on S3's pricing. Once you get to the point where you're transferring out several hundred terabytes of data from S3, you're paying about $0.05 a GB. Titanfall is about a 40GB download, so that comes out to $2.
> When the Amazon S3 servers die... and they will die as history has shown time and time again.
S3 is one of the most reliable products available. On their product page they state:
Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year.
That's a lot of 9s. And we've not seen anything to disagree with that.
> What happens to the CSS code? Is there another backup of it?
If something did happen to the objects on S3, then your CSS would be safe in the wiki.
$0.055 per GB Storage on Amazon S3.
$0.050 per GB Data Transfer on Amazon S3.
http://aws.amazon.com/s3/pricing/
About equal. Both of us are wrong there, probably. But I bet the cost of storing the images is not that much less. Think about the number of images uploaded that see little data transfer.
You've got a point. Your reasoning is wrong and greatly exaggerated, but yeah, I bet bandwidth costs a fair bit more. It's not like disk space is 'nothing', though.
Actually, the erroneous assumption is that storage costs $500/GB. It doesn't. Here's what Amazon charges for (economy, non IOPS-class) storage:
http://aws.amazon.com/s3/pricing/
$0.03/GB/month for most expensive tier, and $0.0275 for least.
Storage pricing is complex--the main complexity is that you wouldn't run a transaction workload on S3--but you can quickly calculate that the person who originally cited the $500/GB figure simply had a massive brain fart.
here's the basic pricing for reading/writing S3: free accounts are given 20,000 GETs (their terminology for reading a file) per month for 1 year: http://aws.amazon.com/s3/pricing/
are you looking to copy the data, or run analysis?
if you want to analyze it without making your own copy, you might just need to write code that can read from S3 buckets/files. I'm sure Amazon has S3 access (read and write) libraries for most common languages. Here's the SDK for Python, for example: http://aws.amazon.com/sdk-for-python/
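with the current boto3 flavor of that SDK, streaming an object without keeping a local copy looks roughly like this (bucket and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket/key standing in for one of the public data sets.
obj = s3.get_object(Bucket="some-public-dataset", Key="part-00000.csv")
for line in obj["Body"].iter_lines():
    # analyze each record in place -- no local copy of the file needed
    print(line[:80])
```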
if you want to copy one of the data sets, this documentation may help: http://aws.amazon.com/importexport/
edit: phraseology
Maybe the bandwidth costs would be too high for them? That's a lot of data to shoot downstream on a single day for thousands and thousands of people that might not buy it.
Edit - some pricing information using Amazon bandwidth as a measuring stick:
Around the launch day, AW was reported as having sold 3,722,031 copies. If we ignore any sales that have happened since then and assume only 1/3 of these people decided to download a 5 GB DLC...
5 GB * 1,240,677 people = 6,203,385 GB transferred
The cheapest published price per gig on Amazon's S3 pricing page is $0.05/GB.
The total cost to whoever it is that's hosting it in this case would be roughly 6,203,385 GB * $0.05/GB ≈ $310,000.
tl;dr: it's super expensive to do this.
If the AMI is of an EBS-backed instance, you pay the EBS-snapshot-in-S3 storage rate, $0.095/GB/month.
If the AMI is of an instance-storage-backed instance, you pay the standard S3 rate, starting at $0.03/GB/month.
I believe EBS-backed instance images tend to be smaller than instance-storage-backed images in general, so EBS-backed AMI storage might not actually cost 3x as much in practice.
Here's a link to Amazon S3: http://aws.amazon.com/s3/pricing/
Stores literally terabytes of data, mature, well tested, multiple availability zones in case of failure, dirt cheap, awesome API with drivers in multiple programming languages, versioning. Why in the fuck would anyone want their data to instead be stored on the hard disks of random internet strangers?
Answer: For storing child porn and other illegal stuff that would get you locked up if you tried to store it on a legit provider. See: http://storj.io/faq.html#faq-5-1
Their own FAQs acknowledge the potential for illegal use / child porn storage. That's exactly what it will be used for. And by offering your hard disks to it, you can and will be held liable by the law if your computer is used for hosting child porn.
Ridiculous.
Currently building web apps mainly in [Laravel](//laravel.com).
We have a fairly typical webdev workflow but I'll go over a few choice parts and then run through the entire list.
[GitHub](//github.com) for version control (of course)
[Vagrant](//vagrantup.com) and [VirtualBox](//virtualbox.com) for building and distributing identical versions of the same development box, using Laravel's DB seeder to generate mock data to simulate the data we actually want to test on production.
[Heroku](//heroku.com) and AWS for deployment, which use our composer packages to know what to install so we don't leave any dependencies out.
Vagrant has saved me so much time and hassle as far as setting up a local dev environment goes, you wouldn't believe. Do you work alone or are there many members in your team? We currently have 3 developers all using exactly the same config out of the box to get a development server up and running in 10 minutes. The whole point is everyone is working on the same development box with the same test data, which, using Composer, is mimicking exactly what will be running on our live site.
Our general workflow:
2 week sprints of:
I don't think it's just WordPress. Self-hosting videos in general is not a good idea. You run into issues with bandwidth, storage and file formats. Services like YouTube / Vimeo handle these hurdles for you. If you're set on hosting the videos yourself you should check out S3, since bandwidth/storage is cheap (pair it with Amazon's transcoding service if you need the encoding handled too). Then you can integrate that with your WordPress site.
Everyone has bad reviews. At $5/mo, everyone is going to have occasional downtime (some just have more than others, or at least, more consecutively than others).
Start with the providers in the sidebar, if you have issues with them then report your situation to the mods for review. Or there is S3 (http://aws.amazon.com/s3/pricing/) if you can figure out their pricing model (http://calculator.s3.amazonaws.com/index.html).
Exactly this. When I put an object on S3, I am paying a flat rate for the exact amount I use. I have full control over that object and how it will be used - it won't be datamined and it won't be shared without my express consent. Furthermore, I know that those objects have 99.999999999% durability (yes, that's Amazon's quoted figure), with global redundancy that can't be beat.
Amazon also gives you a killer SLA with everything you store there: http://aws.amazon.com/s3/sla/
Basically, when it comes to file storage, there is no comparison between these two.
Also, I'd like to add that regular backups of all your S3 data are going to cost money. They charge for data transfer out, so regularly copying the entire contents of what you have in S3 out of AWS adds up.
Here's the pricing for that:
Check out the JW Player for Wordpress. If you're planning to use a lot of bandwidth you could check out Amazon's S3 service.
Amazon S3 has a pretty good track record for security and resiliency.
You could also consider getting something a bit more featureful which could aid your development, like a private github instance.
I'm hosting the site on nearlyfreespeech.net, and the uploaded images with Amazon Web Services (S3 and Cloudfront). They scale the price with the traffic, which is nice. Amazon charges about 12 cents per GB of bandwidth and 14 cents per GB of storage per month. NearlyFreeSpeech is nice too, it's cheap for small sites and it scales for bigger ones. It started out expensive ($1 per GB of bandwidth) but the price goes down with more traffic (which is added over lifetime, not per month).
You really need a source for that pricing. Let me counter-example; Amazon S3 costs $0.12/GB (up to the first 10TB/month). That's for a site that can have a HUGE single pipe to it, and doesn't have to worry about digging up roads/walls to get to individual houses/flats, or a lot of the other stuff that residential bandwidth deals with.
For your file storage, like MP3s, you do not want to store them in Mongo. Any database is a bad place to store binaries because they will weigh the database down. Use http://aws.amazon.com/s3/ to store files, because it is faster/easier. You store the id of the song in the DB and make a call to S3 when a user requests it.
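A minimal sketch of that pattern with boto3 (the bucket and key names are made up), using a presigned URL so the file never streams through your app server:

```python
import boto3

s3 = boto3.client("s3")

# The DB row stores only the song's S3 key (hypothetical naming scheme).
song_key = "songs/4711.mp3"

# When the user requests the song, hand back a short-lived signed URL
# instead of piping the bytes through your application.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-music-bucket", "Key": song_key},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)
```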
Passport.js is the way to go when using node/express combo because it has good documentation and tutorials
You should think about also setting up a git repo for your project and host it with Heroku, because with this much to learn you don't want to learn server configuration as well. (look into NGINX after you complete your app).
Angular is a front-end framework - I don't know about Backbone/React, but Angular is useful for highly interactive applications that require a lot of I/O, like your app does. It gives you bidirectional data binding with DOM elements, so as soon as one changes the other changes too. Choose whichever MV* you like; at this stage it does not matter.
I have never used one but there is Amazon S3, Google Cloud and Rackspace off the top of my head that you could research
Servers, content delivery and bandwidth have become ridiculously cheap in the last decade.
Orders of magnitude cheaper than the logistics of printing and moving around physical disks.
The cost of serving content from Amazon's S3 is about 9 cents per GB:
> RAID is not a backup solution! Put that shit on S3.
...You may not be familiar with RAID types or why they would be used for backups. I'm specifically using a file server running FreeNAS, a modified version of FreeBSD, with a RAID scheme called "RAID-Z2", which for all intents and purposes is RAID 6. My server consists of six 4TB NAS disks with the highest dependability on the market, in an array with enough parity that it can completely lose 2 disks at the same time without losing any data, while maintaining a total of 14TB of capacity. So the only way I can lose any data is if 3 or more of my disks fail at the same time, which outside of a fire is wildly improbable.
The OS also supports several plugins which allow for some offsite backups, which include Amazon's S3 as an option; however, I opt to back up a handful of specific important files to Google's cloud services as well. Not to mention I've also discovered a trick to get unlimited storage out of Amazon's consumer cloud storage. S3's pricing model would run about $422.91 per month for the same capacity, and would have inferior speed compared to an onsite backup.
But yeah. Disk arrays with mirroring or parity are definitely for backup.
Thanks for the correction. I'd forgotten about their reduced-redundancy options too: http://aws.amazon.com/s3/pricing/
Their sales reps tell me that there is still 'plenty of margin' -- even on storage.
For me I see a difference between sync and backup services. I use Google Drive to store docs and other files I 'use', but for the endless RAWs - they aren't something I want to access via Google Drive on the web for example, but I do want them backed up, hence S3. If I used a sync service instead, and accidentally deleted some files, that deletion is then synced which defeats the purpose of backing up!
S3 pricing is here: http://aws.amazon.com/s3/pricing/. I use reduced redundancy.
I build all my SPAs so that they don't need any server-side mechanics. They are basically read-only minified files. Usually I put them on Amazon S3; before that I configure the S3 bucket so that it serves static pages. To map domains (DNS) to buckets you can use Amazon Route 53, and/or you can also use the Amazon CloudFront CDN for better content delivery on a global scale. With all these steps you don't need to host your own web server, and it is probably also cheaper.
If it's a static site, you can host it on AWS's s3 pretty freaking cheaply (you'd only get charged for storage and bandwidth). Upload files, make public, and wipe hands on pants.
IDK what kind of bandwidth you're expecting, but I'd bet you'd be under the free usage tier for your first year, and I'd think it would be less than $1/month after that.
Kind of depends on what you're storing, and how/when you need to access it.
Amazon S3 Glacier is $0.01/GB/month, so 100GB would be $12/year. There are some other minor fees for transfer etc., and it's meant for archival, not frequent retrieval, so it depends on what you want to do. For backup, however, I think it's great.
Amazon Cloud Drive is $12/year (or free with prime) and offers unlimited photo storage, along with 5GB for non-photos. Or $60/year for unlimited.
>They have a camera that lasts for 12 hours, which is more than enough to capture one whole shift.
Once again, it doesn't record for 12 hours, they still have to activate it just like they do now.
>Amazon's cheapest S3 storage offering is $.01/GB/month. Let's say a department has 100 cops on duty 24 hours a day, that's 2400 hours of video per day. If each video has a bitrate of 8 mpbs, that's 28800 megabits/hour, or 3600 megabytes/hour, or 3.5 GB/hour. That's 8,437 GB/day. Let's say the police department is expected to hold onto these videos for a year, so 8,437 * 365 = about 3M GB, or about 3 PB. This works out to about $30,000/month when stored on Amazon S3. That's pretty reasonable for a police department with 100 officers on duty all the time, to completely eradicate excessive force. I say it's worth it.
Again, you are massively underestimating the storage requirements. That info you posted earlier was for way lower quality video than they use.
They have a pricing guide. My estimate was based exclusively off of downstream, as it looks like that would be by far the major contributor. To figure out an extremely approximate price, I took the price band that the majority of the bandwidth would fit into, multiplied the per-gb figure by the number of gbs transferred, and then divided it by the expected revenue if all buyers pay $50 for the title.
The expense of the bandwidth is going to be a little under 7% in reality, because some bandwidth fits in different price tiers, but that figure doesn't include anything but actual transfer costs.
Pretty sure that's incorrect; you are billed monthly based on how much data you have stored on their servers: http://aws.amazon.com/s3/pricing/ You are not charged for uploads, but you do have to pay for downloads.
> The problem with having cameras is that you need to retain the video for years because of the FOIA. It gets super expensive quite quickly.
You should be using EC2 instances as your processing nodes and S3 as your storage service. Check out http://aws.amazon.com/s3/ As for a shared file system, using Gluster or creating your own Windows share would work as well.
You could sign up for Amazon S3 and host a basic HTML site there for 1 year for free: http://aws.amazon.com/s3/
Amazon has a guide here: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
Sorry, I didn't check and just realised this is Techsupport and not sysadmin, so my answer was a little sparse on useful info. S3 is secure enough (have a look at the security part of the FAQ here). However, you will have to use their tools to get the data up there and back, which takes a little reading compared to the intuitive drag-and-drop folder mechanism that Dropbox uses. That being said, the process isn't too bad to utilise; check out the quickie user guide here for an overview. The short of it is that you can create access lists, and have the data encrypted while it's up there if you want.
Host a site somewhere cheap - get an AWS S3 bucket and set up a DNS record to point a subdomain of your site at it.
Host podcasts there and host site on the cheap as chips hosting.
Downloads from S3 are lightning quick and reliable, so if anyone bypasses your site and uses the feed at iTunes or whatnot, it's still dead fast.
Cloud. For example, here is Amazon's S3 pricing: http://aws.amazon.com/s3/pricing/
You need to provide the software to actually get the data to the S3 account. I've been using Jungle Disk for years.
You're just looking at the pricing for storage but not I/O or transfer (http://aws.amazon.com/s3/pricing/). Granted it'll still come out to less than a dollar in most cases. My whole goal with my blog project was to create an entirely free solution. In the process of doing so I learned a new skill that I thought other people might like to know about. It keeps all my files in one place and is easier for me to manage. If you don't want to use it no harm no foul.
You can't unless you put it behind a load balancer*, and then you're paying $18/mo for the load balancer. ELB is great if you occasionally need additional instances to handle traffic spikes or need to fire up instances in different availability zones without mucking about with the DNS.
But you have to have something running 24/7 to know that there's traffic coming in.
If all you're doing is serving files, you could use S3 and it would be a lot cheaper than paying for another VPS.
FWIW, your first EC2 micro instance is free for the first year and only about $15/mo running 24/7 after that.
* I say can't, but if you had AWS command line tools running and another web server, you might be able to write some kind of callback that would boot the server using the AWS API but then you'd have to wait for it to boot up, etc..... S3 is just a lot more practical.
Dropbox will die within hours if people find your game interesting enough to download.
Look into Amazon Web Services and get yourself an S3 bucket. You only pay for what you use. If traffic is low you can get away with paying pennies per month. Your first GB per month is free, then it's $0.12 per gig up to 10 TB. After that the price goes down to $0.09 per gig up to 40 TB, and so on... And you get to set an email alert should things get expensive and you want to shut her down (you choose the dollar breaking point).
This is probably better if you are selling some indie game or making some sort of profit per download. If your game was one GB and you were charging $3.00, you'd still be taking in $2.88 per download.
Look into it. I think it's a pretty good deal.
Edit: some minor alert detail.
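That alert is just a CloudWatch billing alarm under the hood. A rough sketch with boto3, assuming you've enabled billing alerts and already have an SNS topic wired to your email (the ARN and threshold below are placeholders):

```python
import boto3

# Billing metrics only exist in us-east-1.
cw = boto3.client("cloudwatch", region_name="us-east-1")

cw.put_metric_alarm(
    AlarmName="monthly-spend-breaking-point",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,            # the billing metric updates a few times a day
    EvaluationPeriods=1,
    Threshold=10.0,          # your chosen dollar breaking point
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```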
Dropbox uses Amazon S3. 50 GB with Amazon directly is $0.093/GB (plus minor transfer fees) = $4.65 per month, versus Dropbox's $9.99. Amazon has even lower rates for more storage, but you only pay for what you use. There is also free software (Cloudberry Labs, Gladinet, etc.) so you do not have to use Amazon's web interface and can mount it as a mapped drive. And Amazon has a 5 GB free for 1 year plan, versus Dropbox's 2 GB.
If you look around a little, you can find 100mbps unmetered colo ports starting at around $100/mo. That's 30 terabytes per month, if you could use it all, or about 0.33 cents per gigabyte. Admittedly, you'd have a very hard time pegging that.
Amazon S3 charges five cents per gigabyte at their fourth-highest tier. The curve seems to indicate the price would continue dropping in larger bulk, but the next tier prices aren't publicly available.
So, when I say "cheap", I mean "cheap".
Does anyone know how this compares to Amazon AWS? I know AWS isn't technically one array, but they list rates up to the 5 petabyte range. If one customer could theoretically rent out 5 petabytes (costing over $275,000, mind you), you'd think they'd at least have the capability to hold quite a bit more than that.
> And there's the problem. Bandwidth does not cost 15 cents per megabyte. An episode of a TV show starts at 300 MB (for standard definition), and HBO/Hulu/etc certainly don't charge me $45 each. Hell, I could (contractually) use 250 gigs of bandwidth each month, and my bill doesn't quite come to 38 thousand dollars.
You don't even need to go that far. Amazon has a cloud storage service called Simple Storage Service (S3). That's the price at which Amazon sells their own bandwidth to third parties (so you'd expect it costs them less than that).
What's that price, you say? Well, it's on a sliding scale (the more bandwidth you use, the less you pay per gig), but it starts out at $0.12. Per gigabyte. That's $0.00012 per megabyte.
I recommend using a cloud hosting service such as Dropbox, Google or Amazon.
I have found this to be the most practical solution to reasonably secure your data. No need to worry about redundancy, or even multi-site backup. Unless you have lots of time and nothing better to do, I think cloud backups are the way to go.
This seems like a very cheap way to do web hosting. I wish I had a better handle on statistics for my personal portfolio website to be better able to do calculations.
I calculated 200 visitors a month, and they look at 2 pages on average. I'm going to guess this is 1-2 GB of outbound transfer. How many GET requests? No idea. Maybe 20 per user--pulling that out of the air--so 4,000. So let's grossly overestimate 100,000 GET requests.
I store less than 1 GB of data. I probably don't even transfer 1 GB per month. For Calculation, I'll say 2 GB of outbound transfer.
This plan comes out to $.65 per month (note: turn off the Free features to get this price), which is less than the $4 per month I am paying now.
I would imagine that Amazon S3 should be faster than my large hosting service.
Right you are, I meant hypocrisy.
I assume you are referring to this which actually cites charges as low as 8 cents per GB. Are you really trying to equate Bell to AWS? Bell is trying to charge $2/GB remember? And there is further no charge between Amazon EC2 and Amazon S3 within the same Region or for data transferred between the Amazon EC2 Northern Virginia Region and the Amazon S3 US Standard Region.
Oh, and my favorite part about AWS:
>Pricing
>Pay only for what you use. There is no minimum fee. Estimate your monthly bill using the AWS Simple Monthly Calculator. We charge less where our costs are less, and prices are based on the location of your Amazon S3 bucket.
Now go fellate yourself.
On a Mac, I use the built-in Time Machine backups that automatically backup my entire system. Really easy to restore your entire computer but also retrieve specific files from the past.
I also backup important files to the cloud using an app called Arq: https://www.arqbackup.com
I use Arq to backup to Amazon's S3 storage: http://aws.amazon.com/s3/
Not really hidden, but yes, there are: Glacier Archive and Restore Requests are $0.05 per 1,000 requests, per the S3 pricing page.
Depending on what you're archiving, it might be easier and/or more cost efficient to just keep it in S3. For example if you're archiving tons of small files each month, you're getting charged $0.05 per 1,000. But if you're archiving a handful of large compressed files, then your storage costs will be cheaper.
What do you capture in? What kind of file sizes are you dealing with? What kind of internet do you have? These are all very important questions, because if the data is that vital I would look into off-site ENCRYPTED storage space.
My personal favorite is Amazon S3. I like it because I only get billed for what I use so I am not committing to anything upfront. Plus they give you 5 gigs for free.
Have a look. http://aws.amazon.com/s3/
As for getting the data there, you can do a sync, which will only upload the changed data - doable after the first upload.
Just my 2 cents for ya.
Just fyi I think hosting images for sites on any of those platforms is considered a violation of their terms. So they might close your account/take down the images.
Amazon S3 has a free tier, and even its paid tiers are pretty reasonable. http://aws.amazon.com/s3/pricing/
From the S3 FAQ:
> Q: What data consistency model does Amazon S3 employ? > > Amazon S3 buckets in the US Standard region provide eventual consistency. Amazon S3 buckets in all other regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES.
This means a new file is readable as soon as it's written, but an overwrite only propagates to everybody eventually. So there can be some lag time between writing a new version of a file and everyone seeing it instead of the old one.
Even though it would be slightly more expensive (your S3 costs would be around $15-$30/mo most likely), I think breaking out the XML data structures into a DB is your best bet. You would gain the full benefits of consistency. This, of course, means a lot more labor is involved unfortunately.
You could store the entire XML document in the DB and gain the consistency you need as a quick and dirty solution.
They have a camera that lasts for 12 hours, which is more than enough to capture one whole shift.
Amazon's cheapest S3 storage offering is $.01/GB/month. Let's say a department has 100 cops on duty 24 hours a day, that's 2400 hours of video per day. If each video has a bitrate of 8 Mbps, that's 28800 megabits/hour, or 3600 megabytes/hour, or 3.5 GB/hour. That's 8,437 GB/day. Let's say the police department is expected to hold onto these videos for a year, so 8,437 * 365 = about 3M GB, or about 3 PB. This works out to about $30,000/month when stored on Amazon S3. That's pretty reasonable for a police department with 100 officers on duty all the time, to completely eradicate excessive force. I say it's worth it.
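The arithmetic, spelled out (same assumptions as above: 100 officers around the clock, 8 Mbps video, one year of retention, $0.01/GB/month storage):

```python
# Step-by-step version of the estimate above.
officers = 100
gb_per_hour = 8 * 3600 / 8 / 1024          # 8 Mbps -> ~3.5 GB/hour
gb_per_day = officers * 24 * gb_per_hour   # ~8,437 GB/day
gb_retained = gb_per_day * 365             # ~3M GB, i.e. ~3 PB
print(f"${gb_retained * 0.01:,.0f} per month")  # ~$30,000/month
```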
>There are large technical issues you're glossing over here.
There, not glossed over anymore. Happy?
If you are on AWS then the easiest and safest place to store backups would be on S3. You can simply create a 5GB EBS volume (assuming your data size is 3GB), and snapshot the entire volume to S3 periodically. I would say that S3 is in fact one of the safest (in terms of durability) places to store data on the Internet, offering 99.999999999% durability and 99.99% availability of objects over a given year (source).
You don't want Carbonite. It is not meant for long-term storage and will delete your files after 30 days. You could use Amazon S3 or maybe OneDrive. They are very low cost.
> Designed for 99.999999999% durability and 99.99% availability of objects over a given year.
http://aws.amazon.com/s3/details/#durability
No system is 100% reliable. S3 is probably more reliable than whatever else you're using.
Yes, 5TB per file; over 5GB needs multipart.
http://aws.amazon.com/s3/faqs/
"The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from 1 byte to 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability."
API docs: http://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
It looks like I would use s3put from Python boto; for multipart you will probably have to pip install filechunkio.
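(With the newer boto3 SDK, the multipart plumbing is built in, so filechunkio isn't needed. A sketch, with illustrative thresholds and a placeholder bucket:)

```python
import boto3
from boto3.s3.transfer import TransferConfig

# boto3 splits large uploads into parts automatically.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # go multipart past 100 MB
    multipart_chunksize=100 * 1024 * 1024,  # 100 MB parts
)

s3 = boto3.client("s3")
s3.upload_file("huge-file.bin", "my-bucket", "huge-file.bin", Config=config)
```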
I think this thread (https://news.ycombinator.com/item?id=422225) explains well why I use Amazon S3 as a major option for this service.
BTW, based on the price of Amazon S3 (http://aws.amazon.com/s3/pricing/), there's no harm in giving it a try. :-)
Thanks for suggesting a different option; I'll think about it.
FYI, it is easy to implement a different storage option yourself. If you want, I'd be happy to help you with this.
I use duply doing incremental/full backups to S3, dumping the database before each backup with the pre script, configured by Ansible.
I love it -- super simple to set up, has been extremely reliable, and the s3 storage (for my use cases) is almost trivially cheap. I'm backing up 5 servers for less than $1 a month.
You can host your image files on Amazon S3 (http://aws.amazon.com/s3/); it is more reliable (99.999999999% durability and 99.99% availability), and you have full control over the images.
If you decide to go with Amazon S3, then you can check out this project, https://github.com/images3/images3-play, which is a self-hosted image hosting service on top of Amazon S3.
That's more or less what I was getting at. There are two major technical challenges, one solvable.
The solvable challenge is: People keep coming up with attacks on full-disk encryption based on things like in-place access and updates, and knowing the size of a file can tell you a lot about it. If you have full root access on something that people are storing and frequently updating encrypted data on, you may be able to figure out something about what they're storing.
But you could still use this as the equivalent of tape backup -- instead of storing lots of little files you update all the time, you just store one gigantic file, and delete it when done.
In other words, this would work a lot better for a service like Amazon Glacier than a service like Amazon S3, and it'd work better for that than a service like Amazon EBS.
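A minimal sketch of that tape-style pattern (the names are made up; the encryption step is assumed to happen before upload, e.g. with gpg, and is omitted here):

```python
import tarfile
import boto3

# Pack everything into one opaque blob so the provider sees neither
# individual file sizes nor in-place updates.
ARCHIVE = "backup-2014-01.tar.gz"

with tarfile.open(ARCHIVE, "w:gz") as tar:
    tar.add("/home/me/photos")  # whatever you're backing up

# Encrypt ARCHIVE here before uploading (omitted), then ship the blob.
boto3.client("s3").upload_file(ARCHIVE, "my-cold-storage", ARCHIVE)
```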
The other challenge, though, is: What do you use the CPU for? Or, if the CPU is mostly idle, how do you generate enough heat with basically just disks to heat a house very much? I'm not sure how to solve that one. I guess you could just run something like Folding@Home on it, but since you're paying for the extra power that takes, how do you justify that cost -- who's paying you to do that? Or you could generate litecoins of some sort, but are those ever actually profitable when you factor in the electric cost?
Hey man, fellow Canadian/podcaster/musician here (I know them Canadian winters and projects are a necessity :).
I don't personally use Squarespace for my show, but I know a few other guys here do. Regardless, it generally isn't the best idea to use your web hosting as a place to serve your podcast files though (it uses a lot of bandwidth and can put a strain on your webhost if you gain popularity). I use Amazon S3 to host my MP3s but I hear a lot of folks here talking about Libsyn as well. Podbean is another popular option. S3 is crazy cheap unless you are serving LOTS of files. The general idea is that you have a separate space that serves out your large MP3s and link to those files from your website.
Most of the services you mentioned (iTunes, Stitcher, etc.) run off of your RSS feed. That is usually generated from your website (E.G. I use Wordpress, post a new episode by linking to the file on my Amazon S3 account, Wordpress automatically updates my RSS feed, which then automatically updates iTunes, Stitcher, etc. etc.). It seems confusing at first but just Google around and it'll start to make sense.
There are a bunch of Skype recorders available depending on your OS. I use Audio Hijack Pro.
Hope that helps a bit bud. Good luck to ya!
CDN is for reused static components, not video files that are watched one time. You'll be paying for something that has no benefit to you.
Instead I'd recommend Amazon S3: http://aws.amazon.com/s3/pricing/
You'll probably find those prices within your budget.
For domain registration, I just like Name.com since they support a very wide range of TLDs and are very fairly priced. Others I like are DNSimple and 1&1.
As far as webhosts go, it all depends on how your site works. If it is static HTML then I suggest Amazon S3, but if you're using something like Drupal or Joomla then I recommend [1&1](http://www.1and1.com/) or BlueHost.
Amazon S3/CloudFront and SoundCloud are both great options, it just depends on what you want users to do. If you want them to be able to download then S3 is the way to go, but if you want them to listen to your music online then SoundCloud is the way to go. Of course there is no reason that you can't do both, embed a player from SoundCloud and add a download link that points to S3.
Note: I don't use SoundCloud so there could be an option for allowing people to download the music file right from SoundCloud. I would suggest looking into that since SoundCloud is free.
You can use DO for your php, but I would use S3 for storage, perhaps query string auth:
http://aws.amazon.com/s3/faqs/#How_can_I_control_access_to_my_data_stored_on_Amazon_S3
Let your application and storage scale independently. Do you really want to upgrade your DO droplet simply because you need more storage?
Amazon S3 is pretty nice. Not as user-friendly, but much more flexible, and there are tons of open source clients to choose from. There is no data limit, and you only pay for the amount of storage you use. The rate is between $0.01 and $0.03 per GB per month, and the first 5GB is free. Info here: http://aws.amazon.com/s3/pricing/
Did they mention S3 price drops at all? Thanks!
EDIT: http://aws.amazon.com/s3/pricing/effective-april-2014/
For the first TB, now down to $0.03/GB for standard storage, $0.024/GB for reduced redundancy.
http://aws.amazon.com/s3/pricing/
$0.12/GB/year for Glacier, and that comes with most of the infrastructure you would need. That should set a reasonable ceiling around 1/4 of what you quoted, even at ridiculously high resolutions.
Maybe your company isn't the best provider for this.
Your price per month = $0.085 / GB * [The # of GB of storage you're backing up]
from http://aws.amazon.com/s3/pricing/
for 1000 GB, that's $85/mo or $1020.00 / year
> It black boxes a piece of the puzzle in such a way that you no longer need to worry about
As long as that black box doesn't open you up to security vulnerabilities or downtime. But hey, you can get some free credits to keep using their service if that happens; so, we're good, right?^1
Here is their pricing table. http://aws.amazon.com/s3/#pricing
I have a couple of 10 minute videos that get like 100-1000 plays a month and my bills are < $1.
Amazon is pretty much the cheapest video hosting solution. A lot of the other players are just reselling Amazon S3 anyway.
If you don't need the additional reporting features of other video hosting providers, just go with S3.
Here's what Amazon charges. Note that at the cheapest, if you're doing 5,000 terabytes a month, it's still $.055 per GB. TF2 is over 10GB. So, if Steam is running at 5,000TB/month (which they very well could be) and using Amazon (which they very well could be), it costs them $.55 to give you that game. Not a lot for one user, but 55 times higher than $.01 (and also the best case scenario for the US alone).
To be fair to S3, the comparison is between apples and rotten oranges. A PUT in S3 does a round trip to the server and offers you some stated guarantees about what others see. A PUT in MongoDB is guaranteed to ensure your data makes it into your local socket buffers. Further, the GridFS API doesn't even use getLastError to ensure your file is inserted, or even that the data will appear before the metadata (although this last point is not necessary if client ops are guaranteed to happen in order).
S3 Source: S3 FAQ
MongoDB Sources: MongoDB Docs and client/dbclient.cpp
and client/gridfs.cpp
in the MongoDB 2.0.5 source distribution.
Edit: Digging in the source a little more, MongoDB doesn't even guarantee that the PUT will make it to the kernel: it may be buffered in userspace first.
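For reference, here's how you'd opt into stronger guarantees with a modern driver - this is the current pymongo write-concern API, not the 2.0.5-era getLastError call discussed above:

```python
from pymongo import MongoClient
import gridfs

# w=1 + journal=True makes each write wait for a journaled server ack,
# instead of the fire-and-forget behavior described above.
client = MongoClient("mongodb://localhost:27017", w=1, journal=True)
fs = gridfs.GridFS(client.mydb)

with open("photo.jpg", "rb") as f:
    file_id = fs.put(f, filename="photo.jpg")  # returns after the server acks
print(file_id)
```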
Correct.
All our systems are Linux or BSD. It was very easy for me to set up because I was already using `duplicity`, and that has support for using S3 as a backend. I keep a minimum of three months worth of backups; everything is automatically backed up nightly from cron. If you'd like me to share the script I use for running the backups I'd be happy to, but it's probably only useful in UNIX land.
You get a certain amount of storage (a few GB at least) for free – if you have an amazon account you can just log in at http://aws.amazon.com/s3/ and start playing about with it.
Put it in Amazon's Cloud. If you only leave your data there for the amount of time it takes it set up your RAID it'll be dirt cheap.
And then stick a line on your resume explaining how you leveraged the cloud. Leveraging clouds is a key web 2.0 core competency. </snark>
Seconded on S3, it's cheap, reliable and not too hard to use. You can use this page (scroll down to Data Transfer pricing) to figure out what you might pay. Depending on how big your game is, you're probably talking about paying $100 for every million non-cached game launches.
Generally if you have that many people playing it, then you can find a way to make a profit; either by charging for something or selling the whole thing to another company.
I don't think you'll be able to get much free hosting for pictures of 1GB each. That amount of storage costs a lot. You could look into creating a storage bucket on Amazon S3 and hosting your files there. You only pay for what you use, too, so you won't be getting ridiculous bills every month.
Edit: Amazon also has a free tier; check the link above. 5GB storage and 15GB transfer free per month. Beyond that you just pay for what you've used.
> We're probably using an honest 3.5t right now including everything. So the 776.7mB/s is an extreme exaggeration even in worst case scenario. We'd send daily incrementals and weekly / bi-weekly fulls to local storage, then shoot them off to the cloud as needed.
Let's say you just shot your weekly fulls into the cloud. For every 1 TB, you will be sending 8,000,000 megabits / (86,400 seconds per day * 7 days per week) ≈ 13.2 Mb/s continuously into the cloud. At that rate, 1 TB of data makes it to the cloud in one week; adjust based on how much you will be backing up. And based on S3 pricing it will cost you $120 per TB to pull it back down. You have to hit 10 TB before the rates start to drop. Does this really sound like a good idea?
Even if you end up building some form of fast local storage, buy two tape drives (you don't need a library) and put those fuckers somewhere safe.
> The past few companies I've worked for have abandoned tape because it was too much of a pain. I feel the same way.
How do you plan on scaling your solution? As your data requirements grow over the next three to five years how much are you going to spend on hard drives in your sled for off-site storage? Do you have budget for drive failure?