from the ToS:
>57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.
Stuff like Elastic Load Balancing is definitely a thing though. You don't have to buy a fuck ton of servers to support load spikes any more.
Like you said though, nothing is ever simple in software engineering. If they weren't already using something like AWS, it's not the easiest to move.
From the page I linked:
>Elastic Load Balancing automatically scales its request handling capacity to meet the demands of application traffic. Additionally, Elastic Load Balancing offers integration with Auto Scaling to ensure that you have back-end capacity to meet varying levels of traffic levels without requiring manual intervention.
This is actually just reddit being mismanaged. They use AWS for hosting; it should automatically scale the number of servers and balance the load on its own, depending on the traffic pressure.
In the price tier that Reddit is in (aka the major tech website price tier) Amazon even provides a dedicated team of specialists to keep the site up. The only plausible explanation is Reddit is managed completely incompetently and/or the software is written poorly.
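For anyone unfamiliar with what auto scaling actually does, the core idea is just threshold-based capacity adjustment. A minimal sketch (the thresholds, bounds, and one-instance-per-step behavior here are made-up illustrative values, not AWS defaults):

```python
def desired_capacity(current_instances, avg_cpu_percent,
                     scale_up_at=70, scale_down_at=30,
                     min_instances=2, max_instances=20):
    """Toy version of an Auto Scaling policy: add a server when load
    is high, remove one when load is low, within fixed bounds."""
    if avg_cpu_percent > scale_up_at:
        return min(current_instances + 1, max_instances)
    if avg_cpu_percent < scale_down_at:
        return max(current_instances - 1, min_instances)
    return current_instances

# Traffic spike: fleet grows one instance per evaluation period.
print(desired_capacity(4, 85))  # 5
# Quiet period: fleet shrinks, but never below the minimum.
print(desired_capacity(2, 10))  # 2
```

The real service layers cooldown timers, health checks, and multi-metric alarms on top of this, but the decision loop is that simple.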
I mean, Facebook and Twitter use cloud hosting and have way more traffic but don't get annihilated like Reddit does. There's literally no social media website out there that crashes and burns like this, aside from Reddit.
source: working on my own social media thing in the cloud, develop software for a living too
edit: just my opinion - it's not just unacceptable, it's flatly ridiculous that a user needs to refresh 10-12 times to see any content.
Actual text from terms of service of AWS/Lumberyard: (emphasis mine)
57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.
AWS rents out GPU based instances:
p2.16xlarge -- 16 GPUs in one instance. A SHA-1 computation farm is within anyone's reach, you don't have to be a government or even a large corporation.
AWS baby, that's the magic of it. If I remember correctly Netflix is the same way, as it's running purely in AWS last I heard.
Between Route 53, ELB, auto scaling, and health checks there is no real need for network gear in this environment. AWS pretty much manages all of the connectivity between the services within its regions. However this really isn't a surprise, as it's just a public website being hosted somewhere else.
For those who aren't aware, there are Cisco virtual routers you can run if you have the need for it, so don't be too disheartened,
and there is some network knowledge you need to have when working with VPN connections and direct connect
Holy crap, I was trying to figure out how to report it to Amazon...
Edit: Still an issue, but only for Postgresql
Edit2: I tweeted @awscloud letting them know.
> Prohibited activities or content include:
> Offensive Content. Content that is defamatory, obscene, abusive, invasive of privacy, or otherwise objectionable, including content that constitutes child pornography, relates to bestiality, or depicts non-consensual sex acts.
> If you become aware of any violation of this Policy, you will immediately notify us and provide us with assistance, as requested, to stop or remedy the violation. To report any violation of this Policy, please follow our abuse reporting process.
Derek, I'm gonna let you in on a secret. Every major publisher launches multiplayer games on AWS. EVERY single one. If it wasn't saving them money, they wouldn't do it! I can't mention exact ones because I legit AM under NDA and happen to like my job, but I oversee large system launches as part of my job. Did you notice how many sites went down with the S3 outage? More runs on AWS than half the AWS employees even know. Again, you are an idiot and don't know what you're talking about. But please show us how colocating servers that run idle for years since your "games" don't even make it to the bargain bin is more cost effective.
Edit: also wanted to point out that lots of indie small games have launched on AWS, along with a huge number of mobile games and game for all platforms. Do they all spend big like the big publishers? Of course not. They don't need to. But you better let them know how much cheaper colocating some "Dell Xeon" servers are! Save us from ourselves again Derek!
Edit 2: Derek! Look at all these industry idiots using AWS for games! This was years ago admittedly, but maybe one of us shitizens will let you borrow our time machine so you can go warn them.
Yes, I have. A few tips:
Netflix runs its servers on Amazon's services anyway. [link]
EDIT: I know everybody uses AWS. Just pointing it out for people who don't know.
But I'll warn you: Amazon is already using the term "Glacier" for "cold storage" of data: [link]
If I were you, I'd strongly consider changing the name to avoid any trademark issues; legally you may very well be OK, but lawsuits are expensive even if you're in the right.
You do actually use Amazon products, you just don't know. [link]
King County (their website/services)
You might not use their retail services, but that's not even their largest money maker any more.
Pay per usage is fine if the cost is reasonable. Consider what Amazon charges for I/O: at most $0.09 per GB for data transferred out to the internet. I am perfectly willing to pay that or more, plus a flat fee for infrastructure maintenance, for my home internet connection.
Just don't lock me to 300 GB, or something.
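The back-of-the-envelope math for a metered connection at that rate is straightforward (the flat infrastructure fee here is a made-up placeholder, not anyone's real price):

```python
def monthly_bill(gb_transferred, per_gb=0.09, flat_infrastructure_fee=20.0):
    """Pay-per-usage pricing: a flat maintenance fee plus $0.09/GB,
    the top AWS egress rate mentioned above."""
    return flat_infrastructure_fee + gb_transferred * per_gb

# A 300 GB month, the cap figure mentioned above:
print(f"${monthly_bill(300):.2f}")  # $47.00
```

Even at AWS's retail egress rate, a typical household's usage lands well under what many flat-rate plans charge.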
I wouldn't say they are trying to be a tech company. Amazon is by far the biggest player in cloud hosting and the fact that Re:Invent sold out so early compared to last year kinda proves how fast AWS is still growing. Netflix, League of Legends, Adobe, the MLB, and a bunch of other companies all use AWS in some capacity[Source].
I would be very comfortable in saying that Amazon is one of the biggest players in technology
Dropbox has been in a weird position for a long time. They are essentially entirely dependent on Amazon S3 as their storage backend, which means their storage costs are always going to be higher than those of competitors like Google or Amazon, who don't have to pay a premium for storage.
Dropbox has managed to at least partially get off of Amazon for bandwidth by getting an Amazon Direct Connect connection and buying (some of their) bandwidth wholesale. And if they want to, they could colocate servers in a datacenter near their Direct Connect connection and do all the server-side hashing work on their own systems. But for storage, which is probably their largest expense, they're kinda stuck.
But at the end of the day, they're not going to be able to compete with Amazon and Google on storage allowances without significantly restructuring their infrastructure at a large expense.
> Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.
You can even ship a drive to AWS. Often the fastest way to get a few terabytes of data into/out of S3
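The station-wagon point is easy to quantify. A rough comparison of pushing data over a typical uplink versus shipping a drive (the 100 Mbps line rate and ~7-day shipping turnaround are assumptions for illustration):

```python
def network_days(terabytes, mbps):
    """Days to upload a dataset at a sustained line rate."""
    megabits = terabytes * 8 * 1_000_000  # 1 TB = 8,000,000 Mb (decimal)
    return megabits / mbps / 86_400       # seconds per day

# 50 TB over a 100 Mbps uplink vs. roughly a week for a shipped drive:
print(f"network: {network_days(50, 100):.0f} days, shipping: ~7 days")
```

At a few tens of terabytes, the shipped drive wins by a wide margin, which is exactly why AWS offers the import/export option.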
The Java example is the very first Java example provided though. The problem is that there is no simple Java example that doesn't use the Flow Framework. So your very first Java example linked from the SWF tutorial listing is this ridiculously complex sample that unfortunately isn't great for learning SWF.
>The praising of the C# Tutorial is a non-sequitur as well, as there is no such thing, it's a blog post that's not linked to from the SWF documentation at all.
It's a blog post but it's linked right below the above discussed Java example in the listing of tutorials - "See the AWS SDK Team's blog on getting started with a sample Amazon SWF application using the AWS .NET SDK."
Their SLA policy is here:
TL;DR: you're entitled to a 10% S3 service credit for this billing cycle. To claim it, you need to submit a ticket to support with logs showing that you were impacted by the outage.
> Case in point, I am considering using the service for remote backups, but would want to retrieve the majority at once in case of need... Now I need to redo my sums ;)
You should consider the Infrequent Access Storage Option on S3. It's somewhere between S3 and Glacier:
The main advantage of Infrequent Access storage is that it's not as complicated as Glacier, and the pricing is much easier to calculate.
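To make the trade-off concrete, here's a sketch with per-GB-month rates roughly matching the price sheet of that era (treat the exact figures as assumptions; check the current S3 pricing page):

```python
def monthly_storage_cost(gb, tier):
    """Approximate per-GB-month rates for the three S3 storage tiers."""
    rates = {"standard": 0.030, "infrequent_access": 0.0125, "glacier": 0.007}
    return gb * rates[tier]

def ia_restore_cost(gb, retrieval_per_gb=0.01):
    """Infrequent Access charges a simple flat retrieval fee on top
    of storage, unlike Glacier's peak-rate formula."""
    return gb * retrieval_per_gb

# 1 TB of backups, monthly storage cost per tier:
for tier in ("standard", "infrequent_access", "glacier"):
    print(tier, round(monthly_storage_cost(1000, tier), 2))
```

The point of IA is that a full restore is just gigabytes times a flat rate, so "redoing your sums" is a one-liner instead of a formula hunt.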
It'll take some time for them to write up what happened, but if they follow their usual procedures, I'd expect them to share an account of the outage.
These are the types of post-mortems they generally issue:
Here's the real reason. There's no server hosts in SEA that Valve works with yet.
Explanation: I worked at a large company that used a lot of computing. It was actually cheaper to use Amazon's cloud hosting services, since they do a great job at it; we didn't need to keep physical servers and could scale up or down with ease. Places that have Amazon datacenters usually get Dota servers at their location, since it's pretty easy to set up.
The Philippines does not have an AWS (Amazon Web Services) location. Japan and Australia do. Valve needs to start hunting for server hosts in the Philippines.
AWS Import/Export Snowball
>Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud.
FWIW, they are likely specifically referring to BigQuery. It's an append-only datastore that can really crunch data heavily.
Loading data into it is also fast because of the insane network speed from the compute instances.
AWS has Redshift, which is also stonking fast. They also have some really cool stuff like Data Pipeline which can do scheduled ETL (using your jobs) from any data service to any data service (e.g.: Hadoop to Elasticsearch, MySQL to Redshift, Oracle RDS to PostgreSQL RDS...)
All the cloud offerings are pretty cool, and taking a few weeks to really learn their capabilities is worthwhile.
AWS has recently been taking the 'throw everything at the wall' approach by offering seemingly every service possible; for me, at my usage level, this is perfect.
GCE takes the 'our offerings are flawless' approach. They don't offer as much (but are expanding), and their stuff is locked down tight. Also, if you need a fast network (1 Gbps+), GCE cannot be beat in this aspect.
If you're doing this professionally and want to save tons of time, consider renting a server. Even AWS, as expensive as it is, costs pennies compared to your hourly rate. For instance, you can rent a c4.8xlarge, which runs on 18 physical cores (36 vCPUs) with 60 GB RAM, for only $0.4131 per hour (spot pricing).
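At that spot price, the cost of even a long number-crunching session is trivial next to a professional's billing rate:

```python
SPOT_PRICE = 0.4131  # c4.8xlarge spot price quoted above, $/hour

def job_cost(hours, price=SPOT_PRICE):
    """Total rental cost for a compute job at spot pricing."""
    return hours * price

# A full 8-hour working day on 36 vCPUs:
print(f"${job_cost(8):.2f}")  # $3.30
```

Three and a bit dollars for a day of 36-vCPU compute: even at a modest hourly rate, the time saved pays for the instance many times over.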
From the FAQ:
Q. Is it okay for me to use my own servers?
Yes. You can use hardware you own and operate for your game.
It's referring mostly to services similar to AWS.
Incidentally, I see Polygon has already managed to put the most negative spin on this development they could possibly manage.
This office will primarily be for the Amazon Web Services (AWS) sales and marketing team. AWS is one of their fastest growing segments, and they need a local presence for client meetings/presentations and a place for the sales team to hang out.
I've had various conversations with AWS employees and they were barred from even having meetings in Illinois until this past January (or February?) since Amazon wasn't collecting sales tax. Basically Amazon was telling Illinois that they weren't conducting any business in the state, and this meant no face-to-face meetings. Now that Amazon collects sales tax in Illinois, they can have employees and conduct business meetings here.
I also expect an AWS Loft space like they have in San Francisco, New York, London, and Berlin to open up in the next 12-18 months.
PSA: This game engine limits you to either your own servers or Amazon's Web Services for multiplayer in your games.
And also, the terms of service of AWS/Lumberyard: (emphasis mine) are pretty funny:
> 57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.
Amazon's DNS service makes life nice: all your DNS uses anycast from the nearest Amazon datacenter. We literally waste more time/money reviewing and forwarding the bills to AP than we spend on the DNS service.
I don't pay by the GB but I do have various options for how many bits per second bandwidth I want to buy. They make it a flat rate for a given speed because customers like it that way, but they set the rate high enough so it's profitable. In fact, Time Warner's broadband profit margin is 97%.
Edit: But businesses serving content often pay by the GB. Netflix for example is hosted on Amazon's cloud, which charges about a nickel per GB for data transfer out. An hour of high-def is about 1.7 GB so every time you watch a one-hour episode of something it costs them about 8 cents.
Yes they do. However, it's applied on a per-billing cycle basis (i.e. monthly).
This outage would fall into the category of ">=99.0% but less than 99.9%" uptime for the cycle, which results in a 10% service credit on your S3 spend during that month.
Also, in order to receive the credit, you have to provide logs to their support team to prove you were affected!
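The credit schedule described above can be written out directly (the tiers below reflect the published S3 SLA at the time, with the 25% tier coming from that SLA rather than this thread):

```python
def s3_service_credit_percent(monthly_uptime_percent):
    """Map measured monthly uptime to an S3 SLA service credit."""
    if monthly_uptime_percent >= 99.9:
        return 0    # SLA met, no credit
    if monthly_uptime_percent >= 99.0:
        return 10   # the tier this outage fell into
    return 25       # catastrophic month

print(s3_service_credit_percent(99.5))  # 10
```

Note the credit is a percentage of that month's S3 spend only, not of your whole AWS bill, and it isn't applied automatically: you still have to file the ticket with logs.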
Amazon Web Services offers many public datasets, and you can spawn an instance with a dataset as a mounted volume. You'll still need to figure out how to work with it, but it's quite a decent selection to mess with.
So you need to pay for storage space in S3, then the additional cost of data transfer.
250GB... US$6/mo. [link]
Data transfer... 1GB/day or 30GB/mo.
S3 on its own, in Europe: 30 × $0.09 = US$2.70.
Putting CloudFront in front of S3, counterintuitively, actually looks to be cheaper: 30 × $0.085 = US$2.55.
So your ~US$130/mo for your current hosting is... not great.
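Spelling out that comparison (rates as quoted above: $0.09/GB out of S3 in Europe versus $0.085/GB out of CloudFront; request fees and the S3-to-CloudFront hop are ignored for simplicity):

```python
GB_PER_MONTH = 30  # the ~1 GB/day transfer figure from above

# Serving straight from S3 vs. fronting it with CloudFront:
s3_direct  = GB_PER_MONTH * 0.090
cloudfront = GB_PER_MONTH * 0.085

print(f"S3 direct: ${s3_direct:.2f}, via CloudFront: ${cloudfront:.2f}")
```

The counterintuitive part is that the CDN's per-GB egress rate is slightly lower than S3's own, so adding a caching layer can reduce the bandwidth bill as well as latency.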
EDIT: I misread. Corrected numbers.
There are whole services based around this idea of archival storage. For very long term storage it often surprises people to learn that tape is still the most cost-effective solution. (Albeit with very long delays which is fine for applications where you run a query and then expect the result in a few hours).
Since Amazon Lumberyard is based on CryEngine, is completely free, and includes source plus additional features, they had to come up with something good in return (or it wouldn't make sense for Crytek to allow Amazon to modify CryEngine).
I wonder (and hope) if it has something to do with their VR First program for universities... maybe a fancy VR editor? Design & code in VR would be very nice!
Not if you pay for an Amazon Snowball. Those babies can do like a petabyte per week.
That is, assuming he's running his OWN PMS in AWS.
This may be way off base, depending on your needs, but have you considered a cloud computing platform, like Amazon EC2? They now offer GPU focused instances: [link]
This would save you a lot of capital expense, and depending on how much compute time you need, might end up being much cheaper in the long run.
Set up a cross-account access role rather than using the root account credentials. The top-right corner of your console will indicate the name of the account (you choose an arbitrary name + color coding).
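For reference, a cross-account role boils down to a trust policy on the target account that names the account allowed to assume it; everything else is normal permissions attached to the role. A minimal trust policy sketch (the account ID is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Once the role exists, you switch into it from the account menu in the console, which is where the name and color coding mentioned above show up.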
It would have taken a few seconds of simple googling, and if you had been following industry news, it was a big story at the time. Someone not handing you a ton of links to prove it doesn't mean it isn't true, and it's also apparent you just wanted to be a naysayer without doing any research yourself.
You can be skeptical or curious, but if you're relying on others to do a simple Google search, try to avoid declaring things incorrect when you don't have any pre-existing knowledge.
Yelp, Netflix, NASA, CIA, FBI, FDA, FINRA, Healthcare.gov, Nokia, Comcast, Conde Nast (and reddit), Intuit, etc.
They're not all listed, some would prefer to fly under the radar (like the FBI/CIA and other DoJ agencies) but a significant number use Amazon Web Services.
Which is, incidentally, what Amazon is likely doing about this. From their postmortem:
> While removal of capacity is a key operational practice, in this instance, the tool used allowed too much capacity to be removed too quickly. We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level.
Sounds like they're adding a --I-understand-that-this-will-destroy-a-whole-zone flag.
AWS has a quick-start for VPC architecture that you can look at to see how they create a full stack with subnets, route tables, etc. ([link]). You might also look at the other quick-start examples they have available. All of the quick starts have sample templates you can look at to see how they define the details and properties for each resource.
Terraform is (I think) one of the best alternatives to CloudFormation ([link]). You might look at that and see if it does what you need, however, that will then introduce something else to learn.
Hopefully this helps!
What a world when we can use a server (probably) in California to communicate with people all over the Valley to discuss a power outage we're currently experiencing.
I guess since phone lines are separate from power, this sort of thing has been theoretically possible as long as I've been alive, but it sure feels different.
It's CryEngine, but the main reason is to sell cloud computing power on Amazon's network. If the next big game needs AWS to run the servers, Amazon will be coining it in.
Sign up for a free EC2 instance.
When you're on it, type:
sudo apt install unzip
wget https://downloads.rclone.org/rclone-v1.30-linux-amd64.zip (check rclone.org/downloads for the current link)
unzip rclone-v1.30-linux-amd64.zip
cd rclone-v1.30-linux-amd64
sudo cp rclone /usr/sbin/
sudo chown root:root /usr/sbin/rclone
sudo chmod 755 /usr/sbin/rclone
Then configure it:
rclone config
Follow the on-screen instructions to set it up for Google Drive, then use it with:
rclone copy $REMOTENAME:$REMOTEPATH ./
rm -rf $FILE rclone-v1.30-linux-amd64.zip
rclone copy ./* $REMOTENAME:/Extract/
I think it comes down to motivation, time and ownership.
Games have quite a few things against them:
They can be pretty complex, which makes them costly in terms of time and effort.
Games do not solve problems the way a tool you use frequently does.
The people who enjoy making games are not necessarily the people who enjoy playing them. For instance, I enjoyed working on my game engine a lot but didn't care much about a story; I was more attracted by the technical challenges :)
Games relying on stories have a very limited lifetime from a player's perspective (compared to my text editor or email setup, which I have been using for decades). So what happens after the game is finished? Why would people come back to it?
A tool is a tool. No one cares about feelings as long as it fulfills a need. But games also touch on the arts, and people start to think about "their" story, "their" artwork. And why would they want to implement your vision when they can implement theirs?
All of this makes it difficult to end up with something polished and nice.
That said, there have always been quite a few tools and frameworks for open source games, such as SDL, PyGame, Crystal Space, Ogre3D, jMonkeyEngine, and more recently [link]
a "VPS" is a virtual private server, it just means a you're using a virtualized and sandboxed section of a larger server instead of having your own full server hardware, if you're not big enough to need it. AWS offers VPS'
Edit: More info, AWS link
Yep. And what Derek fails to point out is that this move could help CIG and backers in a number of ways:
I'm told that Lumberyard has FULL VR support already built in. [link] IIRC the CryEngine version that CIG was using didn't fully support it, and thus would have required a lot of work by CIG.
As "GameLift" and "GridMate" (Amazon's instancing / session engines) allow for on-demand instancing, that could possibly help CIG to implement "private servers", which was one of the original pledge goals.
As the engine is provided free, that helps people to create mods. Which is another original pledge goal.
>I am still going to write a blog. Considering my knowledge of LumberYard, wait.
There seems to be something missing after "wait"
"wait until I can quickly read some 'What is Lumberyard' starter guide so I can pretend that I actually have a clue." perhaps?
Here Derek, I'll help you get started with your 'knowledge':
I don't think it'll be an issue if the private server is running on your own PC.
>Q. Do I have to run my game on AWS?
No. If you own and operate your own private servers, you do not need to use AWS.
One of my biggest worries with any solution in this space is where to store the data and how to pay for the bandwidth consumed. Quick back-of-the-napkin math indicates that the 40,000 downloads of SF2.5 amount to roughly ~12.6 terabytes of total bandwidth, or just over ~$1,100 from Amazon S3.
That's before you consider wanting to give money back to mod and modpack developers for their efforts, with each download costing ~2.75 cents. Someone with more experience with AdFly might be able to comment on whether this math would work out favorably or not.
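Writing that napkin math out (the average pack size is inferred from the quoted totals, and S3 egress is taken at the ~$0.09/GB rate; the result lands in the same ballpark as the figures above):

```python
downloads = 40_000
gb_each = 0.315          # ≈ 315 MB per download, inferred from 12.6 TB total
egress_per_gb = 0.09     # S3 data-transfer-out rate, assumed

total_gb = downloads * gb_each            # 12,600 GB ≈ 12.6 TB
total_cost = total_gb * egress_per_gb     # ≈ $1,134
per_download_cents = total_cost / downloads * 100

print(round(total_cost), round(per_download_cents, 2))
```

The per-download cost is what any revenue-sharing scheme would have to clear before a single cent reaches a mod author.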
Source on the planned Linux support.
> Q. What device platforms does Lumberyard support?
Lumberyard currently supports PC, Xbox One, and PlayStation 4. Mobile support for iOS and Android devices is coming soon, along with additional support for Mac and Linux. Note that Sony and Microsoft only permit developers who have passed their screening process to develop games for their platforms.
For S3 storage in AWS, you need to add the cost of storage to the cost of transferring 2,000 GB (1 GB × 2,000 people).
Looking at their pricing, you would have:
Or a total of ~$180.
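The transfer side of that estimate, written out (storage excluded; egress assumed at the $0.09/GB rate):

```python
people = 2000
gb_each = 1
egress_per_gb = 0.09  # assumed S3 data-transfer-out rate

transfer_cost = people * gb_each * egress_per_gb
print(f"${transfer_cost:.0f}")  # $180
```

So the quoted ~$180 total is essentially all bandwidth; the storage component for a single 1 GB object is pennies by comparison.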
You can find a calculator here (click the S3 tab on the left to get the S3 calculator).
Nvidia's chips are being used in a variety of systems.
Notably, Nintendo's Switch is powered by a Tegra SOC from Nvidia. They could get a nice boost if the Switch takes off.
Nvidia is also a player in the autonomous vehicle market which is a market primed to explode in the next 2-5 years.
GPUs are also increasingly being incorporated into super computers and powering machine learning/AI services. For example, Nvidia chips are powering AWS P2 GPU compute instance types.
I think Nvidia will be fine.
Could they be talking about this?
If they had to build a super-secret cloud service for the CIA, then it makes sense that they would take what they learned and build a quasi-public cloud for less sensitive (but still US-only) information....
Why not sign up for a free 1-year AWS Free Tier account? Spin up your own Linux-flavored instance from the comfort of your home, then SSH into the instance and run your code. Here's the tutorial on how to spin up a Linux instance.
If there's anything else you think we can do better to be more proactive with our security measures, please hit us up at [email protected](PGP key).
You could use an EC2 instance from AWS (or Azure or GCP, depending on what floats your boat). Turn it on when you need it and shut it off when you don't. I use this for gaming these days: I have a top-of-the-line Azure GPU instance I remote into that has my games and shit (via Steam). I only game a few hours a week, so it's not very expensive.
Don't really know specifically what kind of processing power you need, but a 4 vCPU / 16 GB memory EC2 instance with Windows is ~$90 a month at 40 hours a week. If you only need it intermittently you can save a lot with a spot instance. GCP also has preemptible instances and the like, so they may be cheaper than AWS too.
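A quick way to sanity-check that ~$90 figure and see what always-on would cost instead (the hourly rate below is an assumption backed out of the numbers above, not a published price):

```python
HOURLY_RATE = 0.52  # ≈ $/hr for a 4 vCPU / 16 GB Windows instance (assumed)

def monthly_cost(hours_per_week, hourly=HOURLY_RATE):
    """Average monthly cost: 52 weeks spread over 12 months."""
    return hours_per_week * 52 / 12 * hourly

print(round(monthly_cost(40)))      # part-time, 40 h/week: ≈ $90
print(round(monthly_cost(24 * 7)))  # always-on: ≈ $379
```

The gap between part-time and 24/7 is the whole argument for turning the instance off when you're not using it.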
You might also checkout [link] - I've never used it though so i can't comment on the performance.
Pros: Persistent machine you don't have to manage that will always be available assuming you have internet.
Cons: No internet or poor internet is very obvious. Can make working a pain.
Anyway hope that helps.
No, it is totally fine to calculate it that way, because you can now order spot instances from AWS that won't be terminated for up to 6 hours. See [link]
HIPAA, unlike PCI-DSS, is entirely focused on the software side of things; so the onus is on the software developer to implement the necessary encryption, access controls, and access reporting that is required to meet the HIPAA standards.
Even Amazon's page about HIPAA basically says "Uhm... yeah... we aren't HIPAA compliant because we have no requirements... but if you want to say we are... cool. Just sign this paper and Bob's your uncle."
1) API Gateway + Lambda + DynamoDb.
2) [link] + [link] + [link].
3) Cost: API Gateway is per invocation + bandwidth. Lambda is per invocation + the duration your logic runs. DynamoDB is per hour, based on how much read/write request capacity you need.
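A rough cost model for that stack (the unit prices are assumptions based roughly on published rates of the time, and DynamoDB's provisioned-capacity charge is folded in as a flat monthly figure for simplicity):

```python
def serverless_monthly_cost(requests,
                            avg_lambda_ms=100,
                            lambda_gb=0.128,
                            apigw_per_million=3.50,
                            lambda_per_million=0.20,
                            lambda_gb_second=0.0000166667,
                            dynamodb_flat=5.0):
    """Pay-per-invocation math for API Gateway + Lambda + DynamoDB."""
    apigw = requests / 1e6 * apigw_per_million
    invocations = requests / 1e6 * lambda_per_million
    compute = requests * (avg_lambda_ms / 1000) * lambda_gb * lambda_gb_second
    return apigw + invocations + compute + dynamodb_flat

# One million requests a month through the whole stack:
print(f"${serverless_monthly_cost(1_000_000):.2f}")
```

Under these assumptions a million requests costs single-digit dollars, which is the appeal of the per-invocation model for low-to-medium traffic APIs.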
4) Secure from man in the middle, yes.
It's worth noting that the source is provided, but it's not open source:
> Q. Is Lumberyard “open source”?
> No. We make the source code available to enable you to fully customize your game, but your rights are limited by the Lumberyard Service Terms. For example, you may not publicly release the Lumberyard engine source code, or use it to release your own game engine.
Just pointing this out since it appears many others were confused by the description of Amazon Lumberyard as "free, including full source code". What Lumberyard provides is not dissimilar from what other game engines also offer.
AWS is Amazon Web Services, a massive suite of cloud computing services. basically when you send an iMessage it doesn't just zip directly from your device to the recipient's device. it gets sent to a server somewhere convenient (as others have stated, sometimes it's an AWS server, sometimes it's a Microsoft Azure server, sometimes it's an Apple owned server), its destination is processed, and then it gets sent out to the recipient. sometimes if the recipient isn't immediately available, the message/image will sit there for a bit. apple deploys encryption techniques on the sender and recipient sides, so their server never really knows what it's receiving/sending, just that a certain set of bits is going from one place to another and sometimes gets stored for a little longer.
Anticipatory of a future where virtualized computing is the norm because high-speed internet is dirt cheap and commoditized.
Edit: I'd imagine it's something like [link] on steroids. Probably also offers some advantage to the underlying logistics involved in virtual colocation.
Monthly storage costs aside, as ctolsen said, Amazon recently started offering AWS Snowball to solve the upload bandwidth issue (at ~$200/job). Seems appropriate for your use case of a multi-terabyte import.
As for storage cost with Glacier, 16TB seems to be around $2000/year. When you consider an on-premise solution, make sure you're taking daily/weekly maintenance time and physical disk/tape storage costs into account. Your concern in this thread seems mainly targeted at long-term data durability. Perhaps it's worth the slightly higher OpEx with a managed solution?
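That ~$2,000/year figure checks out at roughly a penny per GB-month, which is the rate implied by the numbers above (Glacier's published rate was lower in some regions, so treat this as an upper-bound sketch):

```python
TB = 16
gb = TB * 1024               # binary terabytes
rate_per_gb_month = 0.01     # implied by the ~$2,000/yr figure above

annual = gb * rate_per_gb_month * 12
print(f"${annual:,.0f} per year")  # ≈ $1,966
```

Against that, an on-premise build has to amortize drives, a chassis, power, and the weekly maintenance time the comment mentions, which is why the managed OpEx can come out ahead.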
Read and fill out the form linked at [link]
Setting up reverse DNS also unblocks SMTP, as they add your IP to spam whitelists too. I did this recently; they are pretty quick with it, but the whitelists can take a few days to be updated.
I call bullshit.
> You can retrieve up to 5% of your average monthly storage (pro-rated daily) for free each month. If you choose to retrieve more than this amount of data in a month, you are charged a retrieval fee starting at $0.01 per gigabyte. Learn more.
Extremely deceptive. They state "starting at $0.01", and that's bloody cheap. Then they throw in this innocuous-looking "learn more" link leading to formulas so convoluted and incomprehensible that nobody can figure out how much it will actually cost.
The problem is that all of AWS's services are fairly easy to estimate costs for... Glacier is anything but.
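To illustrate how convoluted it was: the fee wasn't per GB retrieved, but keyed to your *peak hourly* retrieval rate, billed as if you sustained that rate all month. A simplified sketch of the widely-circulated interpretation of the old formula (simplified; the real thing had further prorating wrinkles):

```python
def glacier_retrieval_fee(storage_gb, peak_gb_per_hour,
                          rate=0.01, hours_in_month=720):
    """Old Glacier pricing: the free allowance is 5% of stored data per
    month, prorated to an hourly rate; retrieval above that peak is
    billed as though sustained for the entire month."""
    free_gb_per_hour = storage_gb * 0.05 / hours_in_month
    excess = max(0.0, peak_gb_per_hour - free_gb_per_hour)
    return excess * rate * hours_in_month

# Store 1 TB, then restore 100 GB in a single hour:
print(round(glacier_retrieval_fee(1000, 100), 2))  # ≈ 719.5 (!)
```

That's the gotcha: a 100 GB restore done in one hour could cost hundreds of dollars, while the same restore spread over days cost almost nothing, and nothing on the pricing page made that obvious.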
AWS Organizations is due "soon", which might help/change things. Check the FAQ for details (as much as there are right now).
If you have enough data, it is cost effective to let them handle the shipping & transfer, also.
Not quite as cost effective as tape (at high enough volume to offset cost of the tape drives), and would be expensive to restore all at once, but a heck of a load off from a management perspective. I hope never to see a tape again.
This is from the Lumberyard FAQs:
> Q. Can my game use an alternate web service instead of AWS?
No. If your game servers use a non-AWS alternate web service, we obviously don’t make any money, and it’s more difficult for us to support future development of Lumberyard. By “alternate web service” we mean any non-AWS web service that is similar to or can act as a replacement for Amazon EC2, Amazon Lambda, Amazon DynamoDB, Amazon RDS, Amazon S3, Amazon EBS, Amazon EC2 Container Service, or Amazon GameLift. You can use hardware you own and operate for your game servers.
> Q. Is it okay for me to use my own servers?
Yes. You can use hardware you own and operate for your game.
It doesn't prevent CIG from making the excuse you state, but they can't claim it's a contractual restriction, at least.
AWS is also releasing a FPGA developer preview today: [link]
(I know it's probably not gonna be competitive with GPUs for deep learning, but it might be useful for preprocessing / weak classifiers on large data sets, low latency, or just interesting for the learning experience)
Apparently not, but it is in the works.
> We are launching AWS Import/Export Snowball with import functionality so that you can move data to the cloud. We are also aware of many interesting use cases that involve moving data the other way, including large-scale data distribution, and plan to address them in the future.
Full source is included, you're just not allowed to redistribute it.
> See section 57.2 b iii
You can make changes, and compile your own binaries based on those changes.
> See section 57.2 a
You can distribute those binaries after submitting to Amazon and getting written permission.
> See section 57.6
Here's the post you're waiting on: [link]
Ahhhhh this will improve my life. How soon till we have CloudFormation support?
That's not how hosting works. Sure, home users can get 'unlimited bandwidth' etc., but businesses hosting files at scale (terabytes) typically pay for the bandwidth.
This behavior shouldn't come as a surprise. The EC2 SLA doesn't cover individual instances, or even availability zones. The SLA is only for regions, and downtime is defined as two or more availability zones unavailable in a given region. If your application can't at least be deployed across two availability zones and survive downtime in at least one of those availability zones at any given time, it shouldn't be in EC2.
More generally, it's probably a good idea to make sure you understand the SLA and what recourse you have on failure for any important service you use, especially a hosting provider.
Mate, get your ISP to set up a direct connect to AWS.
No VPN's, just a private dedicated link with SLA's they have to abide by contractually. If you have a presence in a commercial DC, check out if AWS Direct Connect is available there and then it is a simple cross-connect into AWS. Either way check it out here:
I have seen decent takeup of AWS Direct Connects and Azure ExpressRoutes with my customers. I work in Australia which has one of highest cloud adoption rates so your mileage may vary. As a bonus, you get discounted rates for data over Direct Connect compared to using the internet.
I totally agree with you. Perhaps instead of monetary donations we should work on ways for the community to share the workload in running the game. One idea would be to set up a dedicated machine to which different moderators can RDP. If managing the hardware is too hard for one person, there's always AWS, which might be more amenable to a communal solution and a great use of the proposed donations: [link]
You could use something like Amazon's Glacier storage, which ~~is tape-based and~~ costs 1 cent per GB per month. At that price, storing a terabyte of data will cost you US$10/month.
The service guarantees the durability of your data and claims to be designed for an average annual durability of 99.999999999%. There's a summary here of how that's achieved.
However, a service like this can't guarantee that it won't be discontinued sometime in the next 20 years. It would still be your responsibility to move your data to an alternative location if Glacier were discontinued. If you were really concerned, you could always store your data in two such services.
Glacier is terrible for long term storage if you are even close to 1TB, let alone multiple TBs (for consumers that is, business is different).
Just storing is 1c/GB/month [link]
So even a single 6TB external drive with 5TB of data on it would cost $51.20 per month to store on Glacier.
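A quick sanity check of that math. Note this is a sketch covering storage charges only; retrieval and request fees are separate and are what make big restores expensive:

```python
# Glacier storage-only cost at the quoted rate of 1 cent per GB per month.
PRICE_PER_GB_MONTH = 0.01

def monthly_storage_cost(gigabytes: float) -> float:
    """Monthly Glacier storage charge for a given amount of data."""
    return gigabytes * PRICE_PER_GB_MONTH

print(monthly_storage_cost(1000))      # 1 TB (decimal) -> 10.0 dollars/month
print(monthly_storage_cost(5 * 1024))  # 5 TiB -> 51.2 dollars/month
```

Both figures quoted in the comments above fall out of the same rate; the difference is just decimal vs. binary terabytes.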
I haven't used it personally, but it looks like this project has code for automating the creation of AWS accounts and linking them for consolidated billing.
The way you'd (probably) want to do this is have a "master" IAM user in your main account, and then create IAM roles in each of the child accounts that allow switching to them from the "master" IAM user. Then to do things in those child accounts, you'd switch to the child account's role and do whatever you need to do.
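A rough sketch of that role-switching flow with STS AssumeRole. The account ID, role name, and session name below are made up; the `__main__` part assumes boto3 is installed and the main account's credentials are configured:

```python
def child_role_arn(account_id: str, role_name: str) -> str:
    """ARN of the role to assume in a linked child account."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"

def assume_child_role(sts, account_id: str, role_name: str) -> dict:
    """Call STS AssumeRole and return temporary credentials
    scoped to the child account."""
    resp = sts.assume_role(
        RoleArn=child_role_arn(account_id, role_name),
        RoleSessionName="master-user-session",
    )
    return resp["Credentials"]

if __name__ == "__main__":
    import boto3
    creds = assume_child_role(boto3.client("sts"), "111122223333", "ChildAdmin")
    # Use the temporary credentials to act inside the child account:
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```

The child role's trust policy has to allow the master account (or the specific "master" IAM user) as a principal, otherwise the AssumeRole call is denied.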
Depending on the timing, you may be able to use AWS Organizations, which is currently in preview, but was made specifically to do this kind of thing.
Well it appears it says in the GameLift FAQ that it shouldn't be used for MMOs. Which is obvious because it's designed for session based games. So obviously CIG will use that in conjunction with the other stack. The rest of the services are perfectly fine for MMOs. So he's just picking and choosing.
Here's where it says it (just search "MMO")
> Q. After my data has been imported to AWS, what happens to the copy on Snowmobile?
> When the data import has been processed and verified, AWS performs a software erasure of the Snowmobile that follows the National Institute of Standards and Technology (NIST) guidelines for media sanitization (NIST 800-88).
> Q. How is Snowmobile designed to keep data secure digitally?
> Your data is encrypted with keys you provided before it is written to the Snowmobile. All data is encrypted with 256-bit encryption. You can manage your encryption keys with the AWS Key Management Service (AWS KMS). Your keys are never permanently stored on the Snowmobile, and are erased as soon as power is removed from the Snowmobile.
So pretty much the same as any other service they provide.
Amazon offer "snowball". It's a hardened, encrypted, tamper-resistant, eighty-terabyte hard drive with an e-ink display to show a shipping address. Bandwidth via two-day UPS.
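Back-of-the-envelope on what "bandwidth via two-day UPS" actually works out to, using the 80 TB capacity mentioned above:

```python
# Effective transfer rate of an 80 TB Snowball spending two days in transit.
CAPACITY_BYTES = 80e12            # 80 TB (decimal)
TRANSIT_SECONDS = 2 * 24 * 3600   # two-day shipping window

gbit_per_s = CAPACITY_BYTES * 8 / TRANSIT_SECONDS / 1e9
print(f"{gbit_per_s:.1f} Gbit/s")  # ~3.7 Gbit/s sustained
```

Never underestimate the bandwidth of a truck full of hard drives; matching that over the wire means saturating multiple 1 Gbit links for the full two days.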
The other impractical bit of it is here:
>The researchers were also able to place atomic markers at the upper left corner of each grid, which reduced the amount of time necessary to read the information encoded into each arrangement. The device reading the grids can simply read the marker that indicates the end of a line of code, for instance, rather than slog through the entire pattern bit by bit. The automated process only takes a few hours to read or write, whereas earlier ones would take days.
Maybe this is the literal implementation of Amazon Glacier?
Lots of storage and it takes hours to get any data!
So many people suggesting virtualization focused backup solutions when you've pointed out that this is a physical system.
25 TB is a significant amount of data. If they're not giving you a budget to back it up, I would make sure you get their refusals in writing. A quality backup system for that much data could easily run $100k.
If they really aren't going to give you a budget, you could look at something like Amazon Storage Gateway to push the data into S3: [link]
Check out the video on that page called "Using AWS to create a low cost, secure backup environment for your on-premises data."
This is one of the more ridiculous things I've read in a while.
>the cost of lawyers + keeping in good legal standing
I'm guessing that cost is $0
>the opportunity cost of not pursuing high-paying jobs in finance or elsewhere, etc
...the fuck? Wait, can I sue my boss for lost wages because I'm not pursuing my dream of being an astronaut? Seriously, someone contact me about this. I was supposed to be on Mars by now but I'm flipping burgers, that's got to be good for at least $100k.
I don't even know why he's talking about trade secrets and patents here, none of that applies. This is copyright land. And while he's sort of right that there's a difference between inherent "upon creation" copyright, and officially registered copyright, neither one keeps a lawyer from going to town on his butthole. Inherent copyright only entitles you to actual damages, so the $20 he made from sales, but if you're willing to pay the lawyer out of pocket, they'll do it. However, if any of the thousands of designs he used were registered (it costs $55 and you can register everything created in a calendar year) then statutory damages start at $150,000 and go from there.
I'm guessing this guy is about 18 years old, read a Wikipedia article on Creative Commons, figured he found a loophole, and is running under his own definition of IP law. Someone's gonna educate him shortly.
Edit: By the way, their web host is Amazon. A DMCA to amazon from anyone with stolen material on their site should take care of that, if you're so inclined.
Find me a comparable server that is cheaper on Amazon? A single core + 1GB at amazon is $56/mo + storage vs $10 on Linode (and I get an SSD). Amazon's 8GB server with an SSD is only twice as expensive as Linode, so it's not 7x more expensive, but the SSD speed that Amazon provides is an abysmal 30 IOPS/GB.
Not really, you just fire up a GPU cluster on Amazon when you need it, do your computing, and then shut it down. Cheaper than buying the hardware yourself, and only takes a short time to set it up.
> Amazon Aurora is a MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora provides up to five times better performance than MySQL at a price point one tenth that of a commercial database while delivering similar performance and availability.
To put his comment about the cost of patches into perspective: the new CDN that they use is Amazon Cloudfront. Pricing for it is fairly cheap as CDNs go...but still, multiple petabytes out per patch is a fuck-ton of data.
Cloudfront pricing is freely available from AWS: [link]
Based on this and Chris's "multiple petabytes per patch" comment, they are probably dropping about $100k per patch JUST TO GET THE DATA OUT TO US. No seriously...$20k to $25k per petabyte.
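A rough sketch of that estimate. The blended per-GB rate here is an assumption consistent with the $20k-$25k/PB figure above; check the CloudFront pricing page for real, region-by-region tiers:

```python
# Per-patch CDN egress cost at an assumed blended rate.
BLENDED_RATE_PER_GB = 0.02   # assumed $/GB at multi-petabyte volume
GB_PER_PB = 1_000_000        # decimal petabytes

def patch_egress_cost(petabytes: float) -> float:
    """Estimated CloudFront egress cost for one patch, in dollars."""
    return petabytes * GB_PER_PB * BLENDED_RATE_PER_GB

print(patch_egress_cost(5))  # 5 PB out -> 100000.0
```

At "multiple petabytes per patch," even a cent or two per GB either way moves the bill by tens of thousands of dollars.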
The process is still not as graceful as it should be. AWS is improving ECS, but autoscaling currently will kill an instance with tasks still running on it.
However, AWS just announced a new way to mark an instance as draining, which removes a container instance from a cluster without impacting the tasks running in it. One day this will likely be a part of autoscaling, but until then you'll need to create a custom process, triggered by autoscaling lifecycle hooks, that marks the instances as draining. This tells ECS to move all the tasks off the container instance and prevents autoscaling from terminating the instances until they are moved.
Here's a quick tutorial on how to do this: [link]
Hope to have some time soon to dive into this myself since this is a huge pain point for us at the moment too.
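A minimal sketch of the draining step itself, meant to run inside a handler triggered by the lifecycle hook. The cluster name is hypothetical, and the surrounding plumbing (parsing the hook notification, completing the lifecycle action once tasks have moved) is left out:

```python
def drain_instance(ecs, cluster, ec2_instance_id):
    """Find the ECS container instance backing this EC2 instance
    and set it to DRAINING so ECS reschedules its tasks elsewhere."""
    arns = ecs.list_container_instances(
        cluster=cluster,
        filter=f"ec2InstanceId == '{ec2_instance_id}'",
    )["containerInstanceArns"]
    if arns:
        ecs.update_container_instances_state(
            cluster=cluster, containerInstances=arns, status="DRAINING"
        )
    return arns

if __name__ == "__main__":
    import boto3  # assumes boto3 is installed and credentials are configured
    drain_instance(boto3.client("ecs"), "my-cluster", "i-0123456789abcdef0")
```

The real handler also needs to poll `runningTasksCount` on the drained instance and only then call `complete_lifecycle_action` so autoscaling is allowed to terminate it.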
Working off what /u/download_free_ram said, another option could be to host the server on Amazon Web Services. Their compute platform, EC2, has a free tier which has minimal specs all around but would be perfect for small multiplayer games that aren't super CPU or memory heavy. I'm currently hosting one there myself as I work on it.
It's only technically free for a year, but I doubt your project will take that long. If it does, EC2 is pretty cheap after that.
So this system is almost certainly relying on Amazon's newly announced AWS service, Rekognition. They're probably dancing around the issue because most people would be somewhat apprehensive about an ad that said "We take high resolution photos of you as you walk through the door and use constant video surveillance to monitor everything you do inside. We then keep these photos forever to ensure our algorithm works better next time," which is likely what's going on.
Reddit's source code is available for free:
You can get a year's worth of itty bitty web servers for free:
Don't like reddit? Get the fuck out. Every minute you spend complaining on this site is just more proof that you are a passive little bitch who would rather complain than do anything.
Overwrite PUT is eventually consistent so it's possible the old object would be returned. Shouldn't return an error.
> Q: What data consistency model does Amazon S3 employ?
> Amazon S3 buckets in all Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES.
AWS hasn't given us guidance, AFAIK, on the consistency "settling time" at 99.99+ percentiles, whether milliseconds or possibly much longer.
Only new object PUT gives read after write consistency.
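To illustrate the caveat: a GET right after an overwrite PUT may legitimately return the previous body, and that's within contract, not an error. Bucket and key names here are hypothetical, and running the `__main__` part needs boto3 and credentials:

```python
def is_stale_read(body: bytes, latest: bytes) -> bool:
    """True when a read returned a pre-overwrite value, which S3's
    eventual consistency for overwrite PUTs allows."""
    return body != latest

if __name__ == "__main__":
    import boto3
    s3 = boto3.client("s3")
    s3.put_object(Bucket="my-bucket", Key="k", Body=b"v1")  # new key: read-after-write
    s3.put_object(Bucket="my-bucket", Key="k", Body=b"v2")  # overwrite: eventual
    body = s3.get_object(Bucket="my-bucket", Key="k")["Body"].read()
    print("stale (allowed)" if is_stale_read(body, b"v2") else "latest value")
```

Code that needs to see its own overwrites immediately has to carry a version marker (or use versioned keys) rather than trust the next read.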
Have a look at [link] if you are interested in using AWS. Their courses are very cheap compared to any other site online and are extremely well put together (I feel like I should get a referral from them because I can't say enough good things about them. I DO NOT work for them in any way).
Also I just found this over at /r/sysadmins and it looks very promising - [link]
One final thing: if you really are interested in working in AWS, the best thing to do is practice using it. Sign up for an AWS account and find out what products fall under the free tier (just about everything) and start building stuff. You get roughly 1 year's worth of t1.micro sized instances for free (1 year broken into 1-hour periods).
EDIT: [link] - You get 750 hours per month for the first 12 months free on using t2.micro instances.
No, not everything can fit into RAM of a single computer.
But you can put roughly two terabytes of data into RAM right now and pay $3000 a month for it ([link]). Seems to me if you need it and you make money off it, it's not a bad option.
If that were hosted on amazon's content delivery network, that would cost around $6,720 :) Using this page as a reference: [link] I may have done the math wrong >.>
I was using UrbanAirship. They used to be free for up to 1,000,000 pushes / month, but then they decided that they had to do more than push notifications. They ended up charging an INSANE amount of money for what we needed (I think they should have kept their original offering and added to it instead of removing something that just worked).
As such I'm now using AWS's SNS service ([link]). It's been pretty fantastic so far. Quick to integrate, and best of all adds no dependencies to the apps themselves!
It also gives you 1,000,000 free pushes / month, then $0.50 for each million thereafter.