in no way was this your fault.
Hell, this shit happened at Amazon before:
https://aws.amazon.com/message/680587/
Last I remember, the guy is still there. Very similar situation.
This company didn't back up their databases? They suck at life.
Legal, my ass; they failed to implement any best practices.
from the ToS:
>57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.
Stuff like Elastic Load Balancing is definitely a thing though. You don't have to buy a fuck ton of servers to support load spikes any more.
Like you said though, nothing is ever simple in software engineering. If they weren't already using something like AWS, it's not the easiest to move.
From the page I linked:
>Elastic Load Balancing automatically scales its request handling capacity to meet the demands of application traffic. Additionally, Elastic Load Balancing offers integration with Auto Scaling to ensure that you have back-end capacity to meet varying levels of traffic levels without requiring manual intervention.
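For a sense of what that looks like in practice, here's one way to wire it up with boto3: an Auto Scaling group attached to a load balancer target group, with a target-tracking policy so capacity follows traffic. This is just a sketch; the group name and target group ARN are placeholders, not anything from the linked page.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Attach an existing Auto Scaling group to an ALB target group (placeholder ARN).
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="web-asg",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef"
    ],
)

# Target-tracking policy: keep average CPU around 50%; AWS adds or removes
# instances automatically as traffic rises and falls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```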
Amazon actually has a service called AWS Snowball, which you can use to import massive amounts (up to petabytes) of data into AWS without having to upload it, by shipping it to them physically.
They ship ruggedized 80TB NAS appliances to your location; you plug them into your network over 10Gbps connectivity, copy your data onto them, ship them back to Amazon, and they load your data into your S3 storage.
This is actually just reddit being mismanaged. They use Amazon AWS cloud for hosting; it should automatically be scaling the number of servers and load balancing on its own, depending on the traffic pressure.
In the price tier that Reddit is in (aka the major tech website price tier) Amazon even provides a dedicated team of specialists to keep the site up. The only plausible explanation is Reddit is managed completely incompetently and/or the software is written poorly.
I mean, Facebook and Twitter use cloud hosting and have way more traffic but don't get annihilated like Reddit does. There's literally no social media website out there that crashes and burns like this, aside from Reddit.
source: working on my own social media thing in the cloud, develop software for a living too
edit: just my opinion - it's not merely unacceptable, it's flatly ridiculous that a user needs to refresh 10-12 times to see any content.
Actual text from terms of service of AWS/Lumberyard: (emphasis mine)
>57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.
AWS rents out GPU based instances:
https://aws.amazon.com/ec2/Elastic-GPUs/
p2.16xlarge: 16 GPUs in one instance. A SHA-1 computation farm is within anyone's reach; you don't have to be a government or even a large corporation.
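To put numbers on "within anyone's reach" (the ~$14.40/hr on-demand price for p2.16xlarge is an assumption based on us-east-1 pricing at the time; spot is usually far cheaper):

```python
# Rough cost of renting a 160-GPU farm for a week.
gpus_per_instance = 16
hourly_rate = 14.40      # assumed on-demand $/hr for p2.16xlarge
instances = 10
hours = 24 * 7

total = instances * hourly_rate * hours
print(f"{instances * gpus_per_instance} GPUs for {hours} h ≈ ${total:,.0f}")
# -> 160 GPUs for 168 h ≈ $24,192
```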
I built it with Amazon Connect, a thing that's actually meant for phone-helpdesk-style gadgets ("if you have a question about payments, press 4").
Basically, I just put the Wilhelmus in the spot where the "you've reached the customer service of ..." message normally goes, et voilà!
Honestly, it works great, because it does exactly what you described. Personally, I think the most useful thing would be to consolidate somewhere the items that all users follow, so that the next person who follows a product already tracked by someone else makes one call and immediately receives what the first person has been tracking from the start. I figure the remaining piece is working out where to host that info for free (#idPublicacion #dateTime #precio). I'll look into it. EDIT: The most promising options are Caspio, 000webhost and the inevitable AWS free tier.
My post from yesterday about just such scenarios seems highly relevant right now:
What's interesting is that many of the even worse events seem to boil down to systemic issues that a single employee gets blamed (scapegoated?) for.
For example, data with no backups is an issue that was going to reveal itself sooner or later. It just so happens they get to blame an employee for it rather than something like WannaCry, but the result is ultimately the same: they lacked the systems and policies to correctly protect key information.
Or this:
> an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.
And credit to Amazon: they didn't scapegoat anyone (that we know of). But it just goes to show that a lot of "big deal" problems are systemic in nature, quietly waiting to be brought out in a big way.
This is why I think we could all learn a lot from the NTSB's investigations into aircraft crashes. When they look into these things they aren't looking for an individual or scapegoat, they boil the problem down to how the system put that individual into a position where they could screw up (be it poor procedures, poor training, poor equipment, or a million other issues).
Every time a colleague or subordinate makes a mistake, the first question that should get asked is: How could broader department policy have prevented or mitigated this?
AWS, baby, that's the magic of it. If I remember correctly, Netflix is the same way: it's running purely in AWS, last I heard.
Between Route 53, ELB, auto scaling, and health checks, there is no real need for network gear in this environment. AWS pretty much manages all of the connectivity between the services within its regions. However, this really isn't a surprise, as it's just a public website being hosted somewhere else.
For those who aren't aware, there are Cisco virtual routers you can run if you have the need for them, so don't be too disheartened.
https://aws.amazon.com/marketplace/pp/B00EV8VWWM
And there is still some network knowledge you need when working with VPN connections and Direct Connect.
Holy crap, I was trying to figure out how to report it to Amazon...
Edit: Still an issue, but only for Postgresql
Edit2: I tweeted @awscloud letting them know.
> Q. Can I take Lumberyard and make my own game engine and distribute it?
>No. While you may maintain an internal version of Lumberyard that you have modified, you may not distribute that modified version in source code form, or as a freestanding game engine to third parties. You also may not use Lumberyard to distribute your own game engine, to make improvements to another game engine, or otherwise compete with Lumberyard or Amazon GameLift.
> Prohibited activities or content include:
> [...]
> Offensive Content. Content that is defamatory, obscene, abusive, invasive of privacy, or otherwise objectionable, including content that constitutes child pornography, relates to bestiality, or depicts non-consensual sex acts.
> If you become aware of any violation of this Policy, you will immediately notify us and provide us with assistance, as requested, to stop or remedy the violation. To report any violation of this Policy, please follow our abuse reporting process.
Derek, I'm gonna let you in on a secret. Every major publisher launches multiplayer games on AWS. EVERY single one. If it wasn't saving them money, they wouldn't do it! I can't mention exact ones because I legit AM under NDA and happen to like my job, but I oversee large system launches as part of my job. Did you notice how many sites went down with the S3 outage? More runs on AWS than half the AWS employees even know. Again, you are an idiot and don't know what you're talking about. But please show us how colocating servers that run idle for years since your "games" don't even make it to the bargain bin is more cost effective.
Edit: also wanted to point out that lots of small indie games have launched on AWS, along with a huge number of mobile games and games for all platforms. Do they all spend big like the big publishers? Of course not. They don't need to. But you better let them know how much cheaper colocating some "Dell Xeon" servers is! Save us from ourselves again Derek!
Edit 2: Derek! Look at all these industry idiots using AWS for games! This was years ago admittedly, but maybe one of us shitizens will let you borrow our time machine so you can go warn them. https://aws.amazon.com/gaming/reinvent-2014-slides/
Yes, I have. A few tips:
I'm surprised nobody has mentioned Algo yet. It lets you set up your own VPN gateway with many cloud providers, and Amazon Web Services even offers an EC2 instance free of charge for a year.
Algo's self-description:
"Today we’re introducing Algo, a self-hosted personal VPN server designed for ease of deployment and security. Algo automatically deploys an on-demand VPN service in the cloud that is not shared with other users, relies on only modern protocols and ciphers, and includes only the minimal software you need."
Each Snowmobile includes a network cable connected to a high-speed switch capable of supporting 1 Tb/second of data transfer spread across multiple 40 Gb/second connections. Assuming that your existing network can transfer data at that rate, you can fill a Snowmobile in about 10 days.
https://aws.amazon.com/blogs/aws/aws-snowmobile-move-exabytes-of-data-to-the-cloud-in-weeks/
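The quoted "about 10 days" checks out, assuming you can actually feed the link at full rate:

```python
# 100 PB shipped over a sustained 1 Tb/s connection.
capacity_bytes = 100 * 10**15      # 100 PB (decimal)
link_bits_per_second = 10**12      # 1 Tb/s
seconds = capacity_bytes * 8 / link_bits_per_second
print(seconds / 86_400)            # ≈ 9.3 days
```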
I'll honestly be surprised if Amazon can't. I mean, they have an infrastructure in place for content delivery; I would be really worried if they weren't able to do a 10/10 stream.
Netflix runs on Amazon's services for its servers anyway. https://aws.amazon.com/solutions/case-studies/netflix/
EDIT: I know everybody uses AWS. Just pointing it out for people who don't know.
Sounds interesting!
But I'll warn you: Amazon is already using the term "Glacier" for "cold storage" of data: https://aws.amazon.com/glacier
If I were you, I'd strongly consider changing the name to avoid any trademark issues; legally you may very well be OK, but lawsuits are expensive even if you're in the right.
You do actually use Amazon products, you just don't know. https://aws.amazon.com/solutions/case-studies/all/
Netflix, Workday, Airbnb, Belkin, Citrix, Coursera, Duolingo, FT, IMDb, King County (their website/services), Naughty Dog
You might not use their retail services, but that's not even their largest money maker any more.
Pay per usage is fine if the cost is reasonable. Consider what Amazon charges for IO. The most is $0.09 per GB for data transferred to the internet. I am perfectly willing to pay that or more + a flat fee for infrastructure maintenance for my home internet connection.
Just don't lock me to 300 GB, or something.
I wouldn't say they are trying to be a tech company. Amazon is by far the biggest player in cloud hosting and the fact that Re:Invent sold out so early compared to last year kinda proves how fast AWS is still growing. Netflix, League of Legends, Adobe, the MLB, and a bunch of other companies all use AWS in some capacity[Source].
I would be very comfortable in saying that Amazon is one of the biggest players in technology
Dropbox has been in a weird position for a long time. They are essentially entirely dependent on Amazon S3 as their storage backend, which means their storage costs are always going to be higher than those of a competitor like Google or Amazon, who don't have to pay a premium for storage.
Dropbox has managed to at least partially get off of Amazon for bandwidth by getting an Amazon Direct Connect connection and buying (some of) their bandwidth wholesale. And if they wanted to, they could colocate servers in a datacenter near their Direct Connect connection and do all the server-side hashing work on their own systems. But for storage, which is probably their largest expense, they're kinda stuck.
But at the end of the day, they're not going to be able to compete with Amazon and Google on storage allowances without significantly restructuring their infrastructure at a large expense.
> Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.
You can even ship a drive to AWS; it's often the fastest way to get a few terabytes of data into or out of S3.
The Java example is the very first Java example provided though. The problem is that there is no simple Java example that doesn't use the Flow Framework. So your very first Java example linked from the SWF tutorial listing is this ridiculously complex sample that unfortunately isn't great for learning SWF.
>The praising of the C# Tutorial is a non-sequitur as well, as there is no such thing, it's a blog post that's not linked to from the SWF documentation at all.
It's a blog post but it's linked right below the above discussed Java example in the listing of tutorials - "See the AWS SDK Team's blog on getting started with a sample Amazon SWF application using the AWS .NET SDK."
Their SLA policy is here:
https://aws.amazon.com/s3/sla/
TL;DR: you're entitled to a 10% S3 service credit for this billing cycle. To claim it, you need to submit a ticket to support with logs showing that you were impacted by the outage.
> Case in point, I am considering using the service for remote backups, but would want to retrieve the majority at once in case of need... Now I need to redo my sums ;)
You should consider the Infrequent Access Storage Option on S3. It's somewhere between S3 and Glacier:
The main advantage of Infrequent Access storage is that it's not as complicated as Glacier, and its costs are easier to calculate.
It'll take some time for them to write up what happened, but if they follow their usual procedures, I'd expect them to share an account of the outage.
These are the types of post-mortems they generally issue:
https://aws.amazon.com/message/680342/ https://aws.amazon.com/message/5467D2/ https://aws.amazon.com/message/65648/
Here's the real reason. There's no server hosts in SEA that Valve works with yet.
Explanation: I worked at a large company that used a lot of computing. It was actually cheaper to use cloud hosting services from Amazon, since they do a great job at it. That way we didn't need to keep physical servers and could scale up or down with ease. So places that have Amazon servers usually have Dota servers at their location, since it's pretty easy to set up.
https://aws.amazon.com/about-aws/global-infrastructure/
The Philippines does not have an AWS (Amazon Web Services) location; Japan and Australia do. Valve needs to start hunting for server hosts in the Philippines.
To add on to this--Amazon offers a service known as Snowball, which is essentially a giant hard drive that's rugged enough to be shipped in the mail, used to upload several terabytes of data into the Amazon cloud.
They also offer what's known as a Snowmobile, which is a giant trailer truck with the capacity of 100PB.
AWS Import/Export Snowball:
>Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud.
FWIW, they are likely specifically referring to BigQuery. It's an append-only datastore that can really crunch data heavily.
Loading data into it is also fast because of the insane network speed from the compute instances.
AWS has Redshift, which is also stonking fast. They also have some really cool stuff like Data Pipeline which can do scheduled ETL (using your jobs) from any data service to any data service (e.g.: Hadoop to Elasticsearch, MySQL to Redshift, Oracle RDS to PostgreSQL RDS...)
All the cloud offerings are pretty cool, and taking a few weeks to really learn their capabilities is worthwhile.
AWS has recently been taking the 'throw everything at the wall' approach by offering seemingly every service possible; for me, at my usage level, this is perfect.
GCE takes the 'our offerings are flawless' approach. They don't offer as much (but are expanding), but their stuff is locked down tight. Also, if you need a fast network (1 Gbps+), GCE cannot be beaten in this respect.
Lack of moderation? Not sure what the process is.
Edit: Admins/Devs, if you're reading, you should check out AWS Rekognition, and the newish image moderation part of it, which picked up that this picture had revealing clothing. At the very least it'd let you put things in a manual approval queue.
If you're doing this professionally and want to save tons of time, consider renting a server. Even AWS, as expensive as it is, costs pennies compared to your hourly rate. For instance, you can rent a c4.8xlarge, which gives you 18 cores (36 vCPUs) and 60GB RAM, for only $0.4131 per hour (spot pricing).
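A quick sanity check on the "pennies compared to your hourly rate" point, using the spot price above (the billable rate is just an assumed example):

```python
spot_price = 0.4131   # $/hr for c4.8xlarge spot, as quoted above
my_rate = 100.0       # assumed professional hourly rate
hours_saved = 3       # say the big machine saves you 3 hours of waiting

compute_cost = spot_price * 8                # even a full 8-hour day of compute
print(f"compute ${compute_cost:.2f} vs. time saved ${my_rate * hours_saved:.2f}")
# -> compute $3.30 vs. time saved $300.00
```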
It's a shame that NPO charges money for this, but it's not so much the content they're charging for as the cost of the extra bandwidth.
Quick rundown of the numbers:
Suppose an average (internet) TV viewer spends 25% of their viewing time on NPO channels.
NPO share * days per month * hours per day * GB per hour of full-HD viewing * cost per GB
0.25 * 30 * (183 / 60) * 1.8 * (€0.018) = €0.74 per user per month
Of course, it's an overstatement that the average user would watch NPO over the internet that much, which is why I think a figure of, say, €0.07 per user is much more realistic. But even then, the bandwidth isn't free.
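The same calculation spelled out (all inputs are the assumptions above):

```python
npo_share      = 0.25      # fraction of viewing time on NPO channels
days_per_month = 30
hours_per_day  = 183 / 60  # 183 minutes of viewing per day
gb_per_hour    = 1.8       # full-HD streaming
cost_per_gb    = 0.018     # euro per GB of bandwidth

cost = npo_share * days_per_month * hours_per_day * gb_per_hour * cost_per_gb
print(f"EUR {cost:.2f} per user per month")   # -> EUR 0.74
```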
I did the math.
From the FAQ:
> Q. Is it okay for me to use my own servers?
> Yes. You can use hardware you own and operate for your game.
It's referring mostly to services similar to AWS. Incidentally, I see Polygon has already managed to put the most negative spin on this development they could possibly manage.
This office will primarily be for the Amazon Web Services (AWS) sales and marketing team. AWS is one of their fastest growing segments, and they need a local presence for client meetings/presentations and a place for the sales team to hang out.
I've had various conversations with AWS employees and they were barred from even having meetings in Illinois until this past January (or February?) since Amazon wasn't collecting sales tax. Basically Amazon was telling Illinois that they weren't conducting any business in the state, and this meant no face-to-face meetings. Now that Amazon collects sales tax in Illinois, they can have employees and conduct business meetings here.
I also expect an AWS Loft space like they have in San Francisco, New York, London, and Berlin to open up in the next 12-18 months.
I believe you can use something like this to move it to S3 storage or even glacier.
https://aws.amazon.com/snowball/
http://docs.aws.amazon.com/AWSImportExport/latest/DG/createGlacierimportjobs.html
PSA: This game engine limits you to either your own servers or Amazon's Web Services for multiplayer in your games.
Also, the terms of service of AWS/Lumberyard are pretty funny (emphasis mine):
> 57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.
Amazon's DNS service... It makes life nice that all your DNS is served via anycast from the nearest Amazon datacenter. We literally waste more time/money reviewing and forwarding the bills to AP than we spend on the DNS service.
I don't pay by the GB but I do have various options for how many bits per second bandwidth I want to buy. They make it a flat rate for a given speed because customers like it that way, but they set the rate high enough so it's profitable. In fact, Time Warner's broadband profit margin is 97%.
Edit: But businesses serving content often pay by the GB. Netflix for example is hosted on Amazon's cloud, which charges about a nickel per GB for data transfer out. An hour of high-def is about 1.7 GB so every time you watch a one-hour episode of something it costs them about 8 cents.
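The per-episode arithmetic, using those figures (the ~$0.05/GB transfer price is the estimate above, not a published rate):

```python
gb_per_episode = 1.7
price_per_gb = 0.05
print(f"${gb_per_episode * price_per_gb:.3f} per one-hour episode")  # -> $0.085, about 8 cents
```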
Yes they do. However, it's applied on a per-billing cycle basis (i.e. monthly).
This outage would fall into the category of ">=99.0% but less than 99.9%" uptime for the cycle, which results in a 10% service credit on your S3 spend during that month.
Also, in order to receive the credit, you have to provide logs to their support team to prove you were affected!
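As a sketch, the credit tiers in the S3 SLA as published at the time looked roughly like this (verify against the current SLA before relying on it; the 25% tier comes from the SLA page, not the comment above):

```python
def s3_service_credit(monthly_uptime_pct: float) -> float:
    # Service-credit percentage for a billing cycle, per the old S3 SLA tiers.
    if monthly_uptime_pct >= 99.9:
        return 0.0
    if monthly_uptime_pct >= 99.0:
        return 10.0
    return 25.0

# Roughly 4 hours of downtime in a 30-day month:
uptime = 100 * (1 - 4 / (30 * 24))
print(f"{uptime:.2f}% uptime -> {s3_service_credit(uptime):.0f}% credit")  # 99.44% -> 10%
```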
Amazon Web Services offers many public datasets, and you can spawn an instance with the dataset as a mounted volume. You'll still need to figure out how to work with it, but there's quite a decent selection to mess with.
Amazon still offers a $0.00/hour tier for t2.micro for a year.
There is now a $0.01/hour t2.~~micro~~nano instance for pfSense on AWS. That's $7.20/mo for VPN where you control both ends.
(We'd have done $0.00/hour, but we weren't (then) at the right partner level. We've cleared that hurdle, and paid the fees, but it hasn't wound through the AWS maze as yet.)
So you need to pay for storage space in S3, then the additional cost of data transfer.
250GB... US$6/mo. https://aws.amazon.com/s3/pricing/
Data transfer... 1GB/day or 30GB/mo.
S3 on its own, in Europe: 30 × $0.09 = US$2.70.
Putting CloudFront in front of S3, counterintuitively, actually looks to be cheaper: 30 × $0.085 = US$2.55.
So your ~US$130/mo for your current hosting is... not great.
EDIT: I misread. Corrected numbers.
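The corrected comparison as code (EU rates as quoted: $0.09/GB out of S3, $0.085/GB out of CloudFront, ~$6/mo for 250GB of storage):

```python
storage = 6.00            # S3 storage for 250 GB
transfer_gb = 30          # ~1 GB/day

s3_only = storage + transfer_gb * 0.09
with_cloudfront = storage + transfer_gb * 0.085
print(f"S3 only: ${s3_only:.2f}/mo, S3 + CloudFront: ${with_cloudfront:.2f}/mo")
# -> S3 only: $8.70/mo, S3 + CloudFront: $8.55/mo; either way, nowhere near $130/mo
```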
There are whole services based around this idea of archival storage. For very long term storage it often surprises people to learn that tape is still the most cost-effective solution. (Albeit with very long delays which is fine for applications where you run a query and then expect the result in a few hours).
Since Amazon Lumberyard is based on CryEngine, is completely free, includes source, and comes with additional features, they had to come up with something good (or it wouldn't make sense to allow Amazon to modify their CryEngine).
I wonder (and hope) if it has something to do with their VR First program for universities... maybe a fancy VR editor? Designing & coding in VR would be very nice!
Fun Fact:
Amazon has a version of their cloud specifically for US government agencies that is certified and legal to host sensitive data on.
https://aws.amazon.com/de/govcloud-us/
>AWS GovCloud (US) is an isolated AWS region designed to host sensitive data and regulated workloads in the cloud, helping customers support their U.S. government compliance requirements, including the International Traffic in Arms Regulations (ITAR) and Federal Risk and Authorization Management Program (FedRAMP). AWS GovCloud (US) is operated solely by employees who are vetted U.S. Citizens on U.S. soil, and root account holders of AWS accounts must confirm they are U.S. Persons before being granted access credentials to the region.
>AWS GovCloud (US) is available to U.S. government agencies and organizations in government-regulated industries, that meet GovCloud (US) requirements for access.
Not if you pay for an Amazon Snowball. Those babies can do like a petabyte per week.
That is, assuming he's running his OWN PMS in AWS.
This may be way off base, depending on your needs, but have you considered a cloud computing platform, like Amazon EC2? They now offer GPU focused instances: https://aws.amazon.com/ec2/instance-types/
This would save you a lot of capital expense, and depending on how much compute time you need, might end up being much cheaper in the long run.
Set up a cross-account access role rather than using the root account credentials. The top-right corner of your console will indicate the name of the account (you choose an arbitrary name + color coding).
https://aws.amazon.com/blogs/aws/new-cross-account-access-in-the-aws-management-console/
It would have taken a few seconds of simple googling, and if you had been following industry news, it was a big story at the time. Someone not giving you tons of links to prove it doesn't mean it isn't true, and it's also apparent you just wanted to be a naysayer without doing any research yourself.
You can be skeptical or curious, but if you're relying on others to do a simple Google search for you, try to avoid saying things are incorrect when you don't have any pre-existing knowledge.
https://aws.amazon.com/federal/
Yelp, Netflix, NASA, CIA, FBI, FDA, FINRA, Healthcare.gov, Nokia, Comcast, Conde Nast (and reddit), Intuit, etc.
They're not all listed; some would prefer to fly under the radar (like the FBI/CIA and other DOJ agencies), but a significant number use Amazon Web Services.
While Spark may seem shiny, it's overkill for small-to-medium data science projects. Using it in standalone mode on your local computer to practice thinking in map-reduce isn't a bad idea, but it may be hard to build a compelling project out of it.
Spark really is about large scale data. So I'd use it to explore large datasets on AWS. Insight has a post on how to do this - http://blog.insightdatalabs.com/spark-cluster-step-by-step/ - and I'd check out the AWS large datasets collection too - https://aws.amazon.com/public-datasets/
But if your data is less than 20-30 gigabytes, Spark really is overkill. If anything, figuring out how to write efficient Python (or R, etc.) code to analyze ~20 GB of data will force you to be a better engineer & data scientist (rather than using Spark to easily/quickly process 20 GB of data).
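If you do want to practice the map-reduce style locally, a minimal local-mode session is enough (assumes pyspark is installed; the input file is a placeholder):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("practice").getOrCreate()

# Classic word count, purely to exercise the map/reduce way of thinking.
lines = spark.read.text("sample.txt").rdd.map(lambda row: row[0])
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
print(counts.take(10))
spark.stop()
```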
Indeed.
What's interesting is that many of the even worse events seem to boil down to systemic issues that a single employee gets blamed (scapegoated?) for.
For example, data with no backups is an issue that was going to reveal itself sooner or later. It just so happens they get to blame an employee for it rather than something like WannaCry, but the result is ultimately the same: they lacked the systems and policies to correctly protect key information.
Or this:
> an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.
And credit to Amazon: they didn't scapegoat anyone (that we know of). But it just goes to show that a lot of "big deal" problems are systemic in nature, quietly waiting to be brought out in a big way.
This is why I think we could all learn a lot from the NTSB's investigations into aircraft crashes. When they look into these things they aren't looking for an individual or scapegoat, they boil the problem down to how the system put that individual into a position where they could screw up (be it poor procedures, poor training, poor equipment, or a million other issues).
Every time a colleague or subordinate makes a mistake, the first question that should get asked is: How could broader department policy have prevented or mitigated this?
Which is, incidentally, what Amazon is likely doing about this. From their postmortem:
> While removal of capacity is a key operational practice, in this instance, the tool used allowed too much capacity to be removed too quickly. We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level.
Sounds like they're adding a --I-understand-that-this-will-destroy-a-whole-zone flag.
AWS has a quick-start for VPC architecture that you can look at to see how they create a full stack with subnets, route tables, etc. ( https://aws.amazon.com/quickstart/architecture/vpc/). You might also look at the other quick start examples they have available. All of the quick starts have sample templates you can look at and see how they define the details and properties for each resource.
Terraform is (I think) one of the best alternatives to CloudFormation (https://www.terraform.io). You might look at that and see if it does what you need, however, that will then introduce something else to learn.
Hopefully this helps!
What a world when we can use a server (probably) in California to communicate with people all over the Valley to discuss a power outage we're currently experiencing.
I guess since phone lines are separate from power, this sort of thing has been theoretically possible as long as I've been alive, but it sure feels different.
You can do all of this troubleshooting and mucking around, or you can just look at the free CloudWatch metric: https://aws.amazon.com/blogs/aws/new-burst-balance-metric-for-ec2s-general-purpose-ssd-gp2-volumes/
Also, provisioned IOPS aren't the only solution. Increasing the size of the EBS volume (over 100 GB) increases your performance threshold.
Docker doesn't really come into play here, you would see the same issue without it. Whatever process he's running on that server is exhausting the available storage throughput.
As always, remember the USE method of troubleshooting. Utilization, Saturation, Errors: http://www.brendangregg.com/usemethod.html
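If you'd rather pull that metric from a script than the console, something like this boto3 sketch works (the volume ID is a placeholder):

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="BurstBalance",                 # percent of burst credits remaining
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=6),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}%")   # 0% means credits are exhausted
```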
Amazon has a good article on setting up bastion hosts. You may also want to look into auditd. https://aws.amazon.com/blogs/security/how-to-record-ssh-sessions-established-through-a-bastion-host/
I think you're thinking of Teleport which unfortunately has been neutered of all enterprise features in the OSS edition (LDAP, SSO, etc.).
> [...] In order to meet the HIPAA requirements applicable to our operating model, AWS aligns our HIPAA risk management program with FedRAMP and NIST 800-53, a higher security standard that maps to the HIPAA security rule. NIST supports this alignment and has issued SP 800-66, "An Introductory Resource Guide for Implementing the HIPAA Security Rule," which documents how NIST 800-53 aligns to the HIPAA Security rule.
Per https://aws.amazon.com/compliance/hipaa-compliance/
Hopefully that answers your question? I'd recommend giving them a call to ask about the details that might apply to your specific environment requirements.
Not true
https://aws.amazon.com/lumberyard/faq/
> Q. Do I have to run my game on AWS?
> No. If you own and operate your own private servers, you do not need to use AWS. You also don’t need to use AWS if your game does not use any servers. For example, if you release a free-standing single‐player or local-only multiplayer game, you pay us nothing
https://aws.amazon.com/lumberyard/
It's CryEngine, but the main reason is to sell cloud computing power on Amazon's network. If the next big game needs AWS to run the servers, Amazon will be coining it in.
Sign up for a free-tier EC2 instance.
When you're on it, type:
sudo apt install unzip
wget http://downloads.rclone.org/rclone-v1.30-linux-amd64.zip
unzip rclone-v1.30-linux-amd64.zip
cd rclone-v1.30-linux-amd64
sudo cp rclone /usr/sbin/
sudo chown root:root /usr/sbin/rclone
sudo chmod 755 /usr/sbin/rclone
Then configure it with:
rclone config
Follow the on-screen instructions to set it up for Google Drive.
Then:
rclone copy $REMOTENAME:$REMOTEPATH ./
unzip $FILE
rm -rf $FILE rclone-v1.30-linux-amd64.zip
rclone copy ./ $REMOTENAME:/Extract/
I'm not sure what kind of cameras they use, but just to get a ballpark estimate let's assume it's 720p @ 30fps, which often takes 17Mbps or ~6GB/h. At that rate a 1TB hard drive can store 167 hours of footage. If we assume the cameras are on for 8 hours per day, that means 1TB is capable of storing 20 days' worth of an officer's body cam footage. Now, cameras won't be running for a full 8-hour shift, and you have days off, so we'll call it approximately 1TB per month per officer.
Assuming that they keep the data for a year before erasing it, you'd need to allot 12TB per officer. Storage services like Amazon Glacier cost $0.004/GB per month or $4/TB per month. At 12TB per officer, that brings the total price to $48 per month or about $600 per year per officer.
This doesn't take into account any sort of compression, or the higher rate that sensitive data would probably go for, but should be an okay ballpark estimate. Minneapolis PD has ~800 officers, so that brings the total cost of raw data storage to ~$500,000 per year. Not exactly cheap, but even if I'm off by an order of magnitude, and you add in overhead costs it's still not even close to tens of millions.
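The ballpark above as code, reusing the same assumptions:

```python
gb_per_hour = 6
hours_per_month = 167          # the camera-on hours that fit in ~1 TB
glacier_per_gb_month = 0.004   # $/GB-month
retention_months = 12
officers = 800

gb_stored_per_officer = gb_per_hour * hours_per_month * retention_months  # rolling 12 months
monthly_per_officer = gb_stored_per_officer * glacier_per_gb_month
print(f"${monthly_per_officer:.0f}/officer/month, "
      f"~${monthly_per_officer * 12 * officers:,.0f}/year for the department")
# -> $48/officer/month, roughly half a million dollars a year; nowhere near tens of millions
```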
For those wondering how to protect against this sort of breach, AWS Config Rules can help out tremendously. It allows you to specify rules matching certain conditions (EBS snapshots with the "public" bit set, for instance) and then run an action (alert, mitigation, etc.) in response to events that match the rules.
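This isn't a Config rule itself, just a sketch of the kind of check such a rule would codify: flag any EBS snapshots in the account that have been shared publicly (pagination omitted for brevity).

```python
import boto3

ec2 = boto3.client("ec2")

for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    attr = ec2.describe_snapshot_attribute(
        SnapshotId=snap["SnapshotId"], Attribute="createVolumePermission"
    )
    perms = attr.get("CreateVolumePermissions", [])
    if any(p.get("Group") == "all" for p in perms):
        print("PUBLIC snapshot:", snap["SnapshotId"])   # candidate for alerting/remediation
```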
I have had a 25 year career in tech and now am very fortunate that I can pretty easily find work at that pay level. Having come through 2-3 brutal recessions I do not take it for granted and we save a ton.
The S in STEM is tough. If you want to accelerate into more money quickly I'd look to do a pivot into sales (pharma, biotech, healthcare focused tech, etc.). There are also loads of tech companies that focus on the sciences. I was just looking at one today to potentially partner with http://www.verato.com/. I do not know anything about them yet but they may be a good example of the type of company you could work for in order to accelerate earnings.
All big tech companies have Healthcare and Life Sciences vertical people.
Here are AWS and Microsoft's for example.
https://aws.amazon.com/health/
https://enterprise.microsoft.com/en-us/industries/health/
> Amazon S3 Standard and Standard - IA are designed to provide 99.999999999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects. For example, if you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years. In addition, Amazon S3 is designed to sustain the concurrent loss of data in two facilities.
https://aws.amazon.com/s3/faqs/
Note durability != availability.
I think it comes down to motivation, time and ownership.
Games have quite a few things against them:
They can be pretty complex which makes them costly in terms of time and effort.
Games do not solve a problem the way a tool you use frequently does.
The people who enjoy making games are not necessarily the people who enjoy playing them. For instance, I enjoyed working on my game engine a lot but didn't care much about a story. I was more attracted by the technical challenges :)
Games that rely on stories have a very limited lifetime from a player's perspective (compared to my text editor or email tooling, which I have been using for decades). So what happens after the game is finished? Why would people come back to it?
A tool is a tool. No one cares about feelings as long as it fulfills a need. But games also touch on the arts, and people start to think about "their" story, "their" artwork. And why would they want to implement your vision when they can implement theirs?
All of this makes it difficult to end up with something polished and nice. That said, there have always been quite a few tools and frameworks for open source games, such as SDL, PyGame, Crystal Space, Ogre3d, JavaMonkeyEngine and more recently https://aws.amazon.com/lumberyard/
This is NOT EEA gossip, just an interesting (imho) observation.
There's this Europe Money 2020 conference. On Wednesday there's "The role of open source software in blockchain". Casey Kuhlman will be representing Ethereum. This guy is the owner of Monax. They list companies that are using their (ETH-based) technology: Amazon, Softjourn, Deloitte, PWC, Accenture, SWIFT, R3 members, EY, Microsoft Azure.
Speaking of Amazon, I've never heard of Eris Platform. That's interesting too.
No EEA shilling, no rumors, m'kay? I just find this interesting.
Yep. And what Derek fails to point out is that this move could help CIG and backers in a number of ways:
I'm told that Lumberyard has FULL VR support already built in. https://aws.amazon.com/blogs/gamedev/build-for-any-vr-device-with-lumberyard-beta-1-3/ . IIRC the CryEngine version that CIG was using didn't fully support it, and thus would have required a lot of work by CIG.
As "GameLift" and "GridMate" (Amazon's instancing/session engines) allow for on-demand instancing, that could possibly help CIG implement "private servers", which was one of the original pledge goals.
As the engine is provided free, that helps people to create mods. Which is another original pledge goal.
>I am still going to write a blog. Considering my knowledge of LumberYard, wait.
There seems to be something missing after "wait"
"wait until I can quickly read some 'What is Lumberyard' starter guide so I can pretend that I actually have a clue." perhaps?
Here Derek, I'll help you get started with your 'knowledge': Lumberyard Details
I don't think it'll be an issue if the private server is running on your own PC.
> Q. Do I have to run my game on AWS?
> No. If you own and operate your own private servers, you do not need to use AWS.
One of my biggest worries with any solution in this space is where to store the data and how to pay for the bandwidth consumed. Quick back-of-the-napkin math indicates that the 40,000 downloads of SF2.5 amount to about ~12.6 terabytes of total bandwidth, or just over ~1,100 dollars from Amazon S3.
That's before you consider wanting to give money back to mod and modpack developers for their efforts, with each download costing ~2.75 cents. Someone with more experience with AdFly might be able to comment on whether this math would work out favorably or not.
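The napkin math spelled out (S3 transfer-out assumed at $0.09/GB; the per-download size is implied by the totals above):

```python
downloads = 40_000
total_gb = 12_600        # ~12.6 TB of downloads
price_per_gb = 0.09      # assumed S3 transfer-out rate

total_cost = total_gb * price_per_gb
print(f"${total_cost:,.0f} total, {100 * total_cost / downloads:.2f} cents per download")
# -> $1,134 total, 2.84 cents per download (rounded to ~$1,100 and ~2.75 cents above)
```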
Source on the planned Linux support.
> Q. What device platforms does Lumberyard support?
> Lumberyard currently supports PC, Xbox One, and PlayStation 4. Mobile support for iOS and Android devices is coming soon, along with additional support for Mac and Linux. Note that Sony and Microsoft only permit developers who have passed their screening process to develop games for their platforms.
For S3 storage in AWS, you need to add the cost of storage to the cost of transferring 2,000 GB (1 GB × 2,000 people).
Looking at their pricing, you would have the storage cost plus the data transfer cost, or a total of ~$180.
You can find a calculator here (click the S3 tab on the left to get the S3 calculator).
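Roughly how the ~$180 comes together, assuming US-standard rates of about $0.023/GB-month for storage and $0.09/GB for transfer out (actual prices vary by region):

```python
file_gb = 1
people = 2000

storage_cost = file_gb * 0.023              # storing the 1 GB file for a month
transfer_cost = file_gb * people * 0.09     # 2,000 GB transferred out
print(f"storage ~${storage_cost:.2f}, transfer ~${transfer_cost:.2f}, "
      f"total ~${storage_cost + transfer_cost:.0f}")
# -> storage ~$0.02, transfer ~$180.00, total ~$180
```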
This isn't just Amazon.com; this is AWS (Amazon Web Services), the largest cloud server provider. Thousands of huge corporations are using their servers and network: https://aws.amazon.com/solutions/case-studies/ OP is suggesting they are spying on other AWS customers and their traffic.
I know the government has used some Amazon Web Services servers, so there is likely a data storage policy that Amazon has agreed to, covering e.g. location/backups/security/list of allowed personnel/auditing for the given machines. It would likely be an IT project done under the DOJ or its own separate government contract. https://aws.amazon.com/govcloud-us/ has a good explanation of how it's handled on the Amazon side.
Nvidia's chips are being used in a variety of systems.
Notably, Nintendo's Switch is powered by a Tegra SOC from Nvidia. They could get a nice boost if the Switch takes off.
Nvidia is also a player in the autonomous vehicle market which is a market primed to explode in the next 2-5 years.
GPUs are also increasingly being incorporated into super computers and powering machine learning/AI services. For example, Nvidia chips are powering AWS P2 GPU compute instance types.
I think Nvidia will be fine.
Could they be talking about this?
https://aws.amazon.com/govcloud-us/
If they had to build a super-secret cloud service for the CIA, then it makes sense that they would take what they learned and build a quasi-public cloud for less sensitive (but still US-only) information....
Why not sign up for a free one-year AWS Free Tier account? Spin up your own Linux-flavored instance from the comfort of your home, then SSH into the instance and run your code. Here's the tutorial on how to spin up a Linux instance.
If there's anything else you think we can do better to be more proactive with our security measures, please hit us up at [email protected]
(PGP key).
You could use an EC2 instance from AWS (or Azure or GCP, depending on what floats your boat). Turn it on when you need it and shut it off when you don't. I use this for gaming these days: I have a top-of-the-line Azure GPU instance that I remote into and that has my games and shit (via Steam). I only game a few hours a week, so it's not very expensive.
I don't really know specifically what kind of processing power you need, but a 4 vCPU / 16GB memory EC2 instance with Windows is ~$90 a month at 40 hours a week. If you only need it intermittently you can save a lot with a spot instance. GCP also has preemptible instances and the like, so they may be cheaper than AWS too.
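As a rough check on that ~$90/month figure (the ~$0.50/hr Windows on-demand rate for a 4 vCPU / 16GB instance is an assumption; check current pricing):

```python
hourly_rate = 0.50          # assumed $/hr, 4 vCPU / 16 GB, Windows, on-demand
hours_per_week = 40
weeks_per_month = 52 / 12

print(f"~${hourly_rate * hours_per_week * weeks_per_month:.0f}/month")  # -> ~$87/month
```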
You might also checkout https://aws.amazon.com/workspaces/ - I've never used it though so i can't comment on the performance.
Pros: Persistent machine you don't have to manage that will always be available assuming you have internet. Cons: No internet or poor internet is very obvious. Can make working a pain.
Anyway hope that helps.
No, it is totally fine to calculate that way. Because now you can actually order spot instances that won't terminate for up to 6 hours from AWS. See https://aws.amazon.com/de/blogs/aws/new-ec2-spot-blocks-for-defined-duration-workloads/
HIPAA, unlike PCI-DSS, is entirely focused on the software side of things, so the onus is on the software developer to implement the necessary encryption, access controls, and access reporting required to meet the HIPAA standards.
Even Amazon's page about HIPAA basically says "Uhm... yeah... we aren't HIPAA compliant because we have no requirements... but if you want to say we are... cool. Just sign this paper and Bob's your uncle."
Serverless option:
1) API Gateway + Lambda + DynamoDB.
2) https://aws.amazon.com/api-gateway/pricing/ + https://aws.amazon.com/lambda/pricing/ + https://aws.amazon.com/dynamodb/pricing/
3) Cost: API Gateway is per invocation + bandwidth. Lambda is per invocation + the duration your logic runs. DynamoDB is per hour, based on how much read/write request capacity you need.
4) Secure from man-in-the-middle? Yes.
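For a feel of how small the Lambda piece of that stack is, here's a minimal sketch of a proxy-integration handler writing to DynamoDB (the table name and payload shape are placeholders, not anything from the original question):

```python
import json
import uuid
import boto3

table = boto3.resource("dynamodb").Table("messages")   # placeholder table name

def handler(event, context):
    # API Gateway (proxy integration) -> Lambda -> DynamoDB.
    body = json.loads(event.get("body") or "{}")
    item = {"id": str(uuid.uuid4()), "text": body.get("text", "")}
    table.put_item(Item=item)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item),
    }
```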
It's worth noting that the source is provided, but it's not open source:
https://aws.amazon.com/lumberyard/faq/#Licensing
> Q. Is Lumberyard “open source”?
> No. We make the source code available to enable you to fully customize your game, but your rights are limited by the Lumberyard Service Terms. For example, you may not publicly release the Lumberyard engine source code, or use it to release your own game engine.
Just pointing this out since it appears many others were confused by the description of Amazon Lumberyard as "free, including full source code". What Lumberyard provides is not dissimilar from what other game engines also offer.
AWS is Amazon Web Services, a massive suite of cloud computing services. Basically, when you send an iMessage it doesn't just zip directly from your device to the recipient's device. It gets sent to a server somewhere convenient (as others have stated, sometimes it's an AWS server, sometimes it's a Microsoft Azure server, sometimes it's an Apple-owned server), its destination is processed, and then it gets sent out to the recipient. Sometimes, if the recipient isn't immediately available, the message/image will sit there for a bit. Apple deploys encryption on the sender and recipient sides, so their servers never really know what they're receiving/sending, just that a certain set of bits is going from one place to another and sometimes gets stored for a little longer.
Anticipating a future where virtualized computing is the norm because high-speed internet is dirt cheap and commoditized.
Edit: I'd imagine it's something like https://aws.amazon.com/appstream/ on steroids. Probably also offers some advantage to the underlying logistics involved in virtual colocation.
Monthly storage costs aside, as ctolsen said, Amazon recently started offering AWS Snowball to solve the upload bandwidth issue (at ~$200/job). Seems appropriate for your use case of a multi-terabyte import.
As for storage cost with Glacier, 16TB seems to be around $2000/year. When you consider an on-premise solution, make sure you're taking daily/weekly maintenance time and physical disk/tape storage costs into account. Your concern in this thread seems mainly targeted at long-term data durability. Perhaps it's worth the slightly higher OpEx with a managed solution?
Read and fill out the form linked at https://aws.amazon.com/blogs/aws/reverse-dns-for-ec2s-elastic-ip-addresses/
Setting up reverse DNS also unblocks SMTP, as they add your IP to spam whitelists too. I did this recently; they are pretty quick with it, but the whitelists can take a few days to be updated.
I call bullshit.
> You can retrieve up to 5% of your average monthly storage (pro-rated daily) for free each month. If you choose to retrieve more than this amount of data in a month, you are charged a retrieval fee starting at $0.01 per gigabyte. Learn more.
Extremely deceptive. They state "starting at $0.01". That's bloody cheap. Then they throw in this innocuous-looking "learn more" link that leads to such convoluted and incomprehensible formulas that nobody can figure out how much it will actually cost.
The problem is that all of AWS's services are fairly easy to estimate costs for... Glacier is anything but.
You will need some sort of email connected to the domain regardless. Try using SES inbound on the domain. I set it up recently to deliver mail to an S3 bucket, and it felt very straightforward. https://aws.amazon.com/about-aws/whats-new/2015/09/amazon-ses-now-supports-inbound-email/
Nice work there, probably more accurate than my own calculations. Fair point about the ping time too - I multiplied by two for the round-trip time for a packet of data.
AWS Organizations is due "soon", which might help/change things. Check the FAQ for details (such as they are right now).
If you have enough data, it is cost effective to let them handle the shipping & transfer, also.
Not quite as cost effective as tape (at high enough volume to offset cost of the tape drives), and would be expensive to restore all at once, but a heck of a load off from a management perspective. I hope never to see a tape again.