in no way was this your fault.
Hell, this shit happened at Amazon before:
https://aws.amazon.com/message/680587/
Last I heard, the guy is still there. Very similar situation.
This company didn't back up their databases? They suck at life.
Legal, my ass. They failed to implement any best practices.
from the ToS:
>57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.
Stuff like Elastic Load Balancing is definitely a thing though. You don't have to buy a fuck ton of servers to support load spikes any more.
Like you said though, nothing is ever simple in software engineering. If they weren't already using something like AWS, it's not the easiest to move.
From the page I linked:
>Elastic Load Balancing automatically scales its request handling capacity to meet the demands of application traffic. Additionally, Elastic Load Balancing offers integration with Auto Scaling to ensure that you have back-end capacity to meet varying levels of traffic levels without requiring manual intervention.
Amazon actually has a service called AWS Snowball which you can use to import massive amounts (up to petabytes) of data into AWS without having to upload it, by shipping it to them physically.
They ship 80TB ruggedised, specialized NAS appliances to your location; you plug them into your network with 10Gbps connectivity, load your data, ship them back to Amazon, and they put your data into your AWS S3 storage.
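To see why shipping beats uploading, here's a back-of-the-envelope comparison; the decimal units, sustained line rates, and the 100 Mbps WAN figure are assumptions for illustration:

```python
# Back-of-the-envelope comparison of filling an 80 TB Snowball on-site vs
# uploading the same data over a WAN link. Decimal units and perfectly
# sustained line rates assumed; real-world throughput will be lower.
TB = 10**12  # bytes

def transfer_days(size_bytes, bits_per_second):
    """Days needed to move size_bytes at a sustained link rate."""
    return size_bytes * 8 / bits_per_second / 86400

fill_on_site = transfer_days(80 * TB, 10 * 10**9)  # loading the appliance at 10 Gbps
upload_wan = transfer_days(80 * TB, 100 * 10**6)   # uploading over a 100 Mbps link

print(f"Fill over 10 Gbps LAN: {fill_on_site:.1f} days")  # ~0.7 days
print(f"Upload over 100 Mbps:  {upload_wan:.1f} days")    # ~74 days
```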
This is actually just Reddit being mismanaged. They use AWS for hosting; it should automatically be scaling the number of servers and load balancing on its own, depending on the traffic load.
In the price tier that Reddit is in (aka the major tech website price tier) Amazon even provides a dedicated team of specialists to keep the site up. The only plausible explanation is Reddit is managed completely incompetently and/or the software is written poorly.
I mean, Facebook and Twitter use cloud hosting and have way more traffic but don't get annihilated like Reddit does. There's literally no social media website out there that crashes and burns like this, aside from Reddit.
source: working on my own social media thing in the cloud, develop software for a living too
edit: just my opinion - it's not just unacceptable, it's flatly ridiculous that a user needs to refresh 10-12 times to see any content.
Also, as an AWS developer: if anyone is interested in doing anything similar, it uses services that are open to the public:
For speech recognition it uses transcribe: https://aws.amazon.com/transcribe/
For detecting the emotion of something it uses comprehend: https://aws.amazon.com/comprehend/
Actual text from terms of service of AWS/Lumberyard: (emphasis mine)
57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.
AWS rents out GPU based instances:
https://aws.amazon.com/ec2/Elastic-GPUs/
p2.16xlarge -- 16 GPUs in one instance. A SHA-1 computation farm is within anyone's reach; you don't have to be a government or even a large corporation.
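As a toy illustration of the kind of loop such a farm parallelizes (the "00" target prefix here is purely made up; real attacks need vastly more work per candidate):

```python
import hashlib

# Toy sketch of the brute-force loop a SHA-1 farm parallelizes: hash candidate
# inputs until one's digest matches a target prefix. A GPU card runs the same
# loop billions of times per second; an instance with 16 of them, far faster.
def find_match(prefix, limit=100_000):
    for i in range(limit):
        if hashlib.sha1(str(i).encode()).hexdigest().startswith(prefix):
            return i
    return None

match = find_match("00")  # "00" is an illustrative 8-bit target
print(match, hashlib.sha1(str(match).encode()).hexdigest())
```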
I built it with Amazon Connect, a thing that's actually meant for phone helpdesk-type stuff ('for questions about payment, press 4').
Basically, I just put the Wilhelmus in the spot where the "you've reached the customer service of ..." message normally goes, et voilà!
Honestly, it works great, because it does exactly what you described. Personally, I think the most necessary thing would be to consolidate the items all users are following in one place, so that the next person who follows a product already followed by someone else makes a call and immediately gets what the first person has been tracking from the start. I figure the problem will be working out where to host that info (#idPublicacion #dateTime #precio) for free. I'll look into it. EDIT: The most promising options are Caspio, 000webhost, and the inevitable AWS free tier.
https://aws.amazon.com/about-aws/global-infrastructure/
Blizzard uses Amazon servers, and they simply don't have one in Africa at the moment. It's not up to Blizzard to add some, it's on Amazon. Looks like they're getting a server in Bahrain soon, maybe that will help.
My post from yesterday about just such scenarios seems highly relevant right now:
What's interesting is that many of these even worse events seem to boil down to systemic issues that a single employee gets blamed (scapegoated?) for.
For example, data with no backups: that's an issue that was going to reveal itself sooner or later. It just so happened that the employee got blamed for it rather than WannaCry. But the result is ultimately the same: they lacked the systems and policies to properly protect key information.
Or this:
> an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.
And, credit to Amazon, they didn't scapegoat anyone (that we know of). But it just goes to show that a lot of "big deal" problems are systemic in nature, just quietly waiting to be brought out in a big way.
This is why I think we could all learn a lot from the NTSB's investigations into aircraft crashes. When they look into these things they aren't looking for an individual or scapegoat, they boil the problem down to how the system put that individual into a position where they could screw up (be it poor procedures, poor training, poor equipment, or a million other issues).
Every time a colleague or subordinate makes a mistake, the first question that should get asked is: How could broader department policy have prevented or mitigated this?
Reddit runs on AWS, and the AWS acceptable use policy forbids various types of content including content that may be harmful to Amazon's reputation.
So the real question is: why is Amazon Web Services hosting hate speech, promoting white nationalism, and enabling radical right wing terror and murder?
AWS baby, that's the magic of it. If I remember correctly, Netflix is the same way, as it's running purely on AWS last I heard.
Between Route 53, ELB, auto scaling, and health checks, there is no real need for network gear in this environment. AWS pretty much manages all of the connectivity between the services themselves within a region. However, this really isn't a surprise, as it's just a public website being hosted somewhere else.
For those who aren't aware, there are Cisco virtual routers you can run if you have the need for it, so don't be too disheartened:
https://aws.amazon.com/marketplace/pp/B00EV8VWWM
and there is some network knowledge you need to have when working with VPN connections and direct connect
Google: https://cloud.google.com/bigtable/ Facebook: http://hive.apache.org/ Amazon: https://aws.amazon.com/dynamodb/
The Facebook version has available source. The Google and Amazon versions have a number of whitepapers discussing how it works. These could be used to independently build your own version, but since both offer it as a service, that's generally more convenient.
Holy crap, I was trying to figure out how to report it to Amazon...
Edit: Still an issue, but only for Postgresql
Edit2: I tweeted @awscloud letting them know.
> Q. Can I take Lumberyard and make my own game engine and distribute it?
>No. While you may maintain an internal version of Lumberyard that you have modified, you may not distribute that modified version in source code form, or as a freestanding game engine to third parties. You also may not use Lumberyard to distribute your own game engine, to make improvements to another game engine, or otherwise compete with Lumberyard or Amazon GameLift.
> Prohibited activities or content include:
> [...]
> Offensive Content. Content that is defamatory, obscene, abusive, invasive of privacy, or otherwise objectionable, including content that constitutes child pornography, relates to bestiality, or depicts non-consensual sex acts.
> If you become aware of any violation of this Policy, you will immediately notify us and provide us with assistance, as requested, to stop or remedy the violation. To report any violation of this Policy, please follow our abuse reporting process.
Derek, I'm gonna let you in on a secret. Every major publisher launches multiplayer games on AWS. EVERY single one. If it wasn't saving them money, they wouldn't do it! I can't mention exact ones because I legit AM under NDA and happen to like my job, but I oversee large system launches as part of my job. Did you notice how many sites went down with the S3 outage? More runs on AWS than half the AWS employees even know. Again, you are an idiot and don't know what you're talking about. But please show us how colocating servers that run idle for years since your "games" don't even make it to the bargain bin is more cost effective.
Edit: also wanted to point out that lots of small indie games have launched on AWS, along with a huge number of mobile games and games for all platforms. Do they all spend big like the big publishers? Of course not. They don't need to. But you better let them know how much cheaper colocating some "Dell Xeon" servers is! Save us from ourselves again Derek!
Edit 2: Derek! Look at all these industry idiots using AWS for games! This was years ago admittedly, but maybe one of us shitizens will let you borrow our time machine so you can go warn them. https://aws.amazon.com/gaming/reinvent-2014-slides/
Yes, I have. A few tips:
I'm surprised nobody has mentioned Algo yet. It lets you set up your own VPN gateway with many cloud providers, and Amazon Web Services even offers an EC2 instance free of charge for a year.
Algo's self-description:
"Today we’re introducing Algo, a self-hosted personal VPN server designed for ease of deployment and security. Algo automatically deploys an on-demand VPN service in the cloud that is not shared with other users, relies on only modern protocols and ciphers, and includes only the minimal software you need."
Each Snowmobile includes a network cable connected to a high-speed switch capable of supporting 1 Tb/second of data transfer spread across multiple 40 Gb/second connections. Assuming that your existing network can transfer data at that rate, you can fill a Snowmobile in about 10 days.
https://aws.amazon.com/blogs/aws/aws-snowmobile-move-exabytes-of-data-to-the-cloud-in-weeks/
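A quick sanity check on that "about 10 days" figure, assuming decimal units and a fully saturated link:

```python
# Sanity-checking the Snowmobile fill time: 100 PB at a sustained 1 Tb/s.
PB = 10**15  # bytes (decimal units assumed)

fill_days = 100 * PB * 8 / 10**12 / 86400  # bytes -> bits, over 1 Tb/s, in days
print(f"{fill_days:.1f} days")  # ~9.3 days, i.e. "about 10 days"
```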
I'll honestly be surprised if Amazon can't. I mean, they have infrastructure in place for content delivery. I'd be really worried if they weren't able to do a 10/10 stream.
Netflix runs on Amazon's services to run their servers anyways. https://aws.amazon.com/solutions/case-studies/netflix/
EDIT: I know everybody uses AWS. Just pointing it out for people who don't know.
The first thing you need to do is make sure you've turned off the service completely. AWS won't offer a refund for anything that's still running.
Next, contact support and tell them what happened. They're the only ones who can sort this out.
Check out Macie's pricing here, they have an example of what real world usage would cost: https://aws.amazon.com/macie/pricing/
I think OW uses Amazon Web Services for their servers. There is no infrastructure in Africa.
https://aws.amazon.com/about-aws/global-infrastructure/
It's up to Amazon to develop servers in Africa in order for Blizzard to use them.
Sounds interesting!
But I'll warn you: Amazon is already using the term "Glacier" for "cold storage" of data: https://aws.amazon.com/glacier
If I were you, I'd strongly consider changing the name to avoid any trademark issues; legally you may very well be OK, but lawsuits are expensive even if you're in the right.
You do actually use Amazon products, you just don't know. https://aws.amazon.com/solutions/case-studies/all/
Netflix, Workday, Airbnb, Belkin, Citrix, Coursera, Duolingo, FT, IMDb, King County (their website/services), Naughty Dog.
You might not use their retail services, but that's not even their largest money maker any more.
Pay per usage is fine if the cost is reasonable. Consider what Amazon charges for IO. The most is $0.09 per GB for data transferred to the internet. I am perfectly willing to pay that or more + a flat fee for infrastructure maintenance for my home internet connection.
Just don't lock me to 300 GB, or something.
I wouldn't say they are trying to be a tech company. Amazon is by far the biggest player in cloud hosting and the fact that Re:Invent sold out so early compared to last year kinda proves how fast AWS is still growing. Netflix, League of Legends, Adobe, the MLB, and a bunch of other companies all use AWS in some capacity[Source].
I would be very comfortable in saying that Amazon is one of the biggest players in technology
Dropbox has been in a weird position for a long time. They are essentially entirely dependent on Amazon S3 as their storage backend, which means their storage costs are always going to be more than a competitor like Google or Amazon who don't have to pay a premium for storage.
Dropbox has managed to at least partially get off of Amazon for bandwidth by getting a Amazon DirectConnect connection and buying (some of their) bandwidth wholesale. And if they want to they could colocate servers in a datacenter near their Direct Connect connection and do all the server-side hashing work on their own systems. But for storage, which is probably their largest expense, they're kinda stuck.
But at the end of the day, they're not going to be able to compete with Amazon and Google on storage allowances without significantly restructuring their infrastructure at a large expense.
> Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.
You can even ship a drive to AWS. Often the fastest way to get a few terabytes of data into/out of S3
The Java example is the very first Java example provided, though. The problem is that there is no simple Java example that doesn't use the Flow Framework, so the very first Java example linked from the SWF tutorial listing is this ridiculously complex sample that unfortunately isn't great for learning SWF.
>The praising of the C# Tutorial is a non-sequitur as well, as there is no such thing, it's a blog post that's not linked to from the SWF documentation at all.
It's a blog post but it's linked right below the above discussed Java example in the listing of tutorials - "See the AWS SDK Team's blog on getting started with a sample Amazon SWF application using the AWS .NET SDK."
Their SLA policy is here:
https://aws.amazon.com/s3/sla/
TL;DR: you're entitled to a 10% S3 service credit for this billing cycle. To claim it, you need to submit a ticket to support with logs showing that you were impacted by the outage.
> Case in point, I am considering using the service for remote backups, but would want to retrieve the majority at once in case of need... Now I need to redo my sums ;)
You should consider the Infrequent Access Storage Option on S3. It's somewhere between S3 and Glacier:
The main advantage of Infrequent Access storage is that it's not as complicated as Glacier, and its pricing is easier to calculate.
It'll take some time for them to write up what happened, but if they follow their usual procedures, I'd expect them to share an account of the outage.
These are the types of post-mortems they generally issue:
https://aws.amazon.com/message/680342/ https://aws.amazon.com/message/5467D2/ https://aws.amazon.com/message/65648/
Here's the real reason. There are no server hosts in SEA that Valve works with yet.
Explanation: I worked at a large company that used a lot of compute. It was actually cheaper to use Amazon's cloud hosting, since they do a great job at it. We didn't need to keep physical servers and could scale up or down with ease. So regions that have Amazon datacenters usually have Dota servers, since they're pretty easy to set up.
https://aws.amazon.com/about-aws/global-infrastructure/
The Philippines does not have an AWS (Amazon Web Services) location; Japan and Australia do. Valve needs to start hunting for server hosts in the Philippines.
To add on to this: Amazon offers a service known as Snowball, which is essentially a giant hard drive that's rugged enough to be shipped in the mail, used to upload several terabytes of data into the Amazon cloud.
They also offer what's known as a Snowmobile, which is a giant trailer truck with the capacity of 100PB.
AWS Import/Export Snowball:
> Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud.
FWIW, they are likely specifically referring to BigQuery. It's an append-only datastore that can really crunch data heavily.
Loading data into it is also fast because of the insane network speed from the compute instances.
AWS has Redshift, which is also stonking fast. They also have some really cool stuff like Data Pipeline which can do scheduled ETL (using your jobs) from any data service to any data service (e.g.: Hadoop to Elasticsearch, MySQL to Redshift, Oracle RDS to PostgreSQL RDS...)
All the cloud offerings are pretty cool, and taking a few weeks to really learn their capabilities is worthwhile.
AWS has recently been taking the 'throw everything at the wall' approach by offering seemingly every service possible, for me at my usage level this is perfect.
GCE takes the 'our offerings are flawless' approach. They don't offer as much (but are expanding), but their stuff is locked down tight. Also, if you need fast network (1gbps+) GCE cannot be beat in this aspect.
I recommend using Amazon EC2 instances and the AWS Cloud 9 IDE.
Basically what you can do is teach your students to create AWS EC2 instances, or just create them on your own, and then use those as the computers. The Cloud 9 IDE has a shell session into the instances so they'll not only have a text editor but full bash shell access (for things like compiling C, getting familiar with *NIX commands, etc).
I've set up a number of coding workshops using this stack so please feel free to DM if you want more details or some help with EC2.
The nice thing about using EC2 instances is that you only pay for them when they're running - during class time or while students are practicing/working. All other times they can stop the instance, leaving them and their disks/data around, but not pay for it. Depending on your specific experience and needs you may be able to just use free-tier instances and not pay for anything.
Ninja-edit: keep in mind this requires internet access to use at all, so maybe not perfect if students don't have the ability to access the internet while they're coding. Should be more than fine on library computers with a reasonably modern browser, though. Also, since it'll all be fully browser based library computer IT shouldn't be an issue.
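To make the pay-only-while-running point concrete, here's a rough sketch; the $0.0116/hr rate is an illustrative t2.micro on-demand price and the schedule is made up, so check current pricing:

```python
# Rough cost comparison for stop-when-idle instances. Both the hourly rate
# (illustrative t2.micro on-demand price) and the class schedule are assumptions.
hourly_rate = 0.0116
class_hours = 8 * 4     # ~8 hours/week of class plus practice, over 4 weeks
always_on = 24 * 30     # leaving the instance running all month

print(f"Stopped outside class: ${hourly_rate * class_hours:.2f}/mo")  # ~$0.37
print(f"Left running 24/7:     ${hourly_rate * always_on:.2f}/mo")    # ~$8.35
```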
Lack of moderation? Not sure what the process is.
Edit: Admins/devs, if you're reading: you should check out AWS Rekognition, and the newish Image Moderation part of it, which picked up that this picture had revealing clothing. At the very least it'd let you put things in a manual approval queue.
If you're doing this professionally and want to save tons of time, consider renting a server. Even AWS, as expensive as it is, is pennies compared to your hourly rate. For instance, you can rent a c4.8xlarge, which has 18 cores (36 vCPUs) and 60GB RAM, for only $0.4131 per hour (spot pricing).
> It would be nice if PostgreSQL had a way of auto-tuning these values based on actual measured performance at runtime
Such projects exist.
Well AWS is the largest cloud provider and having hundreds of extra servers available shouldn't be a big deal for them, considering Auto Scaling is one of the core features of any cloud provider these days.
I'm just surprised AWS is having trouble with their demand for servers.
It's a shame that NPO charges money for this, but it's not so much the content they're charging for as the cost of the extra bandwidth.
The numbers:
Suppose an average (internet) TV viewer spends 25% of their viewing time on NPO channels.
NPO share * days per month * hours per day * GB per hour of full-HD viewing * cost per GB
0.25 * 30 * (183 / 60) * 1.8 * (€0.018) = €0.74 per user per month
Of course it's an exaggeration that the average user would watch this much NPO over the internet. That's why I think a figure of around €0.07 per user is much more realistic. But even then, the bandwidth isn't free.
I did the math.
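The arithmetic above checks out:

```python
# Verifying the per-user bandwidth figure from the formula above.
npo_share = 0.25          # fraction of viewing spent on NPO channels
days = 30
hours_per_day = 183 / 60  # 183 minutes of average daily viewing
gb_per_hour = 1.8         # full-HD stream
eur_per_gb = 0.018        # assumed bandwidth cost

cost = npo_share * days * hours_per_day * gb_per_hour * eur_per_gb
print(f"EUR {cost:.2f} per user per month")  # EUR 0.74
```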
From the FAQ: Q. Is it okay for me to use my own servers? Yes. You can use hardware you own and operate for your game.
It's referring mostly to services similar to AWS. Incidentally, I see Polygon has already managed to put the most negative spin on this development they could possibly manage.
This office will primarily be for the Amazon Web Services (AWS) sales and marketing team. AWS is one of their fastest growing segments and they need a local presence for client meetings/presentations and for a place for the sales team to hang out.
I've had various conversations with AWS employees and they were barred from even having meetings in Illinois until this past January (or February?) since Amazon wasn't collecting sales tax. Basically Amazon was telling Illinois that they weren't conducting any business in the state, and this meant no face-to-face meetings. Now that Amazon collects sales tax in Illinois, they can have employees and conduct business meetings here.
I also expect an AWS Loft space like they have in San Francisco, New York, London, and Berlin to open up in the next 12-18 months.
I believe you can use something like this to move it to S3 storage or even Glacier.
https://aws.amazon.com/snowball/
http://docs.aws.amazon.com/AWSImportExport/latest/DG/createGlacierimportjobs.html
PSA: This game engine limits you to either your own servers or Amazon's Web Services for multiplayer in your games.
And also, the terms of service of AWS/Lumberyard: (emphasis mine) are pretty funny:
> 57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.
Amazon's DNS service. Makes life nice that all your DNS uses anycast from the nearest Amazon datacenter. We literally waste more time/money reviewing and forwarding the bills to AP than we spend on the DNS service.
I don't pay by the GB but I do have various options for how many bits per second bandwidth I want to buy. They make it a flat rate for a given speed because customers like it that way, but they set the rate high enough so it's profitable. In fact, Time Warner's broadband profit margin is 97%.
Edit: But businesses serving content often pay by the GB. Netflix for example is hosted on Amazon's cloud, which charges about a nickel per GB for data transfer out. An hour of high-def is about 1.7 GB so every time you watch a one-hour episode of something it costs them about 8 cents.
Amazon has two regions for the US government. Govcloud is the generic one, and they recently announced this: https://aws.amazon.com/blogs/publicsector/announcing-the-new-aws-secret-region/ for top secret stuff.
S3 also allows encryption at rest.
They also provide tools that automatically check S3 buckets for misconfigured access and alert on it. Before they provided the tool directly, you could easily automate your own, and various security scanners like Nessus would alert on public buckets too. This company just didn't follow proper security procedures.
Yes they do. However, it's applied on a per-billing cycle basis (i.e. monthly).
This outage would fall into the category of ">=99.0% but less than 99.9%" uptime for the cycle, which results in a 10% service credit on your S3 spend during that month.
Also, in order to receive the credit, you have to provide logs to their support team to prove you were affected!
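The banded credit logic looks roughly like this; the 10% band matches what's described above, while the 25% band for sub-99.0% uptime is my recollection of the published SLA, so verify against the current terms:

```python
# Illustrative version of the banded S3 SLA credit. The 10% band is as quoted
# above; the 25% band for uptime below 99.0% is an assumption to complete the
# sketch. Check the current SLA page for exact thresholds.
def service_credit_pct(outage_hours, hours_in_month=720):
    uptime = 1 - outage_hours / hours_in_month
    if uptime >= 0.999:
        return 0    # within the SLA, no credit
    if uptime >= 0.99:
        return 10   # >=99.0% but <99.9%
    return 25       # below 99.0% (assumed top band)

print(service_credit_pct(4))  # a 4-hour outage -> ~99.44% uptime -> 10% credit
```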
Amazon Web Services offers many public datasets, and you can spawn an instance with the dataset as a mounted volume. You'll still need to figure out how to work with it, but it's quite a decent selection to mess with.
Amazon still offers a $0.00/hour tier for t2.micro for a year.
There is now a $0.01/hour t2.~~micro~~nano instance for pfSense on AWS. That's $7.20/mo for VPN where you control both ends.
(We'd have done $0.00/hour, but we weren't (then) at the right partner level. We've cleared that hurdle, and paid the fees, but it hasn't wound through the AWS maze as yet.)
Setting aside how stupid this proposal is, I think it's a bigger concern to worry about the false premise that it is based on. Namely, that the city can provide the infrastructure and staff to monitor such a system effectively. Not only is it incredibly onerous to maintain and store the amount of data generated by these cameras, but there is another concern that staff will simply not be effective at monitoring them. Is NOPD going to be able to respond in real time? Track record suggests no. How many people are going to need to be hired to comb through crime footage after a report has been made? What about bandwidth costs? What about the cost of installing Gigabit connections to each camera?
Using a quick Google calculation to estimate storage requirements for 1700 cameras that stream 24hrs/day, 7 days a week and store data for 30 days, and relying on AWS storage estimates, the MONTHLY cost of storage would be around $37,800 (roughly $450,000/yr). This assumes there are no other fees associated with data storage.
Keep in mind, this doesn't include the staffing budget, the maintenance fees, or the equipment fees for the cabling, switches, routers, and internet services. Nor does this include software licensing fees for the surveillance system.
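That monthly figure can be roughly reproduced with some illustrative assumptions (a ~3 Mbps per-camera stream and a ~$0.023/GB-month storage rate, both made up for the sketch):

```python
# Reproducing the rough monthly storage estimate. The per-camera bitrate and
# the per-GB storage rate are illustrative assumptions, not quoted prices.
cameras = 1700
mbps = 3                  # assumed per-camera stream bitrate
retention_days = 30
usd_per_gb_month = 0.023  # assumed S3 Standard-style rate

gb_per_camera = mbps * 10**6 * 86400 * retention_days / 8 / 10**9  # ~972 GB
monthly = cameras * gb_per_camera * usd_per_gb_month
print(f"~${monthly:,.0f}/month")  # ~$38,000/month
```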
> I disagree - Google owns a tremendous amount of internet infrastructure, including data centres and wiring (especially if Google Fibre pans out).
It may come as a surprise, but so does Amazon. If Amazon decided to turn off AWS, prepare to be bored. Bye, Reddit: it runs on Amazon's cloud platform. So do Netflix, Tinder, SoundCloud, Spotify, Slack, Airbnb, and LinkedIn, to name a few.
It's estimated up to 70% of the global internet traffic goes through Amazon's datacenters in Northern Virginia alone.
There's more to Amazon than the shopping website. Much, much more.
Great question! You can use SAM to build your serverless applications locally. See: https://github.com/awslabs/aws-sam-local/ SAM helps you generate your Lambda deployment packages and export them. Also check out our CodeStar service for automated deployments: https://aws.amazon.com/codestar/ -George
So you need to pay for storage space in S3, then the additional cost of data transfer.
250GB... US$6/mo. https://aws.amazon.com/s3/pricing/
Data transfer... 1GB/day or 30GB/mo.
S3 on its own, in Europe, 30x.09 = US$2.70.
Putting Cloudfront in front of S3, counterintuitively, actually looks to be cheaper. 30*.085=US$2.55.
So your ~US$130/mo for your current hosting is... not great.
EDIT: I misread. Corrected numbers.
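Spelled out with the per-GB rates quoted above (the free S3-to-CloudFront origin transfer is what makes the CDN path cheaper):

```python
# Comparing direct-from-S3 egress with CloudFront egress for ~30 GB/month,
# using the per-GB rates quoted above; check current pricing pages.
gb = 30
s3_direct = gb * 0.09        # S3 (EU) straight to the internet
via_cloudfront = gb * 0.085  # CloudFront to the internet; S3 -> CloudFront is free

print(f"S3 direct:      ${s3_direct:.2f}")       # $2.70
print(f"Via CloudFront: ${via_cloudfront:.2f}")  # $2.55
```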
There are whole services based around this idea of archival storage. For very long term storage it often surprises people to learn that tape is still the most cost-effective solution. (Albeit with very long delays which is fine for applications where you run a query and then expect the result in a few hours).
Since Amazon Lumberyard is based on CryEngine, is completely free, includes source, and comes with additional features, they had to come up with something good (or it wouldn't make sense to allow Amazon to modify their CryEngine).
I wonder (and hope) if it has something to do with their VR First program for universities... maybe a fancy VR editor? Designing & coding in VR would be very nice!
From Amazon RDS Supports Stopping and Starting of Database Instances:
> You can stop an instance for up to 7 days at a time. After 7 days, it will be automatically started.
L&T might not even have to develop these technologies in house. There are open source solutions. Plus, I believe both Amazon and Google have projects providing object recognition on their cloud platforms. Plus, they can always license it from a small company that specializes in this tech.
https://cloud.google.com/vision/
https://aws.amazon.com/rekognition/
They don't seem to offer facial recognition directly, though, probably for legal reasons.
Fun Fact:
Amazon has a version of their cloud specifically for US government agencies that is certified and legal to host sensitive data on.
https://aws.amazon.com/de/govcloud-us/
>AWS GovCloud (US) is an isolated AWS region designed to host sensitive data and regulated workloads in the cloud, helping customers support their U.S. government compliance requirements, including the International Traffic in Arms Regulations (ITAR) and Federal Risk and Authorization Management Program (FedRAMP). AWS GovCloud (US) is operated solely by employees who are vetted U.S. Citizens on U.S. soil, and root account holders of AWS accounts must confirm they are U.S. Persons before being granted access credentials to the region.
>AWS GovCloud (US) is available to U.S. government agencies and organizations in government-regulated industries, that meet GovCloud (US) requirements for access.
Not if you pay for an Amazon Snowball. Those babies can do like a petabyte per week.
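Some back-of-the-envelope arithmetic shows why physically shipping the data wins at that scale. A rough sketch (the link speeds are illustrative assumptions, not AWS figures):

```python
# Days needed to push 1 PB through a sustained network link.
PETABYTE_BITS = 8 * 10**15  # 1 PB = 10^15 bytes = 8 * 10^15 bits

def days_to_upload(link_bps):
    """Days to transfer 1 PB at a given sustained link speed in bits/second."""
    return PETABYTE_BITS / link_bps / 86_400  # 86,400 seconds per day

print(round(days_to_upload(1 * 10**9), 1))   # 1 Gbps  -> 92.6 days
print(round(days_to_upload(10 * 10**9), 1))  # 10 Gbps -> 9.3 days
```

At those timescales, a courier with a box of disks has better effective bandwidth than most corporate uplinks.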
That is, assuming he's running his OWN PMS in AWS.
This may be way off base, depending on your needs, but have you considered a cloud computing platform, like Amazon EC2? They now offer GPU focused instances: https://aws.amazon.com/ec2/instance-types/
This would save you a lot of capital expense, and depending on how much compute time you need, might end up being much cheaper in the long run.
Set up a cross-account access role rather than using the root account's credentials. The top-right corner of your console will show the name of the account (you choose an arbitrary name plus color coding).
https://aws.amazon.com/blogs/aws/new-cross-account-access-in-the-aws-management-console/
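The mechanics, roughly: the target account defines a role whose trust policy names the account your users sign in from, and users then "switch role" in the console. A minimal sketch of such a trust policy, where `111111111111` is a placeholder for the trusted account ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
    }
  ]
}
```

The MFA condition is optional, but it's a common safeguard when the role carries admin-level permissions.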
It would have taken a few seconds of simple googling, and if you had been following industry news it was a big story at the time. Someone not giving you tons of links to prove it doesn't mean it isn't true, and it's also apparent you just wanted to be a naysayer without doing any research yourself.
You can be skeptical or curious, but if you're relying on others to do a simple Google search, try to avoid saying things are incorrect when you don't have any pre-existing knowledge.
https://aws.amazon.com/federal/
Yelp, Netflix, NASA, CIA, FBI, FDA, FINRA, Healthcare.gov, Nokia, Comcast, Conde Nast (and reddit), Intuit, etc.
They're not all listed, some would prefer to fly under the radar (like the FBI/CIA and other DoJ agencies) but a significant number use Amazon Web Services.
Considering that it's Microsoft, Amazon, and Google that are moving on this issue... I think it's a cloud security bug.
Cloud is different: servers run code from the customers and rely on chip-level security to make sure that customer #1 can't see the data of customer #2. Imagine, if you will, that you could see the memory of all the other customers simply by buying an AWS node and scanning around.
So consumers probably won't have to deal with this bug. On the other hand, Cloud Compute is where the money is right now. So maybe you'll care if you bought AMD or Intel stocks...
IIRC, Overwatch uses AWS for its servers, and AWS doesn't have servers in Africa, which is why there's no Africa region. Amazon doesn't think it's profitable to put servers there, since only around 30% of the population even has internet access.
This! Keep all your gear, but stash most of it in a closet somewhere and use only 1-2 servers for physical hardware testing/development. Then, use AWS instances to build up a steady revenue stream. Once you can afford a colo (know that some smaller places will be much cheaper than high-profile datacenters), then ditch AWS and go to physical hardware only. Also check out https://aws.amazon.com/activate/
While Spark may seem shiny, it's overkill for small-to-medium data science projects. Using it in standalone mode on your local computer to practice thinking in map-reduce isn't a bad idea, but it may be hard to build a compelling project out of it.
Spark really is about large scale data. So I'd use it to explore large datasets on AWS. Insight has a post on how to do this - http://blog.insightdatalabs.com/spark-cluster-step-by-step/ - and I'd check out the AWS large datasets collection too - https://aws.amazon.com/public-datasets/
But if your data is less than 20-30 gigabytes, Spark really is overkill. If anything, figuring out how to write efficient Python (or R, etc.) code to analyze ~20 GB of data will force you to become a better engineer and data scientist (versus using Spark to easily/quickly process it).
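The "efficient Python" point usually comes down to streaming the data instead of loading it all into memory. A minimal standard-library sketch (the file path and column names are made up for illustration):

```python
import csv

def total_by_key(path, key_col, value_col):
    """Stream a CSV of arbitrary size, keeping only one running total
    per key in memory rather than the whole file."""
    totals = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            k = row[key_col]
            totals[k] = totals.get(k, 0.0) + float(row[value_col])
    return totals
```

Because it processes one row at a time, this handles a 20 GB file in a few MB of RAM; swapping the dict for a proper aggregation library comes later, if at all.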
Indeed.
What's interesting is that many of the even worse events seem to boil down to systemic issues that a single employee gets blamed (scapegoated?) for.
For example, data with no backups, that's an issue that was going to reveal itself sooner or later. Just so happened that they'll blame the employee for it rather than WannaCry. But the result is ultimately the same. They lacked the systems and policies to correctly protect key information.
Or this:
> an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.
And while credit to Amazon, they didn't scapegoat anyone (that we know of), it just goes to show that a lot of "big deal" problems are systemic in nature. Just quietly waiting to be brought out in a big way.
This is why I think we could all learn a lot from the NTSB's investigations into aircraft crashes. When they look into these things they aren't looking for an individual or scapegoat, they boil the problem down to how the system put that individual into a position where they could screw up (be it poor procedures, poor training, poor equipment, or a million other issues).
Every time a colleague or subordinate makes a mistake, the first question that should get asked is: How could broader department policy have prevented or mitigated this?
Which is, incidentally, what Amazon is likely doing about this. From their postmortem:
> While removal of capacity is a key operational practice, in this instance, the tool used allowed too much capacity to be removed too quickly. We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level.
Sounds like they're adding a --I-understand-that-this-will-destroy-a-whole-zone flag.
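The safeguards Amazon describes are easy to picture in code. A toy sketch (names and numbers invented, not Amazon's actual tooling):

```python
class CapacityPool:
    """Toy model of a server fleet with guardrails on capacity removal."""

    def __init__(self, servers, min_required):
        self.servers = servers
        self.min_required = min_required

    def remove(self, count, max_batch=2):
        # Safeguard 1: remove capacity slowly, in small batches.
        if count > max_batch:
            raise ValueError(f"refusing to remove more than {max_batch} servers at once")
        # Safeguard 2: never drop below the subsystem's minimum capacity.
        if self.servers - count < self.min_required:
            raise ValueError("removal would take subsystem below minimum capacity")
        self.servers -= count
        return self.servers
```

With this shape, the fat-fingered "larger set of servers than intended" input fails loudly instead of executing.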
AWS has a quick-start for VPC architecture that you can look at to see how they create a full stack with subnets, route tables, etc. ( https://aws.amazon.com/quickstart/architecture/vpc/). You might also look at the other quick start examples they have available. All of the quick starts have sample templates you can look at and see how they define the details and properties for each resource.
Terraform is (I think) one of the best alternatives to CloudFormation (https://www.terraform.io). You might look at that and see if it does what you need, however, that will then introduce something else to learn.
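For a taste of Terraform's take on the same idea, here's a minimal sketch of a VPC with one subnet (region and CIDR ranges are arbitrary examples, not a recommendation):

```hcl
provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "10.0.1.0/24"
}

resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.main.id}"
}
```

The appeal over raw CloudFormation JSON is mostly readability and the plan/apply workflow, at the cost of learning another tool.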
Hopefully this helps!
What a world when we can use a server (probably) in California to communicate with people all over the Valley to discuss a power outage we're currently experiencing.
I guess since phone lines are separate from power, this sort of thing has been theoretically possible as long as I've been alive, but it sure feels different.
Everyone so far seems to have sugarcoated just how hard it would be to scan an animal and figure out what it is, so I guess I have to be that guy. Trying to do the image recognition yourself would be practically impossible. I would definitely encourage you to read the paper that was posted in the thread by Patman128, but let me summarize it. A team of computer science researchers used a sophisticated AI running on a supercomputer and fed it 3.2 million images that were categorized by nearly 70,000 volunteers, and were eventually able to get it to recognize a whole 48 different species with 92% accuracy. This paper was just published in April and AFAIK this is pretty representative of the state of the art in that kind of image recognition. 48 species isn't very much in the grand scheme of the number of species in any medium-sized bio-dome. So even if they were to open source this and allow you to use it, I don't think it would be very close to the end product you are envisioning. Maybe in a few years the technology will be there.
Edit: I would agree that this shouldn't discourage you from trying out hard projects first, you do learn more from the harder projects. Unfortunately, a "hard project" vs. this project is more akin to wanting to build a custom car engine in your garage (hard) vs. wanting to build a nuclear fusion device in your garage (no one's figured it out yet, and plenty of people are trying).
Edit 2: If you would like to give it a go anyway using an image recognition service, this might be your best bet. But I'm not sure how well they've trained it for animals.
You can do all of this troubleshooting and mucking around, or you can just look at the free cloudwatch metric: https://aws.amazon.com/blogs/aws/new-burst-balance-metric-for-ec2s-general-purpose-ssd-gp2-volumes/
Also, provisioned IOPS aren't the only solution. Increasing the size of the EBS volume (over 100 GB) raises your performance threshold.
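That's because gp2 baseline performance scales with size. A rough model, using the figures from AWS docs of that era (3 IOPS per provisioned GB, floored at 100 and capped at 10,000; check current limits before relying on them):

```python
def gp2_baseline_iops(size_gb):
    """Approximate gp2 baseline IOPS: 3 IOPS/GB, min 100, max 10,000."""
    return max(100, min(3 * size_gb, 10_000))

print(gp2_baseline_iops(20))    # 100  (tiny volume, floor applies)
print(gp2_baseline_iops(100))   # 300
print(gp2_baseline_iops(500))   # 1500
```

So a workload that exhausts its burst balance on a small volume can often be fixed by resizing the volume alone, no provisioned-IOPS premium needed.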
Docker doesn't really come into play here, you would see the same issue without it. Whatever process he's running on that server is exhausting the available storage throughput.
As always, remember the USE method of troubleshooting. Utilization, Saturation, Errors: http://www.brendangregg.com/usemethod.html
Amazon has a good article on setting up bastion hosts. You may also want to look into auditd. https://aws.amazon.com/blogs/security/how-to-record-ssh-sessions-established-through-a-bastion-host/
I think you're thinking of Teleport which unfortunately has been neutered of all enterprise features in the OSS edition (LDAP, SSO, etc.).
> [...] In order to meet the HIPAA requirements applicable to our operating model, AWS aligns our HIPAA risk management program with FedRAMP and NIST 800-53, a higher security standard that maps to the HIPAA security rule. NIST supports this alignment and has issued SP 800-66, "An Introductory Resource Guide for Implementing the HIPAA Security Rule," which documents how NIST 800-53 aligns to the HIPAA Security rule.
Per https://aws.amazon.com/compliance/hipaa-compliance/
Hopefully that answers your question? I'd recommend giving them a call to ask about the details that might apply to your specific environment requirements.
Not true
https://aws.amazon.com/lumberyard/faq/
> Q. Do I have to run my game on AWS?
> No. If you own and operate your own private servers, you do not need to use AWS. You also don't need to use AWS if your game does not use any servers. For example, if you release a free-standing single-player or local-only multiplayer game, you pay us nothing.
https://aws.amazon.com/lumberyard/
It's CryEngine, but the main reason is to sell cloud computing power on Amazon's network. If the next big game needs AWS to run the servers, Amazon will be coining it in.
Sign up for a free-tier EC2 instance.
When you're on it, type:
sudo apt install unzip
wget http://downloads.rclone.org/rclone-v1.30-linux-amd64.zip
unzip rclone-v1.30-linux-amd64.zip
cd rclone-v1.30-linux-amd64
sudo cp rclone /usr/sbin/
sudo chown root:root /usr/sbin/rclone
sudo chmod 755 /usr/sbin/rclone
Then use it with:
rclone config
Follow on screen instructions to setup for Google Drive
Then
rclone copy $REMOTENAME:$REMOTEPATH ./
unzip $FILE
rm -rf $FILE rclone-v1.30-linux-amd64.zip
rclone copy ./ $REMOTENAME:/Extract/
Same thing, here. Also in Australia.
In the Personal Health Dashboard, there is now this clarification:
> You recently received an email from us regarding "Free Tier Limit Alert", the forecasted numbers are based on the Service usage for December 2017 and are for the Billing Period December 2017, this does not mean that you will be charged.
>
> Please access your AWS account to review your service usage and, where necessary, adjust your usage. You can find more information on AWS Free Tier here: https://aws.amazon.com/free/
>
> Should you have closed your AWS Account within the last month you can ignore the previously sent email.
>
> Apologies for any inconvenience caused due to this.
Unfortunately no, you can't. IIRC Ubisoft uses Amazon's servers.
So if they don't expand their servers (which they apparently are doing), Ubi won't either.
Senior IT professional here. You can store objects on AWS GovCloud (government use only) S3 (ultra-reliable object storage) for $0.0200 per GB. Or you can put them on Glacier (ultra-reliable archival storage) for $0.006 per GB. Both options are extremely cheap for this application.
If you want to learn, don't do it on Windows. Linux will make things much easier for you. On Windows you'll waste too much time on problems and limitations the OS throws at you. It's not very developer-friendly.
As someone else mentioned before, there's plenty of free resources out there
I'm not sure what kind of cameras they use, but just to get a ballpark estimate let's assume it's 720p @ 30fps, which often takes 17Mbps or ~6GB/h. At that rate a 1TB hard drive can store 167 hours of footage. If we assume the cameras are on for 8 hours per day, that means 1TB is capable of storing 20 days' worth of an officer's body cam footage. Now, cameras won't be running for a full 8-hour shift, and you have days off, so we'll call it approximately 1TB per month per officer.
Assuming that they keep the data for a year before erasing it, you'd need to allot 12TB per officer. Storage services like Amazon Glacier cost $0.004/GB per month or $4/TB per month. At 12TB per officer, that brings the total price to $48 per month or about $600 per year per officer.
This doesn't take into account any sort of compression, or the higher rate that sensitive data would probably go for, but should be an okay ballpark estimate. Minneapolis PD has ~800 officers, so that brings the total cost of raw data storage to ~$500,000 per year. Not exactly cheap, but even if I'm off by an order of magnitude, and you add in overhead costs it's still not even close to tens of millions.
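The arithmetic above, spelled out (all inputs are the estimates from the comment, not official figures):

```python
TB_PER_OFFICER_MONTH = 1          # rough footage estimate from above
MONTHS_RETAINED = 12              # one-year retention assumption
GLACIER_USD_PER_GB_MONTH = 0.004  # Glacier list price at the time
OFFICERS = 800                    # approximate Minneapolis PD headcount

tb_stored_per_officer = TB_PER_OFFICER_MONTH * MONTHS_RETAINED  # 12 TB
monthly_cost_per_officer = tb_stored_per_officer * 1000 * GLACIER_USD_PER_GB_MONTH
annual_cost_per_officer = monthly_cost_per_officer * 12
total_annual = annual_cost_per_officer * OFFICERS

print(monthly_cost_per_officer)  # 48.0   USD/month per officer
print(annual_cost_per_officer)   # 576.0  USD/year per officer (~$600)
print(total_annual)              # 460800.0 USD/year total (~$500k)
```

Even doubling every input keeps the total around $1M/year, nowhere near the tens of millions sometimes claimed.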
For those wondering how to protect against this sort of breach, AWS Config Rules can help out tremendously. It allows you to specify rules matching certain conditions (EBS volumes with the "public" bit set, for instance) and then run an action (alert, mitigation, etc.) in response to events that match the rules.
I have had a 25 year career in tech and now am very fortunate that I can pretty easily find work at that pay level. Having come through 2-3 brutal recessions I do not take it for granted and we save a ton.
The S in STEM is tough. If you want to accelerate into more money quickly I'd look to do a pivot into sales (pharma, biotech, healthcare focused tech, etc.). There are also loads of tech companies that focus on the sciences. I was just looking at one today to potentially partner with http://www.verato.com/. I do not know anything about them yet but they may be a good example of the type of company you could work for in order to accelerate earnings.
All big tech companies have Healthcare and Life Sciences vertical people.
Here are AWS and Microsoft's for example.
https://aws.amazon.com/health/
https://enterprise.microsoft.com/en-us/industries/health/
> Amazon S3 Standard and Standard - IA are designed to provide 99.999999999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects. For example, if you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years. In addition, Amazon S3 is designed to sustain the concurrent loss of data in two facilities.
https://aws.amazon.com/s3/faqs/
Note durability != availability.
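The "once every 10,000,000 years" figure follows directly from the durability number:

```python
annual_loss_fraction = 1e-11  # 99.999999999% durability -> 10^-11 expected annual loss
objects_stored = 10_000

# Expected number of objects lost per year across the whole set.
expected_losses_per_year = objects_stored * annual_loss_fraction  # ~1e-7

# Mean time until a single object is lost.
years_per_single_loss = 1 / expected_losses_per_year

print(round(years_per_single_loss))  # 10000000 -> one object every ~10,000,000 years
```

Availability is the separate question of whether you can reach the object right now; an S3 outage makes data unavailable without making it any less durable.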
I think it comes down to motivation, time and ownership.
Games have quite a few things against them:
They can be pretty complex which makes them costly in terms of time and effort.
Games do not solve problems the way a tool you would use frequently does.
The people who enjoy making games are not necessarily the people who enjoy playing them. For instance, I enjoyed working on my game engine a lot but didn't care much about a story. I was more attracted by the technical challenges :)
Games relying on stories have a very limited lifetime from a player's perspective (compared to my text editor or email setup, which I have been using for decades). So what happens after the game is finished? Why would people come back to it?
A tool is a tool. No one cares about feelings as long as it fulfills a need. But games also touch on the arts, and people start to think about "their" story, "their" artwork. And why would they want to implement your vision when they can implement theirs?
All of this makes it difficult to end up with something polished and nice. That said there have always been quite a few tools and framework for open source games such as SDL, PyGame, Crystal Space, Ogre3d, JavaMonkeyEngine and more recently https://aws.amazon.com/lumberyard/
AWS just announced and released Fargate, which lets you deploy Docker containers to ECS without having to manage a cluster of EC2 instances; that might be something for you to consider: https://aws.amazon.com/blogs/aws/aws-fargate/
I've been working with it since they released it and it is nice, but a little expensive, so you definitely have to weigh the cost against managing your own instances. Also keep in mind that, depending on your setup, letting ECS handle blue-green deployments requires n+1 instances to deploy a new container version unless you're using dynamic port mapping, so that's additional compute cost to weigh when looking at Fargate.