Not that I expect streamers to understand this, but that's not how AWS works.
With AWS, you don't typically purchase hosts and then add instances to that single host. You can do that through the Dedicated Hosts service, but that's far from typical and is much more difficult to manage as you need to employ people familiar with its nuances. Usually dedicated hosts are used when an organization has some weird licensing issue; for example, if your Oracle license costs $X per CPU that it could possibly run on (yes, that is one of their pricing models) you may want to use a dedicated host to control that CPU count.
Usually you simply purchase instances and allow Amazon to decide which host they will run on. This makes it essentially impossible for you to run into an overcrowding issue.
The only problem I could see with AWS that would match what you described is if Bluehole was using undersized instance types. They could have used oversized instances for the closed alpha, and since they were only running a few servers the extra cost wasn't a problem. For the EA launch, though, they probably would have switched to instances with a tighter tolerance in order to save money. Those instances are probably just barely powerful enough to accommodate the game server, but they cost a lot less. If you're going to launch an autoscaling infrastructure you want to get the cost just right, because paying an extra $0.10 per hour per server across 200 servers costs you $14,400 a month that you don't need to spend. And sometimes that 200 scales up to 2,000 or even 20,000 -- good, because it means your game is popular, but also bad, because you're spending a ton of extra money.
TL;DR: it's possible that the AWS instances are undersized, but the streamers probably don't know what they're talking about.
Not strictly true; cloud servers usually have multiple pricing options, and the longer the lease you sign, the cheaper the server.
See https://aws.amazon.com/ec2/pricing/ - scroll down to 'reserved instances'
> metric fuck ton of IRL silicon and budgets in the tens of millions
Have you heard about our lord and saviour AWS?
Far more importantly: you can run an AI on consumer-grade GPUs, if more slowly, and those can be bought off the shelf. So unless every GPU ever made is regulated, you cannot block this, only limit its scope.
As a side note: an artificially intelligent virus that hijacks cryptomining rigs? Infinite GPU compute for cheap.
Each record is, say, 100 bytes comprising source, destination, timestamp, and duration. That is about 8TB uncompressed, which could be stored in RAM on 5 X1 EC2 instances.
They could maybe load it into their in-house graph database.
BTW, with AWS you can have dedicated instances. If you have a single point of failure I would suggest using a dedicated instance. In that case your instance is just like one from any other managed server provider. https://aws.amazon.com/ec2/purchasing-options/dedicated-instances/
With AWS, and maybe Azure, you can pay extra for a dedicated machine. For AWS it's called Dedicated Hosts.
I do agree that this is not the usual way people use them; the vast majority of people are using instances that could be on any host, shared with random others.
The more likely switch is: People who already use SQL Server, especially if they use it in apps written in C# and running on .NET, will now be able to run their entire stack on Linux.
If it performs comparably, it will, at the very least, save you money on Windows licenses. Which, in some cases, can be a considerable overhead. Especially for something like a database, where you're going to want to run it on the biggest machine you can -- the difference between Windows and Linux is, on many of those options, almost the difference between instance tiers.
Like, you could run on Windows with 64 gigs of RAM, or pay about half as much and run on the same hardware with Linux, or pay about the same (slightly more) and run on Linux with 160 gigs of RAM.
I'm just counting the Windows license. I'm sure many people will already be paying for the Windows + SQL Server license, and for that matter, SQL Server may well cost enough that these hardware differences are relatively small. But they still seem pretty significant to me.
And... I wouldn't be at all surprised if that undercut Oracle.
This already exists:
https://aws.amazon.com/ec2/amd/
If demand for this service goes up, Amazon will simply buy more AMD servers. Over time they will reduce the number of Intel servers. Simple supply and demand.
Hey friend, did you completely miss this episode?
Introducing Amazon EC2 Instances Featuring AMD EPYC Processors
https://aws.amazon.com/ec2/amd/
They're listed on the docs: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t-how-to.html#t3-how-to
They're a bit cheaper than T2, at least according to the instance reservation pricing in the console.
EDIT: https://aws.amazon.com/ec2/instance-types/t3/ (pricing is also up)
Blog: https://aws.amazon.com/blogs/aws/new-t3-instances-burstable-cost-effective-performance/
Just did some very SWAG math on this.
Yesterday there were 897k Crucible players. Assume one match per player:
That makes 897k matches. With a match length of 0.1h, we have 89,700 compute hours. We'll assume we can handle 16 matches per CPU (very low tick rate of 10). Using public AWS pricing for General Purpose instances -- which nobody actually pays, much less the pricing ActiBungo could negotiate -- we get:
Daily price between $260.13 and $280.31. Monthly between $7,803.90 and $8,409.38. Annually between $95,009.88 and $102,381.34. Or, in $60 sales, 1584 to 1707 games sold.
Scaling-wise, we're definitely not talking crazy increases. This also doesn't take into account the support cost for the infrastructure, but that should be <$1m in total (a couple of sysadmins, network engineers, a PM, a manager...).
Note: This is one day's snapshot extrapolated to a daily average, which is not likely accurate.
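For anyone who wants to poke at the arithmetic, here's the same SWAG math as a script. The per-CPU-hour price band is not an AWS quote; it's inferred by working backwards from the daily totals above:

```python
# Back-of-the-envelope Crucible server-cost math from the comment above.
players = 897_000          # one match per player assumed
match_hours = 0.1          # match length
matches_per_cpu = 16       # assumed, at a low tick rate of 10

compute_hours = players * match_hours          # 89,700 match-hours/day
cpu_hours = compute_hours / matches_per_cpu    # ~5,606 CPU-hours/day

# Hypothetical public on-demand price band per CPU-hour,
# reverse-engineered from the $260.13-$280.31 daily figures:
for price in (0.0464, 0.05):
    daily = cpu_hours * price
    print(f"${daily:,.2f}/day, ${daily * 30:,.2f}/month")  # annual ~= 365 x daily
```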
>Cloud service providers won’t drop their prices because of cheaper CPU cores. The savings on the CPU cores in the grand scheme of things is minimal.
But you can get AMD cores 10% cheaper on AWS?
Yeah, right... if only there were a scalable service that automatically adds more servers when needed... something like AWS, Azure, Digital Ocean... but unfortunately I don't think that exists yet... /s
> FP16
Is provided at a larger scale by the P100 and V100. The P4000 just needs enough FP16 performance to test an algorithm before dumping it onto a machine-learning-focused server.
It's a 'nice to have', but it's really not the focus of workstation-grade GPUs, because you can just fire up an EC2 P2 instance at pretty much any time if you need to run your machine learning algorithm.
>Gaming Drivers
Doesn't need them; it can play games on Quadro drivers just fine, with no more than a 1%-2% penalty compared to the GTX 1070.
>AMD is positioning this as a Titan competitor
They absolutely are, which is why it really needs performance on both sides of the fence. It needs to work well enough in professional applications to not slow you down, and it needs to game near enough to the top of the stack to help justify its enormous price tag.
VEGA FE right now only has one of those bases covered.
Y'all are missing the point, a little bit, although the answers here are pretty good. There's a huge push right now to integrate FPGAs and CPUs into the same system, often on the same chip.
Amazon AWS F1 FPGA-accelerated instances -- these get you an instance that can do both.
Xilinx Zynq ARM + FPGA on one chip
Altera Stratix 10
This was the biggest reason for Intel's purchase of Altera: to be able to use FPGA resources to hugely accelerate certain tasks which are inefficient in software. There is a whole class of problems which are difficult or inefficient to solve on a CPU, but which an FPGA can handle extremely well. I'm a comms guy, so I think of: LDPC decoding, the Viterbi algorithm for MLSE, long FIR filters. If you can load an FPGA "kernel" as a coprocessor, you can massively increase the throughput of your system -- which is another way of saying you can decrease the hardware cost you need for a given problem.
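If "long FIR filters" sounds abstract, here's a rough Python sketch of the shape of that workload; the tap and sample counts are made up for illustration:

```python
import numpy as np

# A long FIR filter: every output sample is a multiply-accumulate
# across all taps. This structure pipelines beautifully on FPGA
# fabric (one output per clock) but eats CPU cycles.
taps = np.random.randn(1024)            # 1024-tap filter (arbitrary)
signal = np.random.randn(1_000_000)     # 1M input samples (arbitrary)

filtered = np.convolve(signal, taps, mode="valid")
# ~1e9 multiply-adds on a CPU; an FPGA can emit one output sample
# per clock cycle regardless of how many taps there are.
```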
The "1 GB normal data" to the internet has always been free forever. The post doesn't say "no longer limited to the first 12 months" because it never was limited like that.
See here: https://aws.amazon.com/ec2/pricing/on-demand/ or here: https://aws.amazon.com/s3/pricing/
The first 1GB of egress to the internet is free as part of the individual service's ongoing pricing. It's never been part of the 12 months "Free Tier".
How your business model works should be carefully reflected in your AWS architecture, and I have to respectfully agree with others that if you're asking questions like this, you are very, very, very far behind the curve necessary to make this a successful venture. Even companies with millions of dollars to spend just building out an AWS environment get this stuff wrong all the time (and it takes years to fix).
The competition is extremely stiff and margins are slim. Most companies that are able to sustain this business in developed countries (it's perhaps possible in Eastern Europe) don't even make hosting their primary value - they up-sell to marketing departments and integrate with tools like Salesforce and Pardot. Look at companies like Pantheon or, arguably, Squarespace.
Furthermore, you should be able to do some quick math (< 5 minutes; the AWS website is available in many languages) and realize how much you'd have to charge customers just to break even on an unused Wordpress site, without tricks like reserved instances - AWS instances are billed by the hour [1] and so are RDS instances [2]. Even for a barebones site you'd be looking at $80/mo, which almost nobody will pay for a basic Wordpress site. So the only way the economics work out is to charge a lot more than $80/mo for a Wordpress site (which nobody does for just a Wordpress site, like I said, unless the customers are truly ignorant and can be swindled - usually, as in Pantheon's case, it's marketing tools bundled in), or to stuff a lot of customers onto each RDS database and EC2 instance. That immediately destroys the one-customer-per-account scheme that others have been talking about with Control Tower and AWS Organizations.
[1] https://aws.amazon.com/ec2/pricing/ [2] https://aws.amazon.com/rds/mysql/pricing/
If you're only doing this for testing purposes, consider using spot instances. You should receive a significant discount on the price, and if Amazon terminates the instance because you're outbid, you won't be charged for the partial hour. You can also specify a defined duration with your bid, which guarantees the instance will run in hourly increments of up to 6 hours, still with a decent discount. I'm not sure what your use case is or how long you're planning to run the instance, but it's something to consider.
Play around with the spot bid advisor.
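If you go the defined-duration route, a minimal boto3 sketch looks something like this; the bid, AMI ID, and instance type are placeholders, not recommendations:

```python
import boto3

ec2 = boto3.client("ec2")

# Request a spot instance with a defined duration ("block duration"),
# so it runs uninterrupted in hourly increments up to 6 hours.
response = ec2.request_spot_instances(
    SpotPrice="0.10",                  # max bid in $/hr (placeholder)
    InstanceCount=1,
    BlockDurationMinutes=360,          # multiples of 60, up to 360
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
        "InstanceType": "c5.large",           # placeholder type
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```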
My thoughts on this are that you get rid of the small server "doing nothing" and replace that with serverless code - maybe a Lambda triggered on the starting event. When the event fires, the Lambda starts, and launches the big server. The lambda will run for about 3 seconds.
(StackOverflow post showing how to do this in Python)
Your big server is normally non-existent. But when you need to do the work, the Lambda launches the big server as an EC2, maybe from an AMI that you have built, or maybe using a Launch Template. When the big server is done, it stores the results somewhere else (ideally S3), and terminates.
This approach gets around Lambda's 15 minute time limit (you said the big server can take 2 hours) but you also take advantage of Serverless for the "idle" part. The challenge will be capturing the job request, but there are lots of ways of doing this, and lots of events (even external to AWS) that can be used to trigger a Lambda to launch an EC2.
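A minimal sketch of the Lambda side, assuming you've pre-built a Launch Template for the big server (the template name here is hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Fired by the triggering event; launches the "big server"
    # from a pre-built Launch Template and exits in a few seconds.
    response = ec2.run_instances(
        MinCount=1,
        MaxCount=1,
        LaunchTemplate={"LaunchTemplateName": "big-server-template"},  # hypothetical name
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Launched {instance_id}")
    return {"instance_id": instance_id}
```

The big server itself is then responsible for writing its results to S3 and terminating when done.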
Over time, you can experiment with different instance types. For example, z1d has a 4 GHz processor. Add timing to your process, and monitor the living daylights out of it to gain metrics which you can use to optimize it.
If this is a server you want to open up to the internet, definitely don't host your own server. It's not safe (nor allowed by many ISPs) to act as a server. What you should do instead is use a service like Amazon Web Services Elastic Compute Cloud (AWS EC2) to host a virtual server at a remote location. And this will be far cheaper than trying to build your own computer.
I'm sorry to hear about your friend.
I'm not an expert on AWS because 99% of my experience is in Azure, but I think your friend had an AWS EC2 instance (https://aws.amazon.com/ec2/instance-types/). This is a virtual server running on AWS. He specified the ID for that instance in a configuration. The script then connects to that instance, using his credentials, and runs Minecraft on it. That is why the IP changes - the instance ID remains the same, but every time you ask AWS to give you a server for that instance, it spins up a new one and gives it to you. Beyond that, no real clue - it's open source; maybe post an issue on the repository explaining the situation and one of the people who has contributed to it will take it over?
What you would need to do to take it over is create your own AWS account and your own EC2 instance, get its ID / credentials, and set this script up to run with those.
It is possible to stream to multiple devices via IP multicast - the network then copies at each step. However, due to compatibility, billing and security reasons, IP multicast is not available on the general Internet.
Another alternative is using a peer-to-peer network, with the central servers being part of the network and/or allowing conventional downloads on top of it. For instance, Windows Update uses this, but so do illegal torrents, like those listed on The Pirate Bay. Since mobile apps or web browsers run only while the user is watching the video, and many users have restricted or metered upstream capacity, this is not as feasible for HBO.
So what HBO or their technical provider does is indeed simply have a lot of servers. Each server stores a copy of the most-requested videos - that's called caching.
For every device, the server sets up an encrypted connection, encrypts/obfuscates the video so that it's harder to extract as a file (this is called Digital Rights Management, DRM) and sends it to the client. Compared to other tasks like video re-encoding, database aggregation, or searches, this is quite easy work for a computer. A 10 Gbit/s server can serve 2000 users at once while being about as powerful as a good gaming PC, although of course with a different focus - these servers will have lots of RAM and use graphics cards only for encryption, if at all.
HBO does not need to buy thousands of servers just for the premiere; they can adapt to demand by renting servers for a couple of hours, using services like Amazon EC2.
Generally you'll find on professional pentests you won't really need GPU horsepower beyond your laptop/PC. My approach with the client is usually to agree that at some point in the near future I would have cracked the hash I've found, so to increase the value of the pentest and to get better value for money, let's just agree I would have gotten a valid password. I then have the client set up a user account with representative access and rights to those of the account I have the hash for, and move on with the pentest.
Also, make sure someone else is paying for the EC2's you're using! Check these out:
https://aws.amazon.com/ec2/instance-types/p3/
To give you an idea of cost, a close friend of mine who heads up the new Red Team at a software vendor who may or may not have been at the centre of a supply chain attack in December, just spent $18,000 on a P3 instance cracking hashes. Again, make sure someone else is paying for it!
I'm always curious about how much the cloud compute would cost for these crazy papers. 92 GPU YEARS. Some quick maths based on aws on demand hourly rate is 92*365*24*3.06 = $2,466,115 (but it appears that based on the "3-yr Reserved Instance Effective Hourly" you get down to $846,216 - I'm using the single gpu instances here: https://aws.amazon.com/ec2/instance-types/p3/)
that's bonkers.
I'm sure Nvidia has their own massive clusters, but still, it's just such a huge magnitude. Also want to note I SUPER appreciate the final page of the paper discussing the routes and methods they pursued, and breaking down training cost/time by area of the paper's development. It was super interesting to read.
Think of Netflix and Youtube.
If you have 3000x slowdown but save 50% on bandwidth (and assuming CPU-time costs the same as bandwidth), you only need to show the video 6000 times before you have overall net savings.
And those are pretty conservative assumptions: CPU time is virtually free these days (lots and lots of idle computers), while bandwidth costs a lot of money for Youtube / Netflix / etc. If Netflix encodes most videos to H.265, and then highly watched videos (defined as anything with more than 5000-ish views) are re-encoded to AV1... Netflix / Youtube / etc. still benefit, even at 3000x slower encoding speeds.
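The break-even arithmetic, spelled out under the comment's own assumption that a unit of CPU time costs the same as a unit of bandwidth:

```python
# Views needed before a slower-but-smaller encode pays for itself.
encode_slowdown = 3000     # AV1 CPU cost relative to a baseline encode
bandwidth_saving = 0.5     # each view uses 50% less bandwidth

break_even_views = encode_slowdown / bandwidth_saving
print(break_even_views)    # 6000.0 views
```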
Bonus points: you can run AV1 encoding at night when spot-price of Amazon instances run cheaper. Or use low-priority "Preemptable" Google Cloud instances.
Bonus points #2: If you write a faster encoder on GPUs, FPGAs, or ASICs, then your cost/benefit analysis changes dramatically. Bandwidth is expected to be the primary cost driver of the future.
Super interesting. The future implications are promising. Once the sounds are categorized well enough, you could say "make me an instrument with these timbre characteristics" and it would do it.
Would love to train it on some very specific sounds (future house elastic bass, hardstyle screech, a whole bunch of supersaws). But from the repository readme:
>Training for both these models is very expensive, and likely difficult for many practical setups. Nevertheless, We've included training code for completeness and transparency. The WaveNet model takes around 10 days on 32 K40 gpus (synchronous) to converge at ~200k iterations. The baseline model takes about 5 days on 6 K40 gpus (asynchronous).
It's completely out of reach for an overwhelming majority of people. A single K40 retails for a little over $3k, nevermind the infrastructure to run more than one synchronously.
If you used EC2 P2 to run Tensorflow it could be just under $1/hr for a single K80 (asynchronous) or about $14/hr for 16 synchronous.
I'm interested to see how it's priced, although I can't imagine it'd be cheap. Off-hand, I'd expect it to cost at least as much as dedicated hosts plus some amount to cover VMware's support / software / licensing.
Still, I imagine it'll be pretty attractive to some just because of this: > I believe one of the strengths of VMware Cloud on AWS service is that it allows administrators, operation teams and architects to use their existing skill set and tools to consume AWS infrastructure.
If you're already built on VMware stuff, this means you basically get a lot of the elasticity of the cloud without having to do anything different. I can see how that's huge, but I'm expecting the pricing to show just how different this is (on the backend) than the public cloud model.
That is going to be fairly expensive. Not only will you need a lot of storage and bandwidth, you'll also need a server with enough transcoding performance.
OVH offers servers with an NVIDIA GPU for the bargain price of $2/hour, not counting storage or network costs.
AWS is cheaper, with a g4dn.xlarge at only $0.526/hour.
Bandwidth is charged per GB transferred. Disk space is per GB of total disk allocation.
You don't want to go this route, really. Not only the cost, but because they will shut you down as soon as they find out what you're doing.
I only have 25mbps upstream and I manage to get by just fine. Upgrade your home internet when you can, cloud plex is a much worse option.
In order to run another virtualization platform in AWS (Hyper-V, VMWare, etc) you have to run a bare-metal instance. These are expensive because you're essentially paying for a dedicated server all for yourself. Bare-metal instances (any instance type ending in .metal, like z1d.metal, i3.metal, etc) can range anywhere from $0.41 per hour to $11 per hour depending on the kind of resources you're looking for. https://aws.amazon.com/ec2/pricing/on-demand/
You only pay for the bare metal instance, AWS doesn't care (or even know) how many VMWare servers you're running, all they see is the bare metal EC2 instance that's running.
EC2 pricing works similar to how you described, you're only charged for the time that an instance is running.
Can I ask what your plans are? Why use VMware instead of just using EC2? That would be likely more expensive and you'd lose a lot of the advantage of using AWS native services
Hi - thanks for joining today! On Demand instances always take precedence over Spot regardless of price. Even if your Spot max. price is set higher than On Demand, On Demand will still take priority. Check out this blog post to get a better understanding of the Spot pricing model: https://aws.amazon.com/blogs/aws/amazon-ec2-update-streamlined-access-to-spot-capacity-smooth-price-changes-instance-hibernation/
Regarding Cloudwatch, Cloudwatch events will trigger for every Spot instance interruption with the 2-minute warning. Keep in mind, interruptions happen less than 4% of the time. Check the Spot Instance Advisor page for average interruption rates per instance type: https://aws.amazon.com/ec2/spot/instance-advisor/ - Stephanie
Well, if your "cost" is measured in "time", then yes. But once you start looking at money, the figures get way out-of-whack.
Say an engineer costs you around $150k annually (we're talking salary, benefits, office space, etc.); that five minutes might be $6.25. A pretty wimpy AWS compute machine might run you something like $0.05/hour[0]. It takes 125 hours of instance time to reach that $6.25, so saving 0.001s of runtime will take 450,000,000 invocations to break even.
That better be a really tight loop for your engineer to think about the cost/benefit there.
https://aws.amazon.com/ec2/instance-types/#instance-type-matrix
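Spelling out that arithmetic (the 2,000 working hours per year is my assumption for the salary math):

```python
engineer_cost_per_year = 150_000
working_hours_per_year = 2_000           # assumed: ~40 h/week, 50 weeks
instance_cost_per_hour = 0.05            # the "wimpy" instance above
time_saved_per_call_s = 0.001

optimize_cost = engineer_cost_per_year / working_hours_per_year * 5 / 60
hours_to_match = optimize_cost / instance_cost_per_hour       # 125 hours
calls_to_break_even = hours_to_match * 3600 / time_saved_per_call_s
print(f"${optimize_cost:.2f} -> {calls_to_break_even:,.0f} invocations")
# $6.25 -> 450,000,000 invocations
```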
/r/quityourbullshit -- 3.3GHz max on the current generation, according to Amazon's own documentation, and that's on their smallest instance size. The big top-tier compute instances of the most current generation have an insane number of cores, and clock at 2.8 tops.
~~Come back with cited sources rather than internet hearsay, kid.~~
Edit: Scroll down, sufficiently counter-smack-downed. I am the shitlord.
META: One tends to not google things that they refer to about twice a week.
> Uploading files is only a small cost
They get from $50/TB to $90/TB for your uploads.
When you upload data to Amazon, they get to offset that in their peering agreements. As in: it costs them only a fraction, but they still sell the outgoing traffic at the same prices. In other words, they get paid when you upload stuff. The line items where they bill are: Data Transfer OUT From EC2 To Internet, and Data Transfer OUT From Amazon S3 To Internet.
Now think about people suddenly downloading their 10TB or 100TB of stuff OUT from Amazon to move it across the Internet to GDrive. Not only don't they get to offset that traffic, now they actually have to pay for it.
Honestly, I'd say they fucked up big time.
You cannot launch EC2 instances directly from VMDKs stored in Glacier; you must run your images through VM Import (which works from S3) first. Also check out VMware Cloud on AWS.
>Then you should definitely have don't inkling
:|
>absolutely massive amount of work and money required to scale for this amount of users
It's actually really not that bad. "On-Demand instances let you pay for compute capacity by the hour with no long-term commitments."
https://calculator.aws/#/createCalculator/EC2DedicatedHosts
An M4 is $2.42 USD/HR in the USA, more elsewhere.
Assuming you're using containers, which would be a good idea if you want to scale dynamically.
https://aws.amazon.com/ecs/pricing/ https://aws.amazon.com/ec2/instance-types/ Even as container-only, that m4.4xlarge is $0.80/hr.
This is the second time in a week you've tried to bullshit your way and "Um ACKAKCHSUALLY" me and failed.
Go away.
Not sure about the C4, but you can deduce a few things about the C5 instances. Info below collected from https://aws.amazon.com/ec2/instance-types:
Cross checking that against the Skylake processors on Wikipedia I find the Xeon Platinum 8168 with 48 threads and all-core turbo of 3.4/3.7 GHz. Two of these match the C5.metal specs. The TDP of one of them is 205 W, so a *full* C5.metal should use about 410W plus a bit of overhead.
One c5.metal fits 24 c5.xlarges, so a single c5.xlarge should do about 17 or 18W under full load.
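As a sanity check on those numbers:

```python
tdp_watts = 205          # Xeon Platinum 8168, per socket
sockets = 2              # c5.metal = dual socket
xlarges_per_metal = 24   # 96 vCPUs / 4 vCPUs per c5.xlarge

print(tdp_watts * sockets / xlarges_per_metal)  # ~17.1 W per c5.xlarge
```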
Currently, Ethereum runs at around 15 transactions per second because of fundamental constraints of Nakamoto consensus. In fact, the main reason block times exist is that Nakamoto consensus is a synchronous consensus protocol, whereas Snowman consensus, which is used in Athereum, is asynchronous. In Athereum, there is no minimum block time. This means that Athereum can process transactions as fast as the EVM/database can support. With our current tests, this puts the number around 200-300 tps. However, we have found some places in go-ethereum (which was the basis of the Athereum subnet) which might be able to boost that quite a bit higher; that is still a work in progress. The block sizes in Athereum are unchanged from Ethereum's.
For your question about the minimum AVA amount to stake: this is controlled via an on-chain governance mechanism. However, the only purpose of having a minimum amount is to prevent a DoS point on the network. One of the amazing benefits of using Snowman-based consensus is that we do not need to limit the size of the staking set to achieve fast consensus.
For hardware/bandwidth: we are currently running the Athereum deployment on c5.large AWS instances. So you can read the exact specifications of the current network nodes here.
Edit: link formatting
This is not economical based on what I understand about their server architecture.
Basically, every game runs on a single, dedicated virtual machine in AWS. This machine instance is re-used multiple times. Each time a lobby is ready, it is likely being matched to an available instance (e.g. an instance that just launched or ended a previous game). When you are in lobby, you are connected to this dedicated instance.
From what I understand, the instance type they are using is the C4 series (c4.8xlarge, due to the 10Gbps networking on this size). At reserved cost, it's ~$1/hr for a Linux instance.
So think about it: let's say an average game lasts 20 minutes and it takes an average of 1 minute to fill the lobby. If there are 1000 concurrent games at a given time, that is the equivalent of an extra 50 game sessions. It may not necessarily translate to an extra 50 servers, but you get the idea. At peak hours, the hourly server cost may be in the thousands of dollars. Bottom line is that queuing players into game server has a server cost associated with it. To minimize server costs, the optimal strategy is to match as many players as possible before connecting the pool to a server instance.
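A quick sketch of that overhead estimate:

```python
match_minutes = 20        # average game length
lobby_minutes = 1         # average time to fill a lobby
concurrent_games = 1000

# Lobby time as a fraction of match time, scaled across all games:
extra_sessions = concurrent_games * lobby_minutes / match_minutes
print(extra_sessions)     # 50.0 extra game sessions' worth of capacity
```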
On the other hand, what we know about Creative is that multiple instances share a single server (hence the memory restriction). So it's far more economical for Epic to queue players into some "shared" instance rather than the actual -- likely significantly larger and more expensive -- game server instances.
The EC2 pricing page is your friend there:
https://aws.amazon.com/ec2/pricing/on-demand/
Depends on the region, whether you use spot or on demand, etc.
Keep in mind it may be interesting to see if you can run these things with Lambda(s), though that depends on how much CPU / memory you need and how long the jobs will run. (Lambdas run for a max of 5 minutes, and the CPU scales with the memory you select.)
By computing platform, are you talking about EC2?
Because EC2 definitely has several "hardware" choices that you can use. They have different core counts, different clock speeds and different memory amounts.
And there are a metric shit ton of AWS products out there. This is not "one size fits all".
This URL format is used by AWS (Amazon Web Services), one of the largest providers of cloud infrastructure in the world, and to be more precise, by servers running on their EC2 platform (Elastic Compute Cloud). What that means is that Mozilla is running some servers on their platform, and that your browser is talking to one of them, that's all.
Do not confuse Amazon Web Services with Amazon the retail company; they are two very distinct branches of the same company, and their business models are entirely different. You have nothing to fear from AWS regarding your privacy.
I myself have used their platform extensively for a lot of my development work and have always found them to be a very good provider.
There are two ways:
Use MATLAB's cloud center: http://www.mathworks.com/products/parallel-computing/matlab-parallel-cloud/. You configure via Mathworks' website what Amazon compute instance you want. You can choose ones that have GPUs attached. This is very simple to do: the only downside is that you pay both Amazon (for using their computers) and Mathworks (for the convenience).
A second, slightly cheaper and more complicated option is to set up your own Amazon instance: https://aws.amazon.com/ec2/instance-types/. This works like remote desktopping into any standard computer. You can install anything you want on your instance, including Matlab, and you can dynamically change your instance type. The cheapest way to proceed is to use a cheap instance to install all your software, and then switch to the more expensive GPU instance when you want to use it. I also install Dropbox on my instance, so all my code syncs whenever I log in. There's a small learning curve to setting this up, but once it's running, it's painless. It's also cheaper, since you do not have to pay Mathworks as in the first option.
Hope this helps.
A couple real-world examples:
Looks like there is a discrepancy about the memory sizes. The pricing page says 128 GB for the c5.12xlarge and 256 GB for the c5.24xlarge (and c5.metal), but the blog post says 96 GB for the c5.12xlarge and 192 GB for the c5.24xlarge (and c5.metal).
I'm running top on a c5.12xlarge and I'm showing 96 GB.
If you have "Unlimited Mode" on, when you run out of credits it will maintain performance and just charge you. If this is the case you can see it on your bill (Elastic Compute Cloud->Region->Amazon Elastic Compute Cloud T2CPUCredits). Pricing details are: https://aws.amazon.com/ec2/pricing/on-demand/#T2/T3_Unlimited_Mode_Pricing
I would suggest filtering on the tag you've got grouped by ("customer:1049"), then group by "Usage Type", and switch the view from Monthly to Daily. You'll then be able to see what is the cost that is varying.
https://aws.amazon.com/ec2/pricing/on-demand/
I think I'd take the EPYC (m5a) instead? The 4-core EPYC is probably 2-core / 4-threads, while the ARM is 8-real (but slower) cores. I dunno, the pricing seems to be relatively weak, unless you really benefit from lots of little-cores (ex: video servers).
You'd want to pay for AWS. The other issue you have is that when your customers try to download your large files, a home server will most likely be very slow. Having a dedicated Amazon server lets you choose host locations closer to your customers, making things faster.
If you're using a GPU to speed things up you'll want to pick one that specialises in CPU/GPU balance.
I'd give this page a good read to understand what each of the classes of EC2 instances offers, then pick some candidate systems and run some benchmarks to see what works best for you (to get the best value for money).
... and when you're not using it (assuming it's going to be a beefier server), stop the instance.
That certainly isn't the expected behavior; at the very least you should be receiving the two-minute warning. Over 95% of the time customers self-terminate instances (i.e. they aren't interrupted at all). Are the instances you're launching showing up as >20% interruption rate on the Spot Instance Advisor? https://aws.amazon.com/ec2/spot/instance-advisor/
First, it's not just traffic. Those are requests that your server must answer, so it's CPU usage, DB access, etc. Caching can help, of course, but not always.
But I don't get what you mean by "AWS doesn't charge you on simultaneous traffic". Nobody said the issue was with it being simultaneous to anything. AWS bills you based on the amount of traffic in and out of your instances. You can see an example on the EC2 (the most popular service) pricing page: https://aws.amazon.com/ec2/pricing/on-demand/. So more traffic means higher bills.
You can try out AWS EC2, the free tier server is pretty much free for a year, and if you get more traffic you can always upgrade.
It does require some knowledge of how things work though.
Docker doesn't really help here because the host EC2 instance (server) would still be running, even if the teamspeak software was stopped when the last client left. EC2 charges by the second that the instance is online, regardless of what software is running, so you need to shut down the instance to save money.
EC2 does have an API to allow programmatic control of instances (servers), so you could write a program to periodically poll teamspeak and shut down your instance if no one is online (assuming you're not using the instance for anything else), as sketched below.
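The polling program could be as simple as this sketch; the instance ID is a placeholder, and the TeamSpeak check is left as a stub since it depends on your ServerQuery setup:

```python
import boto3

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder: your EC2 instance ID

def teamspeak_is_empty() -> bool:
    # Stub: query TeamSpeak's ServerQuery interface (e.g. "clientlist")
    # and return True when no clients are connected.
    raise NotImplementedError

def main():
    ec2 = boto3.client("ec2")
    if teamspeak_is_empty():
        # Stopping the instance stops the per-second billing too.
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])

if __name__ == "__main__":
    main()
```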
Bringing it back up is more difficult, because if the server is currently down there's nothing to poll, and nothing for the first user to connect to. If there's some other way you can tell when to start up (a webpage with a "start my teamspeak instance" button, an email handler, or an API call to the game to tell when someone from your list of users is online), then you could run that somewhere (not on your EC2 server, because that will be turned off at this point - perhaps AWS Lambda?) to periodically check and boot the EC2 instance & teamspeak when required.
It's probably quite a bit of work to write yourself and get it all working reliably though if it's just for personal use. Have you considered using / trying a smaller, less powerful type of EC2 instance to save money?
Kind of. The EC2 instance types page lists the network performance of each type (scroll down about 1/3 of the way, to the last table).
However, for anything less than 10Gbit, the instances are only rated as "low", "moderate" or "high". The values of these ratings vary by instance type and instance generation, but very roughly, low =~ 50Mbit/sec, moderate =~ 500Mbit/sec and high =~ 1Gbit/sec.
There's a good article from a few years ago that did some investigation and breakdown.
Oh certainly. And to clarify further: dedicated hosts are not the same thing as dedicated instances.
But my point was to clarify that with normal instances you don't have to pay the (expensive) SQL Server hourly rate if you have a SQL Server license. You just transfer your license over and pay the Windows rate.
The cost is going to be largely variable, dependent on what you use to host, and what you can pay upfront.
You almost certainly don't want to go the traditional route of putting hardware in a datacenter, that's wicked expensive.
More likely, look into Amazon Web Services. You can get a year's free trial to learn how to use it, and you can do an extraordinary amount of things for literally pennies an hour.
A small server on AWS, if you contract for a year, will cost you something like $137, and the bandwidth costs for their S3 service are around 9 cents per GB out to the internet.
Basically you can get started for very little money. Even if you don't want to put up the money to buy reserved server time, you can do on demand, play around with it, and it will cost you in the tens of dollars per month, less if you don't run the server 24/7.
You can also set things up to be elastic to demand, so if more people are playing, you spin up more servers, and as demand falls off, servers shut down. It's pretty nice.
Yeah... anywhere from a few pennies to a handful of dollars per day.
https://aws.amazon.com/ec2/pricing/ (should be noted they're on Google's cloud; AWS's pricing page is just easier to read without experience, imo)
The issue isn't server cost, it's poor engineering
Have you looked into Amazon Web Services? It lets you run R on their cloud servers.
Here's a tutorial for how to do this. It's less complicated than it looks: https://www.youtube.com/watch?v=NQu3ugUkYTk
Pricing is pretty reasonable: https://aws.amazon.com/ec2/pricing/
>It’s not reserving capacity
Unless you choose the option to specify an AZ, then it does reserve capacity:
https://aws.amazon.com/ec2/pricing/reserved-instances/
>If an Availability Zone is specified, EC2 reserves capacity matching the attributes of the RI. The capacity reservation of an RI is automatically utilized by running instances matching these attributes.
>
>You can also choose to forego the capacity reservation and purchase an RI that is scoped to a region
Realistically, the biggest hurdle there is actually getting the source code to an app. I've built and run a couple of apps from source (one was to get around AT&T's ridiculous hot-spot restriction a decade ago). These days most of that process could be automated in AWS or MacStadium for just a few bucks.
It's highly unlikely Epic or any other money-hungry developer would release their source to get around Apple's restrictions, and there's no way I would trust sideloading any of their apps unless they did. The loot boxes have already proven to me they will treat their customers like crap for a dollar.
Apple's walled garden isn't perfect, but for the most part they have earned my trust, and I really like that I don't have to worry about my kids messing up their phones with malware, spyware or adware.
From https://aws.amazon.com/ec2/instance-types/t3/
"T3a instances feature the AMD EPYC 7000 series processor with an all core turbo clock speed of up to 2.5 GHz. T3a offers up to 10% savings for customers who are looking to further cost optimize their Amazon EC2 compute environments."
The pricing in your article from 2010 is most likely out of date. However, even if the article were recent, both companies negotiate enterprise pricing, and in the quotes I have seen, Oracle was more expensive. Either way, if you want to use either option you are going to pay for it.
Here is a nice video explaining ROCm: https://youtu.be/5D-k1XWW4ys
Thanks for the feedback. We recommend you run fault-tolerant workloads on Spot. You can also use tools like Spot Fleet or EC2 Fleet with as many instance types and AZs as possible, so that Fleet can launch new instances automatically upon interruption. Check out this post where Clemson University got 1.1M vCPUs on Spot by using Spot Fleet with a diversified strategy: https://aws.amazon.com/blogs/aws/natural-language-processing-at-clemson-university-1-1-million-vcpus-ec2-spot-instances/. You can also refer to the Spot interruption metrics here: https://aws.amazon.com/ec2/spot/instance-advisor/
Yes, it is very affordable, and when you sign up you should get some credits for free. If you are still a student you can get additional credits for free by signing up for the GitHub student package. Use Spot instances, which let you bid on computational time; pricing here: https://aws.amazon.com/ec2/spot/pricing/. For example, the r4.8xlarge is currently $0.23 per hour for 32 cores and 244 GB of RAM, though this will fluctuate.
It is pretty easy to get AWS running even if you aren't very computationally inclined. If you have your pipeline in a workflow language already, check whether it supports provisioning AWS resources, as that will make it even easier.
You are talking about their generic tiers.
https://aws.amazon.com/ec2/instance-types/ shows most of the specs. Aside from the T2 tier, they list the exact CPUs. If they are using the T2 tier, then yeah, that's one of the many problems, since those are designed for burst performance. The better tiers have plenty of power to handle 100-man servers. I will agree that the network performance of the instances might be one of the issues, as I've only ever benchmarked throughput (which is very good, especially when you get to the tiers running on 10Gbit) and not latency.
I'm not sure which games are or aren't using systems like this, but you'd be surprised; most games are probably just sitting on idle servers. (Normally the server build of the game is restricted to a single core and consumes few resources, so you can stuff many of them onto the same machine.)
You want to use something like https://aws.amazon.com/ec2/pricing/ or https://azure.microsoft.com/en-gb/ or https://cloud.google.com/compute/ - each of them have APIs to let you spin up servers.
They can only go where Amazon goes. What they use is an Amazon Web Services product called Elastic Compute Cloud, or EC2. I'm going by memory here, but I'm pretty sure this is how the servers were set up last time I checked. I might have USSouth and USMidwest backwards. http://i.imgur.com/dxctabq.png
You'll notice some gaps: they don't use (at last check) either Brazil or Australia servers. Why? They're twice as expensive as the others. My guess is Kabam did the math and the number of players in these areas who would benefit from a more geographically local server wasn't enough to justify the additional expense. In other words, not enough paying players in these areas. I'm sure Deca will revisit this now and then and make adjustments as necessary, but for now I'm almost certain they're going to stick with the last configuration set up by Kabam.
Side note: when the game uses your "Best Server", it computes your location based on your IP address, and connects you to a given server based on a set of rules. This server doesn't necessarily mean it's the closest to you, but rather everyone in your geographical area will experience more or less the same performance (lag, etc).
T3 instances are overprovisioned though, and the docs state specifically that the host has 48 physical cores, which is almost certainly 2x 24-core Xeon CPUs for the T3 instance family. Compare with the M5 dedicated host, which can support exactly one m5.24xlarge or m5.metal with 96 vCPUs (threads) on the same 48 physical cores.
Look here for the actually important info: https://aws.amazon.com/ec2/instance-types/dl1/
The only instance available costs $13 per hour. You need to already be rich to use this.
So I would recommend AWS, but pretty much any cloud provider will do.
Note: If you have your code in git this is a fairly easy process. If not, you're gonna have to fiddle with ssh/scp/sftp to get your code onto the system.
Steps:
>t3 xlarge
https://aws.amazon.com/ec2/instance-types/t3/
It's an Amazon AWS instance type.
Maybe the point is to build a PC that is at least as powerful as one of those Amazon instances? So you could run your shit on the PC instead of AWS or something?
You would not generally use spot instances for running a website without downtime. They can be terminated on you.
Maybe look at Lightsail?
Those are good if you've got stuff that runs in parallel but the Z1D has better single core performance, which is what you'd want if your workflow is single threaded.
https://aws.amazon.com/ec2/instance-types/z1d/
The M5Zn clocks faster, but I'm not sure if it performs better - you may have to benchmark and see.
Cost = hourly rate × 24 hours × 365 days
Instance Type | Cost per year |
---|---|
a1.medium | $223.38 |
a1.large | $446.76 |
a1.xlarge | $893.52 |
a1.2xlarge | $1,787.04 |
a1.4xlarge | $3,574.08 |
a1.metal | $3,574.08 |
t4g.nano | $36.79 |
t4g.micro | $73.58 |
t4g.small | $147.17 |
t4g.medium | $294.34 |
t4g.large | $588.67 |
t4g.xlarge | $1,177.34 |
t4g.2xlarge | $2,354.69 |
t3.nano | $45.55 |
t3.micro | $91.10 |
t3.small | $182.21 |
t3.medium | $364.42 |
t3.large | $728.83 |
t3.xlarge | $1,457.66 |
t3.2xlarge | $2,915.33 |
I think the prices for small to mid-size jobs still favor doing it locally. For example, let's compare a single V100 to a 3090 rig. A V100 and a 3090 have similar performance; I'd guess within ~±10% for most loads.
For a V100 with 16GB of memory on AWS it’s ~$3 per hour on demand and ~$1 per hour to reserve one. https://aws.amazon.com/ec2/instance-types/p3/
So, let’s say a 3090 rig is $3000 in total. If you on demand at $3 the break even point is 1000 hours or 42 days of training. If you choose to reserve then it’s 3000 hours or 125 days of reserved time.
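The same break-even math as a snippet (all figures are the assumed ones above):

```python
rig_cost = 3000                     # assumed all-in 3090 rig price, $
print(rig_cost / 3.0 / 24)          # on-demand V100 @ $3/hr: ~41.7 days nonstop
print(rig_cost / 1.0 / 24)          # reserved V100 @ $1/hr: 125 days
```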
That said the simplicity and flexibility of cloud computing can be a major bonus and there are some tasks which aren’t feasible to set up locally.
Spot instances are cheap because you're buying leftover capacity from AWS, but they can turn your server off at any time. Probably not a good choice for algotrading where you want to be online 24/7. You'd have to use on-demand then. I use AWS for work and looked into it, but decided not to do it in the end because they are so expensive. For businesses that might be worth it because they also have a ton of enterprise features built in, but you won't need those as a single individual. I have a VPS with OVHcloud, but I've heard good things about digital ocean too.
I haven't heard of any tools for that, but you can use an instance type that has a widely fluctuating spot price, such as c1.xlarge in the us-east-1d availability zone. You can use the spot pricing history to check the current price and whether it's rising. To force the spot reclaim to initiate faster, set your price to exactly the current amount; since the spot price for c1.xlarge is rising, you'll get a spot reclaim notice relatively fast.
You can also check on https://aws.amazon.com/ec2/spot/instance-advisor/ to get an instance with the highest chance of termination
Looks pretty similar to AWS machine types. Couldn't find the exact match: https://aws.amazon.com/ec2/instance-types/g4/?nc1=h_ls
The machine type that GFN uses is likely custom (1GPU, 4/6/8 vCPU, whatever memory configuration).
I guess you could use AWS, because it says something about "a m5.large (https://aws.amazon.com/ec2/instance-types/m5/) in AWS"... that would sure be better than tying up your home connection 24/7, I'd think...
The problem with something like AWS is that you still need a good enough graphics card to render (unless I've been taught wrong), even if you increased memory. (I think) there should be some Windows images available on AWS; if so, get one and look into Adobe Media Encoder. You will still need to find a way to upload footage, though.
Yeah.. cuz this math is definitely horseshit. If "all in compute costs" were <$.01 per hour... we would have a completely different story playing out in public cloud.
Look at this: https://aws.amazon.com/ec2/pricing/on-demand/
Amazon charges $.03 to $6.53 per hour just for the compute costs of a server. That doesn't include storage, licensing, bandwidth egress (huge for Netflix), support, load-balancing, DNS, security, SNS costs, etc. The list goes on and on. And, EC2 instances are cheaper than a lot of customers can do internally.
I don't know how much of Netflix's application runs on EC2 vs. K8s/Docker... but since they are older than most streaming companies, I wouldn't be surprised if a lot of it is still plain EC2 instances. And I don't have any more reason to believe that Netflix uses 1 server to service 10 customers than to believe they use 10 instances to service 1 customer. Actually, their application is probably a bunch of micro-services streaming individual components to a single session.
If I had to bet... the 4 miles estimate was a lot closer than the $.01 estimate. But, whatever helps everyone feel good, I suppose.
If you can virtualize the ERP system already, there's nothing stopping you from moving it to Azure or AWS. Converting a VMDK to something that can boot on AWS (for example) is a pretty well understood process: https://aws.amazon.com/ec2/vm-import/
I just went through Snowflake's pricing, but I'm not sure if it's too expensive or if my math is completely off.
The Snowflake pricing page says that for AWS / US West it's $2 per credit. Their credits explanation page says that an x-small cluster (i.e. 1 server) for 1 hour would be 1 credit. So that's $2/hr for 1 server. On the AWS pricing page, a t3.medium is $0.04/hr, and a m4.10xlarge (40 vCPU, 124.5 GB RAM) would be $2/hr. Are they just using big machines, is it overpriced, or is my math wrong?
I'd factor in cost savings with Reserved Instances. This will be a big saver if you can't run any auto-scaling/spot instances.
More info here: https://aws.amazon.com/ec2/pricing/reserved-instances/
All the instance types are optimized for different things. You're asking a vague question.
First, figure out if you need CPU, memory, or something else. Figure out if you can scale horizontally instead of vertically. In other words, do you have a distributed load, and can you add more cheap machines instead of making the machines you have bigger and more expensive?
Read through this and try to learn about instance types: https://aws.amazon.com/ec2/instance-types/
Try to get some metrics on what your bottlenecks are, and then try a few instance types to see what you get. The bottom line is cost, so once you find a few different instance types that work for you, use the cost calculator to figure out which one makes the most sense: https://calculator.s3.amazonaws.com/index.html
You might find that the best instance type for your use-case also costs $20k a month, and you might want to work a little less efficiently for $200 a month.
I always try to be clear and upfront about mining, and if people are still afraid, that's on them for not understanding. (I legit even tell them how to block it.)
I won't give exact figures, but streaming to a small number of people at low quality is easy. At higher resolutions and audience sizes, though, you are sending more data and need additional servers. As an example, you can see AWS data transfer prices at the bottom.
It's a lot worse than people think. EC2 has labeled these A1, so they are the first instances people see on the instance-type and instance-pricing pages. On the instance-type page Amazon boldly claims that these are significantly cheaper - a high-level employee said roughly 33-40 percent cheaper.
With these instances you get a physical core, where in a comparable price range you can otherwise only get a time slice of a core.
Instance page https://aws.amazon.com/ec2/instance-types/
The other important impact is that all clouds today have the same offerings. This is the first truly differentiated offering for EC2.
I'm not an expert but I believe it's similar to Amazon's EC2 G3 servers, [here](https://aws.amazon.com/ec2/instance-types/g3/).
You will still need a GPU because the users will need graphics drawn and streamed onto their own monitors at home.
I might be wrong, but thanks for replying! Much appreciated
I should hope so; that's the entire selling point of the c5d instances.
"Amazon EC2 C5d instances deliver C5 instances equipped with local NVMe-based SSD block level storage physically connected to the host server."
https://aws.amazon.com/about-aws/whats-new/2018/05/introducing-amazon-ec2-c5d-instances/
If they only turned on the instance for the test and then shut it off as soon as it was finished, probably not much. AWS on-demand pricing for that instance is only $3.06/hr.
Every instance has Cloudwatch metrics including status checks. You can create alarms based on that.
You should try to operate your application so you can automate recovery. Two great options that let AWS automatically bring up new instances in the event of a failure are Autoscaling Groups of one or more instances and EC2 Autorecovery for single instances.
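For the single-instance case, here's a rough boto3 sketch of wiring a status-check alarm to the EC2 recover action; the instance ID and region are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm on failed system status checks and trigger EC2 auto-recovery.
cloudwatch.put_metric_alarm(
    AlarmName="autorecover-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,                 # evaluate every minute
    EvaluationPeriods=2,       # two consecutive failures
    Threshold=0.0,
    ComparisonOperator="GreaterThanThreshold",
    # The documented auto-recover action ARN for EC2:
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```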
For the on-demand price of an F1 instance, go to this page, choose a region where F1 instances are available, make sure the Linux tab is selected, and scroll to the bottom. In US West (Oregon), you'll see that the cheapest F1 instance is $1.65 per hour. (F1 instances are Linux only, so if you're looking under Windows, the instances don't show up. Also they only show up in certain regions, such as US West (Oregon).)
You can attach an Elastic GPU, but only at instance creation. You would need to essentially terminate your existing instance and launch a new one with the GPU attached, then terminate it when you're done. For that reason it may be better to just spin up a GPU-based instance for the work.
That said though, if you are working with C5.2xlarge instances only because you are reserved or something... it is possible.
Amazon is pretty clear about this on their EC2 pricing page.
https://aws.amazon.com/ec2/pricing/on-demand/
Ingress into EC2 and S3 is free.
Egress from EC2 to S3 is free if the S3 bucket is in the same region.
Indeed - I’ve seen vSphere in almost all of the enterprises I’ve worked in. Most AWS instances are still Xen though - AFAIK only the C5 and M5 instance types are running on their version of KVM (the “Nitro Hypervisor”). Google Compute Engine (GCE) uses KVM though
It seems to me that it would almost always be worthwhile to rent GPU time rather than buy a GPU. An Nvidia K80 looks like it would cost ~$4000. Amazon's p2.xlarge costs $0.90/hr (or less with spot/RI). That means you would need > 4,500 hours of usage for buying to be worthwhile. If you were going to use the GPU for 3 years, you would have to average > 4 hours a day of running ML experiments, which I personally don't do, though I imagine some do.
AWS also do something called reserved instances, think of them like mobile phone contracts. You pay upfront but receive a fairly large saving, plus if you're looking to move to a different host you can sell the remainder: https://aws.amazon.com/ec2/pricing/reserved-instances/
If it isn't a container but a true VM on Proxmox, you should be able to follow this: https://aws.amazon.com/ec2/vm-import/. The other way would be to create a RHEL VM on AWS, then create a backup of the Proxmox VM and migrate it over that way; something like a tar backup can do it.
Pricing is up now, but for the lazy:
Instance | vCPU | ECU | Memory (GiB) | Storage | Price
---|---|---|---|---|---
c5.large | 2 | 8 | 4 | EBS Only | $0.085 per Hour
c5.xlarge | 4 | 16 | 8 | EBS Only | $0.17 per Hour
c5.2xlarge | 8 | 31 | 16 | EBS Only | $0.34 per Hour
c5.4xlarge | 16 | 62 | 32 | EBS Only | $0.68 per Hour
c5.9xlarge | 36 | 139 | 72 | EBS Only | $1.53 per Hour