
Comparing performance of AWS m7 CPUs

AWS, tech · 5 min read


First things first: this post is not about picking the best CPU in the AWS m7 generation according to my personal opinion or taste, or about whether I'm team red, team blue, or whatever colour Graviton/AWS represents. We will look at a few test results that I ran and draw a conclusion from them.

One day I had to spin up a new instance to host a few Node.js applications for internal use. The next thing I thought was: what CPU should it use? Well, I asked for a suggestion and was told to use brand X, because brand Y is slower and brand Z is not supported by our AMI architecture. The last statement is beyond dispute, but having followed the computer hardware world for a while now, I couldn't take the first one for granted, hence this post. I tried to find some actual numbers online, but it wasn't straightforward. Why? I don't know; to me it seems like a very important question to have a definite answer to. AWS itself says that m7(x) is faster than its predecessor by some percentage, but I wanted to know which of these CPUs is the fastest, as simple as that.

Now, what about the performance testing tools? They are very simple: the sysbench CPU test as a synthetic benchmark, and Apache Bench as a stand-in for a real-world use case. Both are quite simple and won't take much time to configure. In my scenario I only need 2 cores per CPU. The rest of the hardware, such as RAM size and speed, is the AWS default for that core count (8 GB for each CPU model), and all instances used the same EBS volume type. Now let's see which m7 instance types I picked for the comparison:

  • m7a.large (Two 2.6GHz AMD EPYC 9R14 Processors)
  • m7i.large (Two 2.4GHz Intel Xeon Platinum 8488C Processors)
  • m7i-flex.large (Two 2.4GHz Intel Xeon Platinum 8488C Processors)
  • m7g.large (Two 2.6GHz ARMv8 AWS Graviton3)

All configurations besides Graviton ran the same Linux distribution and kernel version; for Graviton I used Ubuntu 22.04 LTS. The testing tool configuration was the same across all of them as well.

Sysbench:

A quick disclaimer about sysbench: it is a tool that can run several types of synthetic tests, one of which is designed for comparing CPUs. It doesn't represent performance in a real-world application, so take the results with a grain of salt. I used the following command to run it:

sysbench cpu --cpu-max-prime=20000 --threads=X --validate run

Where the threads parameter is either 1 or 2.
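The score compared in the table below is the "events per second" figure that sysbench prints. Here is a minimal sketch of extracting it automatically, assuming sysbench 1.0+, whose CPU test reports an `events per second:` line; the `extract_score` helper name is my own, not part of sysbench:

```shell
# Hypothetical helper: pull the "events per second" value out of
# sysbench's CPU test output, which contains a line like
# "    events per second:  1145.18".
extract_score() {
  awk -F': *' '/events per second/ { print $2 }'
}

# Intended usage (commented out so the sketch stands on its own):
# for t in 1 2; do
#   sysbench cpu --cpu-max-prime=20000 --threads="$t" --validate run | extract_score
# done
```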

Table of results:

| Instance Type  | CPU                       | Threads | Score   | %      | Notes                       |
|----------------|---------------------------|---------|---------|--------|-----------------------------|
| m7a.large      | AMD EPYC 9R14             | 1       | 1686.92 | +47.31 |                             |
| m7a.large      | AMD EPYC 9R14             | 2       | 3325.46 | +45.23 |                             |
| m7i.large      | Intel Xeon Platinum 8488C | 1       | 1217.11 | +6.28  |                             |
| m7i.large      | Intel Xeon Platinum 8488C | 2       | 1272.01 | -44.45 | didn't scale with 2 threads |
| m7i-flex.large | Intel Xeon Platinum 8488C | 1       | 1244.41 | +8.67  |                             |
| m7i-flex.large | Intel Xeon Platinum 8488C | 2       | 1290.79 | -43.63 | didn't scale with 2 threads |
| m7g.large      | AWS Graviton3             | 1       | 1145.18 | 0      |                             |
| m7g.large      | AWS Graviton3             | 2       | 2289.7  | 0      |                             |

A few things to notice:

  • AWS Graviton3 is used as a base for both test cases
  • for some reason, using 2 threads with the Intel CPUs didn't provide any performance gain; it could be a configuration issue and can be disregarded, but it is interesting to mention
  • m7i-flex.large has the same CPU as m7i.large; the only difference is that AWS positions it as more cost-effective, since it doesn't maintain its full performance potential all the time but rather bursts to it only when needed.
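For reference, the percentage columns throughout this post are plain deltas against the AWS Graviton3 baseline score. The arithmetic can be sketched like this (the `pct_vs_baseline` helper is purely illustrative, not something used in the original runs):

```shell
# Hypothetical helper: percentage difference of a score against the
# Graviton3 baseline, rounded to two decimals as in the tables.
pct_vs_baseline() {
  awk -v v="$1" -v base="$2" 'BEGIN { printf "%+.2f\n", (v - base) / base * 100 }'
}

# e.g. the m7a.large single-thread score against the Graviton3 baseline:
# pct_vs_baseline 1686.92 1145.18
```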

Apache benchmark:

Now, let's run a test that reflects a real-world scenario: hosting a web server. For that purpose I installed nginx but didn't touch its config at all, as the goal here is not to squeeze out every possible extra request per second, but rather to check how each instance behaves under the same load with the same configuration. I sent 10000 requests, 100 of them concurrently:

ab -n 10000 -c 100 http://localhost:80/
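The numbers in the table below come straight out of ab's summary report. A small sketch of pulling them from a run, assuming the ab shipped with apache2-utils, whose output contains `Requests per second:` and `Transfer rate:` lines; the `extract_ab_metrics` helper name is mine:

```shell
# Hypothetical helper: grab the numeric values from ab's summary lines,
# e.g. "Requests per second:    31358.97 [#/sec] (mean)" and
#      "Transfer rate:          26152.89 [Kbytes/sec] received".
extract_ab_metrics() {
  awk -F: '/Requests per second|Transfer rate/ {
    split($2, a, " ")   # a[1] is the number, the rest is units
    print a[1]
  }'
}

# Intended usage (commented out so the sketch stands on its own):
# ab -n 10000 -c 100 http://localhost:80/ | extract_ab_metrics
```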

Table of results:

| Instance Type  | CPU                       | Requests per second | Transfer rate (KB/s) | %     |
|----------------|---------------------------|---------------------|----------------------|-------|
| m7a.large      | AMD EPYC 9R14             | 46916.86            | 39127.93             | +49.6 |
| m7i.large      | Intel Xeon Platinum 8488C | 34119.10            | 28454.79             | +8.8  |
| m7i-flex.large | Intel Xeon Platinum 8488C | 33409.73            | 27863.19             | +6.5  |
| m7g.large      | AWS Graviton3             | 31358.97            | 26152.89             | 0     |

Again, AWS Graviton3 was used as the baseline for the percentage comparison, as it has the lowest score. Not much has changed since the sysbench run: the relative results are pretty much the same, with a few percent difference here and there.

On-Demand hourly rate:

Let's also compare current pricing plans for these instances.

| Instance Type  | Price per hour | %      |
|----------------|----------------|--------|
| m7a.large      | $0.1292        | +41.98 |
| m7i.large      | $0.11235       | +23.46 |
| m7i-flex.large | $0.10673       | +17.29 |
| m7g.large      | $0.091         | 0      |

A few things to notice:

  • AWS Graviton3 is a baseline again.
  • Region is eu-west-1

Conclusion:

When I first looked at the test results, the conclusion seemed simple. But with the running cost weighed in, it became less obvious. m7a.large and m7g.large look very close in terms of performance-to-price ratio: it seems fair to pay almost 42% more for a CPU that is about 48% faster. m7i.large? It's barely faster than m7g.large and loses to both other competitors in price-to-performance ratio. That's it, folks.

UPDATE:

Since the initial testing we have also switched from m7i-flex.large to m7a.large for building our web applications, and my colleagues noticed a 25% reduction in build time.

© 2024 by Igor Tereshchenko. All rights reserved.