Marcos Ortiz

AWS Graviton Weekly # 19: Week from January 6th, 2023 to January 13th, 2023

Published 3 months ago
7 min read

You're receiving this because you subscribed here OR here

This email may contain affiliate links. I receive a small commission for recommending products I use & love at no extra cost to you.

[Read the browser version right here]

Brought to you in partnership with Notion

Issue # 19: January 6th, 2023 to January 13th, 2023

Hey Reader

Welcome to Issue # 19 of AWS Graviton Weekly, which will be focused on sharing everything that happened in the past week related to AWS Silicon: from January 6th, 2023 to January 13th, 2023.

This week was very light in terms of content related to AWS Graviton, but still, there are some good gems here.

BTW: some of you asked about some tools and applications to help you save big on your cloud bills, not just Graviton, and today, I have the perfect tools for you, developed by Cristian Măgherușan-Stanciu:

  • AutoSpotting Savings Calculator: a simple calculator that estimates how much money you could save by adopting his AutoSpotting tool
  • EBS Optimizer: a tool that automatically converts your gp2, io1, and io2 EBS volumes to the more cost-effective gp3 volume type
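To get a feel for what a gp2-to-gp3 conversion is worth, here is a minimal back-of-the-envelope sketch. The prices are an assumption on my part (us-east-1 list prices of roughly $0.10/GB-month for gp2 and $0.08/GB-month for gp3 at the time of writing) and ignore any provisioned IOPS or throughput charges; check the EBS pricing page for your region before relying on the numbers.

```python
# Rough storage-only savings estimate for converting gp2 volumes to gp3.
# Prices below are assumed us-east-1 list prices and may differ in your region.
GP2_PER_GB_MONTH = 0.10
GP3_PER_GB_MONTH = 0.08

def monthly_savings(size_gib: float) -> float:
    """Storage-only monthly savings (USD) from converting a gp2 volume to gp3."""
    return size_gib * (GP2_PER_GB_MONTH - GP3_PER_GB_MONTH)

if __name__ == "__main__":
    for size in (100, 500, 1000):
        print(f"{size} GiB: ${monthly_savings(size):.2f}/month saved")
```

Multiply by the number of volumes in your fleet and the 20% storage discount adds up quickly, which is exactly the kind of saving the EBS Optimizer automates.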

Time to save big, folks.

Enjoy the content of this week.

Brought to you by Notion

I'm personally using Notion for capturing all ideas/resources and more great things for the AWS Graviton Content Database.

I truly love the power of Notion.

So, I partnered with the team to give you a great offer you can't refuse to start this year with an incredible pack of resources.

Start the year with Notion


Amazon RDS Optimized Reads is now available for up to 2X faster queries on Amazon RDS for MariaDB

Amazon Relational Database Service (Amazon RDS) for MariaDB now supports Optimized Reads for up to 2X faster query processing compared to previous generation instances.
Optimized Read-enabled instances achieve faster query processing by placing temporary tables generated by the MariaDB server on the NVMe SSD-based block-level instance storage that’s physically connected to the host server.
Complex queries that utilize temporary tables, such as queries involving sorts, hash aggregations, high-load joins, and Common Table Expressions (CTEs) can now execute up to 2X faster with Optimized Reads on RDS for MariaDB.
Optimized Reads is available by default on RDS for MariaDB version 10.4.25, 10.5.16, 10.6.7 and higher on Intel-based X2iedn, M5d and R5d instances and AWS Graviton2-based M6gd and R6gd database (DB) instances.
R5d and M5d DB instances provide up to 3,600 GiB of NVMe SSD-based instance storage for low latency, high random I/O and sequential read throughput.
X2iedn, M6gd and R6gd DB instances are built on the AWS Nitro System, and provide up to 3,800 GiB of NVMe-based SSD storage and up to 100 Gbps of network bandwidth.

BlackBerry’s automotive AI platform gets a CES public reveal

BlackBerry’s automotive artificial intelligence platform IVY - co-developed with Amazon Web Services (AWS) - is now pre-integrated on three commercially-available digital cockpit platforms from Bosch and PATEO, designed to enable automakers to rapidly deploy innovative third-party applications to enhance in-vehicle experiences for drivers and passengers alike.
BlackBerry and AWS demonstrated several BlackBerry IVY-powered applications at CES including Bosch's platform in a Jeep Grand Cherokee showing innovative AI-based solutions for predictive maintenance of brake and tyre wear, powered by Compredict, as well as secure in-vehicle payments, powered by CarIQ.
Elsewhere at CES, PATEO's intelligent Digital Cockpit shed light on an EV battery management solution, powered by Electra Vehicles, currently being commercialised in the Chinese market by PATEO; and a virtualised BlackBerry IVY platform solution, powered by AWS Graviton Processors, demonstrated how automakers can rapidly develop ML-based automotive solutions for scene detection and cybersecurity use cases.

Articles and Tutorials

It's the perfect time to try the new T4g instances, by Ben Groeneveld (Specialist Solutions Architect at AWS)

Interested in modernising your workloads and benefiting from significant performance gains and cost savings?
Try Amazon EC2 t4g.small instances powered by AWS Graviton2 processors free for up to 750 hours/month until December 31st, 2023.
Until December 31, 2023, all AWS customers are enrolled automatically in the T4g free trial as detailed in the AWS Free Tier. During the trial period, customers who run a t4g.small instance automatically get 750 free hours per month deducted from their bill.
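The 750-hour allowance is generous: a sketch of the arithmetic (my own illustration, not from the article) shows that even the longest month has only 31 × 24 = 744 hours, so one always-on t4g.small stays entirely inside the free tier.

```python
# Illustration of the T4g free-trial math: 750 free t4g.small hours per month.
FREE_HOURS_PER_MONTH = 750.0

def billed_hours(hours_run: float, free_hours: float = FREE_HOURS_PER_MONTH) -> float:
    """t4g.small hours actually billed in a month under the free trial."""
    return max(0.0, hours_run - free_hours)

# One instance running 24/7 in a 31-day month: 31 * 24 = 744 hours, all free.
always_on = 31 * 24
```

Running two such instances around the clock, on the other hand, would leave roughly 738 hours billed, so the trial effectively covers one continuously running instance.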
What is a T4g Instance?
Amazon EC2 T4g instances are powered by Arm-based AWS Graviton2 processors.
T4g instances are the next generation low cost burstable general purpose instance type that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required.
They deliver up to 40% better price performance over T3 instances and are ideal for running applications with moderate CPU usage that experience temporary spikes in usage.
T4g instances offer a balance of compute, memory, and network resources for a broad spectrum of general-purpose workloads, including large-scale microservices, small and medium databases, virtual desktops, and business-critical applications.
Developers can also use these instances to run code repositories and build Arm-based applications natively in the cloud, eliminating the need for cross-compilation and emulation, and improving time to market.

If you want to learn more about T4g instances, read the documentation here and the FAQs here.

Peter DeSantis Keynote recap – AWS re:Invent 2022, by Scott Erdmann (Cloud Architect at Caylent)

As if re:Invent wasn’t already exciting enough, during the first night of re:Invent 2022, Peter DeSantis (SVP of AWS Utility Computing) gave a great talk on the future of cloud computing while also revealing exciting progress on both the hardware and software fronts.
DeSantis took us on a deep dive into what’s going on under the hood with the technologies that power a lot of the AWS services we love and use the most.
He touched on the importance of balance and the difficulties of increasing performance without sacrificing cost or security. With many technical challenges to overcome, AWS was able to supercharge its hardware and software to power the next generation of technology.

Accelerate your data exploration and experimentation with the AWS Analytics Reference Architecture library, by Lotfi Mouhib (Solutions Architect at Amazon Web Services) and Sandipan Bhaumik (Senior Specialist Solutions Architect (EMEA) - Analytics)

In this post, we show how a data engineer or IT administrator can use the AWS Analytics Reference Architecture (ARA) to accelerate infrastructure deployment, saving your organization both time and money spent on these data analytics experiments. We use the library to deploy an Amazon Elastic Kubernetes Service (Amazon EKS) cluster, configure it to use Amazon EMR on EKS, and deploy a virtual cluster, managed endpoints, and EMR Studio. You can then either run jobs on the virtual cluster or run exploratory data analysis with Jupyter notebooks on Amazon EMR Studio and Amazon EMR on EKS.
The architecture diagram in the post represents the infrastructure you will deploy with the AWS Analytics Reference Architecture.

Measure, measure, measure, by Christian Prokopp (Founder at …)

Measure, measure, measure. I do, and I still get surprised like today. Read to the end for the final number, I was astonished.

🛠️ AWS ARM instances are supposed to perform similarly to their x86 equivalents. Often that's true, and I have verified that with a fleet of machines which process 'X'. However, I recently switched a similar process 'Y' from x86 to ARM, expecting the same.
To my surprise, in this case the difference is ~70%, i.e. ARM delivered 170% of the throughput of the x86 equivalents (see image).
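The "measure, measure, measure" advice can start as simply as timing the same workload on both instance types. Below is a minimal throughput harness of my own (not from Christian's post); the workload function is a hypothetical stand-in for whatever process 'X' or 'Y' you actually run.

```python
import time

def throughput(work_fn, n_items: int) -> float:
    """Items processed per second when running work_fn n_items times."""
    start = time.perf_counter()
    for _ in range(n_items):
        work_fn()
    elapsed = time.perf_counter() - start
    return n_items / elapsed

def relative_throughput(arm_tput: float, x86_tput: float) -> float:
    """ARM throughput as a fraction of x86; 1.7 matches the ~70% gap above."""
    return arm_tput / x86_tput
```

Run the same harness on a Graviton instance and its x86 sibling (e.g. m6g vs m5), and let `relative_throughput` tell you which way your workload actually swings.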

Slides, Videos and Audio

[VIDEO] [DevOps Days Tel Aviv] Getting ready for the multi-architectural universe by Michael Fischer (Principal Specialist, AWS EC2, Graviton, and Containers)

We’re at the cusp of a CPU revolution, where new processors based on architectures such as ARM have evolved beyond mobile. Today, they are powering mainstream laptops and desktops, and they’re even ready for running production workloads in the Cloud.
Developers are increasingly excited about them because these new CPUs are faster, use less energy, and are cheaper to operate.
Are you ready to take advantage of this? If not, you’re in the right place!
In this talk, we’re going to discuss how you can transform your single-architecture DevOps pipelines into flexible multi-architecture pipelines, all while preserving your sanity. We will talk about parallel builds, cross-compilation, multi-architecture images, and more. By the end of this talk, you will emerge with the confidence to build and deploy to your new, less expensive, greener infrastructure.
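One small building block of the multi-architecture pipelines the talk describes is mapping the machine name a host reports to the platform name used by container images. A sketch (my own, not from the talk; the mapping table covers only the common cases):

```python
import platform

# Map `uname -m` style machine names to OCI/Docker architecture names.
MACHINE_TO_OCI = {
    "x86_64": "amd64",
    "amd64": "amd64",
    "aarch64": "arm64",
    "arm64": "arm64",
}

def oci_arch(machine: str = "") -> str:
    """OCI architecture name for a machine string (defaults to this host),
    e.g. for selecting the right image tag in a build pipeline."""
    machine = machine or platform.machine()
    arch = MACHINE_TO_OCI.get(machine.lower())
    if arch is None:
        raise ValueError(f"unsupported architecture: {machine}")
    return arch
```

A pipeline step can use this to tag per-architecture images (e.g. `myapp:1.0-arm64`) before assembling them into a single multi-architecture manifest.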

[VIDEO] BookMyShow: How BMS Scaled Data Analytics to Handle 28M Ticket Sales with Modern Data Architecture, by Nishant Rathod (Principal Data Architect at BMS) and Priya Jathar (Solution Architect at AWS)

BookMyShow is India's leading entertainment destination with global operations and the one-stop shop for every entertainment need.
BMS's modern data architecture was built in 4 months using Amazon EMR, Amazon Redshift, Amazon Simple Storage Service, AWS Glue, Amazon QuickSight, and Amazon SageMaker.
They scaled the solution during a blockbuster release and the sale of 28 million tickets in April 2022, and cut costs by 70% compared to the prior analytics solution by using Amazon EMR transient clusters, serverless services (Amazon S3, AWS Glue, AWS Lambda, AWS Step Functions), Spot Instances, Graviton instance types, and instance right-sizing.

[VIDEO] Season 2 Episode 1 - O11Y, ECP & TLA’s, with Karl Robinson (CEO & co-founder at Logicata) and Jonathan Goodall (Lead Cloud Engineer at Logicata)

In the first episode of season 2, Karl & Jon discuss S3's new default encryption, CI/CD with GitLab and EKS Graviton runners, CloudWatch, serverless observability, go off the deep end ranting about what DevOps actually is, and talk about shredding Vespas.


If you are actively looking for a new role, you should join our Talent Collective here. It's completely free to the candidates.

And if you are a company looking for new members for your team, you can get access to our Talent Collective here.

There are 18 active candidates ready for interviews.


No events this week

Quote of the week

The new year is always a good time to reflect back and to think about the future…
Scaling chip design is a constant challenge for the semiconductor industry, and a common customer question we get at Annapurna Labs.
Looking at almost a decade of increasing complexity while decreasing feature size, we've gone from 900 million transistors in 2016 on a single product (AWS Nitro), to more than 100 billion today across multiple products (AWS Nitro, Graviton, Inferentia and Trainium) being developed simultaneously.
Had we let our EDA jobs grow longer as the designs became more complex, we would have slowed down innovation. Instead, we relied on the Amazon Web Services (AWS) cloud to give us the elasticity we needed, so we could grow our throughput without growing our engineering team at the same rate.

Adi Habusha, Senior Principal Engineer, AWS Graviton Processors Chief Architect at AWS

Source: LinkedIn