AWS – EBS gp3

AWS announced the general availability of EBS gp3, which is 20% cheaper than gp2 and allows you to provision performance (IOPS and throughput) independently of storage capacity.

Baseline performance on a 1 TB gp2 volume is 3,000 IOPS and 250 MB/s of throughput.

Baseline performance on gp3 is 3,000 IOPS and 125 MB/s of throughput, regardless of volume size.

Provisioning additional gp3 performance (IOPS above the 3,000 baseline or throughput above 125 MB/s) comes at an extra fee.

Example:
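
As a rough illustration, using approximate us-east-1 list prices at the time of writing (check the link below for current numbers): a 1 TB gp2 volume costs about 1,024 GB x $0.10 = ~$102/month. The same volume on gp3 costs about 1,024 GB x $0.08 = ~$82/month, and even provisioning the throughput back up to 250 MB/s (125 MB/s extra at ~$0.04 per MB/s-month, about $5/month) keeps it around $87/month.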

Detailed pricing information available at https://aws.amazon.com/ebs/pricing/

So, from now on, instead of using gp2, just go with gp3 and use more disks to reach the same throughput you had on gp2, saving a little (or a lot of) money.

Check the instance type volume limits here so you can better understand how many volumes you need to reach the instance's maximum performance.

We have started moving some volumes from gp2 to gp3 and are seeing some increase in disk latency while the modification runs, so I recommend doing it during your most idle periods.
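
For reference, converting an existing volume in place is a single call; here is a minimal sketch with the AWS CLI (the volume ID is a placeholder, and the IOPS/throughput values are just examples):

# Change the volume type in place and provision throughput back up to 250 MB/s
aws ec2 modify-volume \
    --volume-id vol-0123456789abcdef0 \
    --volume-type gp3 \
    --iops 3000 \
    --throughput 250

# Follow the progress of the modification
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0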

AWS CLI for RDS reports

Another quick blog post on AWS stuff.

You can query your RDS metadata using the AWS CLI.

This is a very useful approach when you manage hundreds of servers and need to build a report.

Here is the command line I've got to retrieve the DB instance identifier, license model, DB engine, and engine version.

aws rds describe-db-instances --region us-east-1 \
    --query "*[].{DBInstanceIdentifier:DBInstanceIdentifier,LicenseModel:LicenseModel,Engine:Engine,EngineVersion:EngineVersion}" \
    --output table
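
If you need the report to cover every region, you can wrap the same call in a quick loop; a minimal sketch (bash, assuming default credentials with access to all regions enabled for the account):

# List the DB instances in every region enabled for the account
for region in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text); do
    echo "== ${region} =="
    aws rds describe-db-instances --region "${region}" \
        --query "*[].{DBInstanceIdentifier:DBInstanceIdentifier,LicenseModel:LicenseModel,Engine:Engine,EngineVersion:EngineVersion}" \
        --output table
done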

You can find other RDS CLI options here.

AWS Nitro – volume ID and device name

Hi all,

Quick blog post about EBS on AWS Nitro instances.

When working with AWS Nitro instances, your EBS volumes are exposed as NVMe block devices, i.e. nvme0n1, nvme1n1, etc., regardless of the device name you specify when provisioning them.

But you can use the nvme tool (from the nvme-cli package) to map the NVMe device name back to the device name you provided:

[ec2-user ~]$ sudo nvme id-ctrl -v /dev/nvme1n1
NVME Identify Controller:
vid : 0x1d0f
ssvid : 0x1d0f
sn : vol01234567890abcdef
mn : Amazon Elastic Block Store
…
0000: 2f 64 65 76 2f 73 64 66 20 20 20 20 20 20 20 20 "/dev/sdf…"

So in this case I used sdf as the device name and ended up with nvme1n1 on my instance.
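
If you want to map every volume on the instance at once, a small loop does the trick; a minimal sketch (bash, assuming the nvme-cli package is installed), relying on the fact that the controller serial number is the EBS volume ID without the hyphen:

# Print each NVMe device alongside its EBS volume ID
for dev in /dev/nvme[0-9]n1; do
    sn=$(sudo nvme id-ctrl "$dev" | awk '/^sn/ {print $3}')
    echo "$dev -> ${sn/vol/vol-}"
done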

I highly recommend reading the docs here and watching this cool video.

Data transfer from AWS Oracle RDS to S3

It looks simple, right? It is, but there is a lot to do before you can actually perform the copy.

Prerequisites:

  • Create an IAM policy that gives RDS read/write/list permissions to the S3 bucket
  • Create an IAM role that gives RDS access to the S3 bucket
  • Associate the IAM role to the DB instance
  • Create a new option group with the S3_INTEGRATION option, or add it to an existing one

You can check the details on how to perform these steps here.
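
As an illustration, the last two steps could look roughly like this with the AWS CLI (the instance name, account ID, role name, and option group name are all placeholders; the IAM policy and role creation are not shown):

# Associate the IAM role with the DB instance for the S3_INTEGRATION feature
aws rds add-role-to-db-instance \
    --db-instance-identifier mydbinstance \
    --role-arn arn:aws:iam::123456789012:role/rds-s3-integration-role \
    --feature-name S3_INTEGRATION

# Add the S3_INTEGRATION option to the option group used by the instance
aws rds add-option-to-option-group \
    --option-group-name my-oracle-options \
    --options OptionName=S3_INTEGRATION \
    --apply-immediately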

OK, now you can perform the data transfer by running, for example, the command below:

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
       p_bucket_name    =>  'mybucket', 
       p_prefix         =>  '', 
       p_s3_prefix      =>  '', 
       p_directory_name =>  'DATA_PUMP_DIR') 
    AS TASK_ID FROM DUAL; 

It will copy all files in the DATA_PUMP_DIR directory to the S3 bucket mybucket.

This command returns a task_id that you can use to monitor the transfer status.

You can rely on AWS RDS Events in the console or on the query below to monitor the transfer job.

 SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<task_id>.log')); 
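
If you prefer the CLI over the console for the RDS events, something along these lines should work (the instance identifier is a placeholder):

# Show the last hour of events for the instance, which include the transfer task notifications
aws rds describe-events \
    --source-identifier mydbinstance \
    --source-type db-instance \
    --duration 60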
