Tam Kyle, Principal Licensing Consultant, SAM & License Management, Version 1

In the following blog post, Version 1 Software Asset Management Expert Tam Kyle provides sample AWS CPU optimisation scenarios from a cost perspective following Oracle’s recent Cloud licensing update.

It’s not a surprise to most Oracle users that the company is encouraging customers to move towards ‘cloudifying’ wherever possible across their entire IT estate. Indeed, most of the major Cloud players are doing the same, with Amazon and Microsoft Azure leading the charge in terms of the features and functions available.

As part of that encouragement, the recently altered cloud policy document effectively doubles the license cost of an Oracle installation on AWS / Azure compared to an equivalent on-site or Oracle Cloud setup, specifically because of this section:

For the purposes of licensing Oracle programs in an Authorized Cloud Environment, customers are required to count as follows:

• Amazon EC2 and RDS – count two vCPUs as equivalent to one Oracle Processor license if hyper-threading is enabled, and one vCPU as equivalent to one Oracle Processor license if hyper-threading is not enabled.

• Microsoft Azure – count two vCPUs as equivalent to one Oracle Processor license if hyperthreading is enabled, and one vCPU as equivalent to one Oracle Processor license if hyperthreading is not enabled.

When counting Oracle Processor license requirements in Authorized Cloud Environments, the Oracle Processor Core Factor Table is not applicable.
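That counting rule reduces to simple arithmetic. A minimal Python sketch (the function name is ours, not from Oracle or AWS):

```python
import math

def oracle_processor_licenses(vcpus: int, hyperthreading: bool) -> int:
    """Apply Oracle's Authorized Cloud Environment counting rules:
    two vCPUs per Processor license with hyper-threading enabled,
    one vCPU per license without. The core factor table does not apply."""
    if hyperthreading:
        return math.ceil(vcpus / 2)
    return vcpus

print(oracle_processor_licenses(48, True))   # 24 Processor licenses
print(oracle_processor_licenses(12, False))  # 12 Processor licenses
```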

In a sense there’s a bit of logic to this – in a 2-thread (vCPU) scenario, it’s possible to have each thread running on a separate physical core, and therefore it’s correct that both cores are included in license calculations in some fashion – and this is in line with other policy statements on so-called ‘hard partitioning’ or sub-capacity licensing – for example the ‘whole-core’ position for Solaris LDoms.

Last month Amazon sought to lessen this effect by introducing the ability to alter the vCPU setting on a group of EC2 instance shapes. At instance creation you can now specify a different number of cores and / or hyperthread count from the default associated with that instance type. So if you have a memory-intensive, CPU-light application you can now arguably have a customised EC2 instance that fits both needs – for example:

An m5.12xlarge instance is by default 24 cores, with hyperthreading enabled – provisioning 48 vCPU.

You can now change the vCPU setting for that instance to 2-48 vCPUs and have hyperthreading on or off (values above 24 – i.e. 26-48 vCPU – are only available with hyperthreading on) – so the following command, run through the AWS command line interface (CLI):

aws ec2 run-instances --image-id ami-ca0135b3 --instance-type m5.12xlarge --cpu-options "CoreCount=6,ThreadsPerCore=2" --key-name MyKeyPair

will give you an m5.12xlarge shape but actually running 12 vCPU (6 cores, hyperthreaded), whilst

aws ec2 run-instances --image-id ami-ca0135b3 --instance-type m5.12xlarge --cpu-options "CoreCount=12,ThreadsPerCore=1" --key-name MyKeyPair

will give you the same processor effect – 12 vCPU, but this time across 12 non-hyperthreaded cores – which may benefit your application at run time over the first example due to the greater physical processing power.

Running the describe-instances CLI command will show you the new options:

…initial output…
"CpuOptions": {
    "CoreCount": 6,
    "ThreadsPerCore": 2
}
…later output…

…initial output…
"CpuOptions": {
    "CoreCount": 12,
    "ThreadsPerCore": 1
}
…later output…

These are for the hyperthreaded (ThreadsPerCore: 2) and non-hyperthreaded (ThreadsPerCore: 1) variants.

You can of course get similar output by connecting to your instance and running the lscpu command or similar.
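For audit evidence, the JSON that describe-instances returns can be parsed programmatically. A minimal Python sketch (the fragment below mirrors the CpuOptions structure shown above; values are illustrative):

```python
import json

# Fragment of a single instance object as returned by
# `aws ec2 describe-instances` (values illustrative)
instance_json = '''
{
  "InstanceType": "m5.12xlarge",
  "CpuOptions": {"CoreCount": 6, "ThreadsPerCore": 2}
}
'''

instance = json.loads(instance_json)
opts = instance["CpuOptions"]

# vCPU count is cores multiplied by threads per core
vcpus = opts["CoreCount"] * opts["ThreadsPerCore"]
hyperthreaded = opts["ThreadsPerCore"] > 1

print(instance["InstanceType"], vcpus, hyperthreaded)  # m5.12xlarge 12 True
```

The same derivation (CoreCount × ThreadsPerCore, hyperthreading flagged by ThreadsPerCore > 1) underpins the license scenarios discussed later.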

Note the following:

• The cost of the instance DOESN’T alter if you reduce the CPU usage through this new functionality – you’ll still pay for an m5.12xlarge instance in AWS terms – it’s just that your true CPU consumption (and therefore arguably your Oracle license / support bill) will go down.

• You can currently only specify the options through the AWS CLI, an AWS SDK or the AWS EC2 API.

• You currently can’t see the current allocation from the AWS console – you need to query the CLI or the instance itself (using something like lscpu).

• You can only do this at instance launch time – you can’t modify after launch – you’ll need to terminate the instance and start again if you want to make use of this function (or create a new customised instance and move your data).

• You don’t get more than the instance default – so you couldn’t specify corecount=32 in the above m5.12xlarge example for instance.

• Changing the instance type (e.g. from an m5 to an m4) after customising the CPU options will reset them – i.e. you’ll go back to the default CPU options for the new instance type.

• The customised CPU options persist across instance stop, start and reboot.

Amazon have now extended the above capability by introducing similar functionality for RDS – the database Platform as a Service offering – but, so far, only for Oracle DB.

Capability and commands are similar to the EC2 scenarios but there are a couple of differences:

• The cost of the instance again DOESN’T alter if you reduce the CPU usage through this new functionality – you’ll still pay for, say, a db.m4.10xlarge instance in AWS terms – it’s just that your true CPU consumption (and therefore arguably your Oracle bill) will go down. (This applies to BYOL licensing of course. You aren’t going to achieve anything with the license-included option – though conceivably that might be a future enhancement.)

• You CAN modify the CPU settings after instance launch or during a restoration.

• Again, you can’t achieve more CPU than the instance default.

• You can see the current CPU customised allocation on the AWS console (though as yet we don’t seem to have that capability in the UK), or by using the relevant CLI commands.

AWS CPU Optimisation: On the license cost front, what might this look like? Consider the following four scenarios:

1. An on-site setup – 12 ‘threads’ provided by 6 hyperthreaded Intel x86 cores – e.g. a 1-socket HP DL390 Gen 9 running an E5-2643 v3 chip.

2. AWS EC2 default instance – m5.12xlarge – 24 cores, two threads per core – 48 vCPU.

3. AWS EC2 12 ‘threads’ provided by our first attempt at CPU optimising this instance – corecount=6, threadspercore=2.

4. AWS EC2 12 ‘cores’ provided by our second attempt at CPU optimising this instance – corecount=12, threadspercore=1.

Scenario 1 – assuming ‘normal’ licensing (i.e. ignoring nuances) – 6 cores * 0.5 Oracle Processor Core factor for Intel chips = 3 processor licenses.

Scenario 2 – using the Oracle policy gives you 1 processor license per 2 vCPU – so 48 / 2 = 24 Processor licenses.

Scenario 3 – vCPU is calculated by taking corecount multiplied by threadspercore – so that’s arguably 6*2 = 12 vCPU, and at 2 vCPU per license that’s 6 Processor licenses.

Scenario 4 – vCPU = 12 again, BUT – this is now classed by Amazon as a non-hyperthreaded instance – we’ve switched it off by applying threadspercore=1. So applying the policy this time – which says 1 vCPU = 1 processor license on non-hyperthreaded instances – that means 12 Processor licenses.

That’s a significant spread of license requirement: a minimum of 3 Processor licenses, a maximum of 24.
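The four calculations above can be reproduced in a few lines of Python (the scenario labels and helper function are ours; the figures come straight from the worked example):

```python
import math

def licenses(kind, count, extra):
    """Oracle Processor licenses for a scenario.
    kind="core_factor": count = cores, extra = Processor Core factor.
    kind="cloud":       count = vCPU,  extra = hyperthreading enabled."""
    if kind == "core_factor":
        return math.ceil(count * extra)           # on-site: cores * factor
    return math.ceil(count / 2) if extra else count  # cloud vCPU policy

scenarios = {
    "1. on-site, 6 Intel cores, factor 0.5":   ("core_factor", 6, 0.5),
    "2. m5.12xlarge default, 48 vCPU, HT on":  ("cloud", 48, True),
    "3. optimised, 6 cores x 2 threads":       ("cloud", 12, True),
    "4. optimised, 12 cores x 1 thread":       ("cloud", 12, False),
}

for name, args in scenarios.items():
    print(name, "->", licenses(*args), "Processor licenses")
# Scenario 1 -> 3, scenario 2 -> 24, scenario 3 -> 6, scenario 4 -> 12
```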

You would more obviously use scenario 3 here – to retain something akin to the on-site setup – and it’s obvious that something isn’t quite right in scenario 4: it sort of looks the same, but isn’t really – you’re really running across 12 cores, not 6.

However, based on the available Amazon documentation, it’s 12 vCPU with no hyperthreading – which needs twice as many licenses as scenario 3, which also has 12 vCPU. What’s happened here is that we’ve taken what would normally be classed by default as a ‘hyperthreadable’ instance and made hyperthreading a choice – with potentially significant license implications.

At this point there’s no indication of any alteration to the policy by Oracle – though we may see some strengthening of the policy text to require that the calculations be done on the default (rather than the CPU-customised) instance attributes. For now, it’s based on the vCPU allocation.

For now, this document considers Database Enterprise Edition and products with typical Processor definitions and therefore shouldn’t be applied to Standard Edition products.

The take-away?

Amazon have given us yet another set of flexibility options, which in part helps mitigate Oracle’s stance on licensing in non-Oracle clouds. These might provide significant potential to lower the license costs of running Oracle on AWS EC2 and RDS.

However, you need to ensure that what you assume is the case (a license cost calculation based on the instance shape) is in fact the reality (has it been CPU customised?) – or that you’ve correctly altered the CPU options to your advantage. And finally, that you can extract the relevant information to back that up – from the console, the CLI or the instance itself – in the event of an audit or as part of your BAU SAM processes.