Amazon Web Services Now ‘Understands’ Blockchain Potential and Furthers Its Commitment to Machine Learning With an Inference Chip, While the UK Ministry of Justice Shares Best Practices from a Major Cloud Migration

This year’s AWS re:Invent conference took place over the past week in Las Vegas, finishing up on Friday the 30th of November. Version 1 representatives were in attendance, reporting back from AWS re:Invent to bring you the most important news and updates.

Here is everything you need to know about AWS re:Invent for 2018:

1. The Collaborative Effort in a Critical Migration Programme by Version 1, AWS and the Legal Aid Agency, Discussed by Phil Eedes, Technical Architect with the UK Ministry of Justice

Drawing on an exciting and critically important large-scale Cloud migration programme for the Legal Aid Agency, executed in partnership with AWS and Version 1, Phil Eedes, Technical Architect with the UK Ministry of Justice, shared best practices for running Oracle databases on Amazon RDS at re:Invent.

In the talk Best Practices for Running Oracle Databases on Amazon RDS (Phil Eedes begins speaking at the 44-minute mark), he stated:

“The Legal Aid Agency operates a £1.7-billion-pound fund that provides legal aid to the people that most need it. The applications and data transactions really matter to people’s lives at a time when they’re quite vulnerable. So, what did we do? We had help from AWS and Version 1 to help us successfully migrate a large proportion of the Legal Aid Agency’s core business applications into AWS.”

Watch both Michael Barras of Amazon Web Services and Phil Eedes of the UK Ministry of Justice deliver the talk ‘Best Practices for Running Oracle Databases on Amazon RDS’ here:

2. Amazon Web Services “Understand the Customer Need” for Blockchain

Amazon Web Services made significant Blockchain announcements at re:Invent this week, unveiling two new services: Amazon Quantum Ledger Database (QLDB) and Amazon Managed Blockchain. AWS stated that “Amazon Managed Blockchain is a fully managed service that makes it easy to create and manage scalable Blockchain networks using popular open source frameworks Hyperledger Fabric & Ethereum”, and described Amazon Quantum Ledger Database (QLDB) as a “purpose-built ledger database that provides a complete and verifiable history of application data changes.”

This announcement signifies a change in mindset for the organisation, as some would consider AWS quite ‘late to the game’ with Blockchain uptake and advocacy.

During the event, AWS CEO Andy Jassy addressed this perceived delay, touching on why the launch of Blockchain services didn’t come any sooner for Amazon Web Services. He explained that AWS genuinely didn’t understand at first what the ‘real customer need’ was for Blockchain, stating that “the culture inside AWS is that we don’t build things for optics.”

If this new understanding and commitment of resources is now taking place within AWS, it will be interesting for Version 1 and our customers to see how AWS will work to place Blockchain tools and services in the hands of developers and customers at scale in the future.

3. Machine Learning to Remain a Key Focus for AWS

There is a core mission within AWS to “put machine learning in the hands of every developer.” This ongoing commitment to Machine Learning is indicated by the company’s continued efforts to build on the success of SageMaker, which AWS states is used by over 10,000 customers despite being only a year old.

The AWS Inferentia inference chip (as mentioned in our product and features roundup below) signified another step in the direction of Machine Learning commitment for Amazon Web Services.

At the conference, Andy Jassy reported that “more machine learning happens on AWS than anywhere else.” Interestingly, Swami Sivasubramanian, VP of AI & Machine Learning at Amazon Web Services, noted that AWS customer Intuit “used SageMaker to accelerate the process of developing machine learning models, for personalisation, for fraud detection, and they reported that what used to take them 6 months to build a machine learning model from end-to-end has been cut down to less than one week resulting in a huge productivity gain.” Sivasubramanian added that the Machine Learning practice at Amazon Web Services has launched 200 product features and services since this time last year.

Version 1 is heavily invested in projects, training and innovation development involving Machine Learning, AI and the potential they hold for our customers’ businesses. We look forward to seeing further commitment and investment by AWS in the coming year, as the past year’s commitment to Machine Learning has created great potential and opportunity for our customers.

4. Products, Services, Features and Functionality: re:Invent Roundup

Last but not least, here is our Products, Services, Features and Functionality: re:Invent Roundup. Across five days in Las Vegas, there were many significant announcements and developments for our consultants to take on board for our customers’ businesses.

Below is a brief roundup of some of the most significant news we will be sharing and exploring with our customers:


Andy Jassy invited Pat Gelsinger of VMware onstage on Wednesday to launch AWS Outposts – the new on-premises hardware Amazon Web Services has developed alongside VMware. Jassy referenced how Amazon Web Services “tried to reimagine what customers really wanted when running in hybrid mode and developed AWS Outposts.” AWS Outposts infrastructure is fully managed, maintained, and supported by AWS to deliver access to the latest AWS services. This announcement will certainly be food for thought for Version 1 customers seeking to run efficiently in hybrid mode.


AWS announced that it has designed its own processors for inference in machine learning applications. Andy Jassy announced AWS Inferentia during his Wednesday morning keynote.

AWS Inferentia is a machine learning inference chip designed to deliver high performance at low cost. AWS Inferentia will support the TensorFlow, Apache MXNet, and PyTorch deep learning frameworks, as well as models that use the ONNX format.

Jassy confirmed that it will be available on all EC2 instance types as well as in SageMaker, and that it will be compatible with Elastic Inference.


There was a heavy focus on serverless computing in Thursday’s keynote. AWS launched Lambda in 2014, helping to popularise serverless computing. AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running.
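To illustrate the model, here is a minimal sketch of a Lambda function in Python. The handler signature and response shape follow the standard Lambda conventions; the event fields (`name`) are purely illustrative.

```python
import json


def handler(event, context):
    # AWS invokes this entry point per request; you pay only for the
    # execution time consumed, with no charge while the code is idle.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }
```

In a real deployment this function would be zipped and uploaded (or deployed via SAM/CloudFormation) and wired to an event source such as S3, SQS or API Gateway.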

Interesting news from the keynote was that Lambda now supports custom runtimes via the Lambda Runtime API. So far, AWS provides C++ and Rust runtimes, while others have contributed Erlang, Elixir, COBOL and PHP. Lambda also now natively supports a Ruby runtime.

Lambda Layers has been added to support sharing code or data between Lambda functions, so shared components no longer need to be packaged alongside each Lambda function individually – something that our Infrastructure Development Consultants are certainly pleased to see.
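As a hedged sketch of how this works: Python code packaged in a layer under a `python/` directory is extracted to `/opt/python` on the function’s import path, so attached functions can simply import it. The layer and module names below are hypothetical, and the shared helper is inlined here so the sketch runs standalone.

```python
# Hypothetical layout of a shared Lambda Layer (names are illustrative):
#
#   my-shared-layer.zip
#   └── python/
#       └── shared_utils.py    # extracted to /opt/python/ at runtime
#
# Any function with the layer attached could then `import shared_utils`
# instead of bundling its own copy of the helper.
import json


def format_response(status, body):
    """Helper that would normally live in shared_utils.py inside the layer."""
    return {"statusCode": status, "body": json.dumps(body)}


def handler(event, context):
    # The function reuses the layer's helper rather than packaging it itself.
    return format_response(200, {"message": "hello from a layered function"})
```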


Also relevant in Lambda news, AWS Firecracker was announced at re:Invent. Firecracker is a new virtualisation technology that makes use of KVM. You can launch lightweight micro-virtual machines (microVMs) in non-virtualised environments in a fraction of a second, taking advantage of the security and workload isolation provided by traditional VMs and the resource efficiency that comes along with containers. Our consultants will be exploring the potential of Firecracker for our enterprise customers and reporting more on this.


The AWS Serverless Application Model (AWS SAM) now supports defining and deploying nested applications from the AWS Serverless Application Repository.


API Gateway now supports WebSockets, which will make serverless a more attractive option for developing single-page applications, live content updates and more.
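As a rough sketch of the programming model: API Gateway WebSocket APIs invoke a Lambda function with the route (`$connect`, `$disconnect`, or a custom/default route) in `requestContext.routeKey`. The handler below only routes and echoes; server-initiated pushes would use the API Gateway Management API, which is omitted here to keep the sketch self-contained.

```python
import json


def websocket_handler(event, context):
    # API Gateway WebSocket APIs pass the matched route in requestContext.
    route = event["requestContext"]["routeKey"]

    if route == "$connect":
        # Returning 200 accepts the client's connection.
        return {"statusCode": 200}
    if route == "$disconnect":
        return {"statusCode": 200}

    # $default (or a custom route): echo the received frame back. In a real
    # deployment, pushing messages to clients would go through the
    # API Gateway Management API using the connection ID.
    return {"statusCode": 200, "body": json.dumps({"echo": event.get("body")})}
```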

Lambda functions can now be placed behind Application Load Balancers to serve HTTP/HTTPS traffic. This means we are no longer limited to invoking functions via API Gateway to expose them to web traffic.

We believe this feature will open up a lot of opportunities for even more use of serverless, and our consultants will be keen to use this to its full potential.
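To sketch what an ALB-fronted function looks like: the load balancer serialises the HTTP request into the event and expects a response dictionary with `statusCode`, `headers` and `body`. The event fields used here follow the ALB target-group conventions; the paths are illustrative.

```python
def alb_handler(event, context):
    # An Application Load Balancer passes the HTTP request in `event`
    # (path, httpMethod, headers, queryStringParameters, body) and
    # expects this response shape back.
    path = event.get("path", "/")
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "text/plain"},
        "body": f"Served {path} without API Gateway",
    }
```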


The new Amazon EC2 A1 instances could deliver significant cost savings for scale-out and Arm-based applications such as web servers, containerised microservices, caching fleets, and distributed data stores that are supported by the extensive Arm ecosystem. A1 instances are the first EC2 instances powered by AWS Graviton Processors, which feature 64-bit Arm Neoverse cores and custom silicon designed by AWS. This is relevant for some of our customers who run their workloads on Arm: if the instances deliver the performance they require, it could mean significant cost reductions.


As always, there is a vast array of news and announcements unfolding from AWS re:Invent, and Version 1 will continue to explore these options, services and features with our customers as they are brought to market. Until next year, stay tuned to our blog for more AWS updates and significant news from Version 1. If you missed our posts earlier this week from re:Invent, read more here:

About Version 1 – AWS Premier Partner

Experts in migrating and running complex enterprise applications in public cloud.

Version 1 is a leader in Enterprise Cloud services and was one of the first Amazon Web Services (AWS) Consulting Partners in Europe. We have a policy of continuous investment in technology solutions that benefit our customers. Version 1 is among the small number of Amazon Web Services Partners to have achieved advanced partner status and competency in Oracle solutions. We are committed to skills accreditation and training for our consultants and have been recognised with the 50+ certified badge. This assures customers of optimal best-practice cloud implementations, and we are recommended by AWS to execute AWS Well-Architected Reviews on their behalf.