Import Windows Server to Amazon EC2 with PowerShell
February 2017

This paper has been archived. For the latest technical content about this subject, see the AWS Whitepapers & Guides page: http://aws.amazon.com/whitepapers

© 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Introduction: Amazon EC2; Amazon EC2 Dedicated Instances; Amazon EC2 Dedicated Hosts; AWS Server Migration Service; VM Import/Export; AWS Tools for Windows PowerShell; AWS Config; Licensing Considerations
Preparing for the Walkthroughs: Overview; Prerequisites
Walkthrough: Import Your Custom Image
Walkthrough: Launch a Dedicated Instance
Walkthrough: Configure Microsoft KMS for BYOL
Walkthrough: Allocate a Dedicated Host and Launch an Instance
Conclusion
Contributors
Further Reading

Abstract
This whitepaper is for Microsoft Windows IT professionals who want to learn how to use Amazon Web Services (AWS) VM Import/Export to import custom Windows Server images into Amazon Elastic Compute Cloud (Amazon EC2). PowerShell code is provided to demonstrate one way you could automate the task of importing images and launching instances, but there are many other DevOps automation techniques that could come into play in a well thought-out cloud migration process.

Introduction

Amazon EC2
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. Amazon EC2 reduces the time required to obtain and boot new server instances. It changes the economics of computing by allowing you to pay only for capacity that you actually use. You have full administrator access to each EC2 instance, and you can interact with your instances just as you do with your on-premises servers. You can stop your instance and retain the data on your boot partition, then restart the same instance using PowerShell or a browser interface.

Amazon EC2 Dedicated Instances
Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer. Your Dedicated Instances are physically isolated at the host hardware level from instances that belong to other AWS accounts. However, Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances. Dedicated Instances allow you to bring your own licenses for Windows Server. For more information, see http://aws.amazon.com/dedicated-instances.
Amazon EC2 Dedicated Hosts
An Amazon EC2 Dedicated Host is a physical server with Amazon EC2 instance capacity fully dedicated to your use. Dedicated Hosts can help you address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses. Dedicated Hosts allow you to allocate a physical server and then launch one or more Amazon EC2 instances of a given type on it. You can target and reuse specific physical servers and stay within the terms of your existing software licenses.

In addition to allowing you to Bring Your Own License (BYOL) to the cloud to reduce costs, Amazon EC2 Dedicated Hosts can help you meet stringent compliance and regulatory requirements, some of which require control and visibility over instance placement at the physical host level. In these environments, detailed auditing of changes is also crucial. You can use the AWS Config service to record all changes to your Dedicated Hosts and instances. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-virtual-machine (VM) software licenses, including Microsoft Windows Server and Microsoft SQL Server. Learn more at https://aws.amazon.com/ec2/dedicated-hosts/.

AWS Server Migration Service
AWS Server Migration Service (AWS SMS) is an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations. Each replicated server volume is saved as a new Amazon Machine Image (AMI), which can be launched as an EC2 instance in the AWS Cloud. AWS SMS currently supports VMware virtual machines; support for other physical servers and hypervisors is coming soon. AWS SMS supports migrating Windows Server 2003, 2008, 2012, and 2016, and Windows 7, 8, and 10.

VM Import/Export
VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment. This allows you to use the virtual machines that you have built to meet your IT security, configuration management, and compliance requirements by bringing them into Amazon EC2 as ready-to-use instances. VM Import/Export is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon Simple Storage Service (Amazon S3). You can use PowerShell to import a Hyper-V or VMware image. VM Import converts your virtual machine (VM) into an Amazon EC2 AMI, which you can use to run Amazon EC2 instances.

AWS Tools for Windows PowerShell
The AWS Tools for Windows PowerShell are a set of PowerShell cmdlets that are built on top of the functionality exposed by the AWS SDK for .NET. The AWS Tools for Windows PowerShell enable you to script operations on your AWS resources from the PowerShell command line. Although the cmdlets are implemented using the service clients and methods from the SDK, the cmdlets provide an idiomatic PowerShell experience for specifying parameters and handling results. For example, the cmdlets support PowerShell pipelining; that is, you can pipeline PowerShell objects both into and out of the cmdlets. Learn more at https://aws.amazon.com/documentation/powershell/.
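As a quick illustrative sketch of that pipelining model (an addition, not part of the original walkthroughs), the following commands configure a default profile and Region, then filter the results of Get-EC2Instance with standard pipeline cmdlets. The profile name and Region are placeholders; if your workstation is an EC2 instance with an IAM role, as recommended later, the first command is unnecessary.

# Placeholder profile and Region; skip this line when an IAM instance role is in use.
Initialize-AWSDefaultConfiguration -ProfileName "my-profile" -Region "us-west-2"

# Get-EC2Instance returns reservation objects; expand them and pipe the instance
# objects through Where-Object/Select-Object like any other PowerShell objects.
(Get-EC2Instance).Instances |
    Where-Object { $_.State.Name -eq "running" } |
    Select-Object InstanceId, InstanceType, PrivateIpAddress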
AWS Config
AWS Config is a fully managed service that provides you with an inventory of your AWS resources, as well as configuration history and configuration change notifications, to enable security and governance. Config Rules enable you to automatically check the configuration of your AWS resources. You can discover existing and deleted AWS resources, determine your overall compliance against rules, and dive into the configuration details of a resource at any point in time. These capabilities enable compliance auditing, security analysis, resource change tracking, and troubleshooting. This helps you manage your Windows Server licenses on Dedicated Hosts as required by Microsoft.

Licensing Considerations
Organizations that own Microsoft software licenses and Software Assurance have the option of bringing their own licenses (BYOL) to the cloud under the terms of Microsoft's License Mobility program (included with Software Assurance). In many cases, software license costs can dominate the cost of the compute, storage, and networking infrastructure in the cloud, so BYOL can be very beneficial. However, you must evaluate BYOL carefully.

For Windows Server and SQL Server, AWS also offers License Included (LI) as an option. It's called License Included because the software is preinstalled in the AMI and the complete software licenses are included when you launch an Amazon EC2 instance with those AMIs, even Client Access Licenses (CALs). You pay as you go for the Windows Server and SQL Server licenses, either hourly while the instance is running or with a 1- or 3-year Reserved Instance. Reserved Instances offer substantial discounts.

The LI model is convenient and flexible, but if you move a licensed on-premises workload to the cloud with LI instances, then you would essentially be paying for dual software licenses. Even though that sounds expensive, it still might make sense in some cases, particularly if you plan to consolidate some of your workloads, replatform some application servers, or discontinue purchasing Software Assurance. So you need to consider your options, including BYOL, carefully. However, don't assume that BYOL is always more economical.

It's advisable to create a simple spreadsheet to make a balanced comparison of BYOL vs. LI, as sketched at the end of this section. With BYOL, if you haven't bought the licenses yet, you need to know your Microsoft reseller bulk license discount. You also need to include the cost of Software Assurance (even if it's already a sunk cost, consider whether you plan to renew it) and the cost of EC2 Dedicated Hosts and Instances. Don't forget to include the correct number of licenses for all the cores on the instances you plan to use for Windows Server and SQL Server. With LI, you need to consider whether you are purchasing Reserved Instances, which offer substantial discounts.

Tip: When using the AWS Simple Monthly Calculator to determine instance costs without licenses, select Amazon Linux even though you'll be importing your own Windows Server image. This avoids the license cost that the calculator automatically assumes for Windows Server.

Also, there are considerable advantages with LI:
• The licenses are fully managed by AWS, so you don't need to worry about auditing.
• You can forego the cost of Software Assurance for those licenses.
• You don't need to buy CALs. Each LI for Windows Server includes two Remote Desktop CALs.
• LI reduces your costs if you decide to consolidate workloads later.
• LI reduces your costs when you stop the instances.
• LI reduces your costs if you don't need the full capacity of a Dedicated Host.
• You retain the freedom to replatform your workload.
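To make the spreadsheet idea concrete, here is a minimal PowerShell sketch of that comparison. Every figure below is a placeholder assumption (license price, Software Assurance, Dedicated Host rate, LI hourly rate), not AWS or Microsoft pricing; substitute numbers from your reseller agreement and the AWS pricing pages.

# All figures are hypothetical placeholders for illustration only.
$months = 36
$byolLicenses          = 6000                      # assumed one-time Windows Server license cost
$byolSoftwareAssurance = 1500                      # assumed Software Assurance over the term
$byolHostHourly        = 1.50                      # assumed Dedicated Host hourly rate
$byolTotal = $byolLicenses + $byolSoftwareAssurance + ($byolHostHourly * 24 * 30 * $months)

$liHourly = 0.40                                   # assumed License Included hourly rate
$liTotal  = $liHourly * 24 * 30 * $months

"BYOL estimate over $months months: {0:C}" -f $byolTotal
"LI estimate over $months months:   {0:C}" -f $liTotal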
Preparing for the Walkthroughs

Overview
The remainder of this paper walks you through several activities with Windows PowerShell. You can adapt and reuse these code snippets in your own AWS account to automate the following tasks:
• Import a Windows Server virtual machine to Amazon EC2.
• Launch and terminate a Dedicated Instance using your custom AMI.
• Configure Microsoft Key Management Services (KMS) to apply user-supplied licensing.
• Allocate a Dedicated Host, launch an instance in the host using your custom AMI, and then terminate the instance and the Dedicated Host.

Important: If you choose to follow along with the remaining sections in this paper, you will be creating resources in your AWS account, which will incur billing charges.

Prerequisites
These walkthroughs assume that you have previously exported a Windows Server image (for example, from VMware as an Open Virtualization Archive, or OVA, file) and stored it in an Amazon S3 bucket in your account. VM Import/Export also supports Microsoft Hyper-V, but an OVA is referenced here as an example.

You'll need to have the AWS Tools for Windows PowerShell and grant security rights for PowerShell to access your AWS account. The easiest way to do that is to launch a Windows Server instance in Amazon EC2 with an AWS Identity and Access Management (IAM) role. You'll also need an Amazon Virtual Private Cloud (VPC), a subnet, a security group, and a key pair in the Region where you import the image. You certainly can create those in PowerShell, but it's generally more reliable to create as much of your infrastructure as possible using AWS CloudFormation. The reason is that you need to consider how to roll back your stack in case any errors occur while building it. AWS CloudFormation provides a simple mechanism to automatically roll back, so that you won't be left paying the bill for an incomplete stack after an error occurs. To roll back in PowerShell, you would need to trap potential errors at the point where each resource is created in your script and then write the code to remove or deallocate every other resource that the script had successfully created up to that point. That would get very tedious in regular PowerShell, but could be more easily handled with PowerShell Desired State Configuration (DSC).

To comply with your Windows Server license terms and implement BYOL, you'll need to have a Microsoft KMS instance running in your VPC. The walkthrough shows you how to configure the BYOL instance for Microsoft KMS, though you can proceed with this walkthrough without having Microsoft KMS running.

Finally, these walkthroughs assume that your own workstation is running Windows Server 2016, though these steps should work with other versions with minor modifications.

Walkthrough: Import Your Custom Image
1. On the Windows Start menu, choose Windows PowerShell ISE.
2. In the Windows PowerShell ISE, press Ctrl+R to show the Script Pane (or, on the View menu, choose Show Script Pane).
3. The AWS Tools for PowerShell allow you to specify the AWS Region separately in most cmdlets, but it's simpler to set the default Region for your whole session. For example, run the following commands in PowerShell to set "us-west-2" as the default Region. You'll be using the "lab_region" variable again later in this walkthrough, so make sure you set it here to your preferred Region.

$lab_region = "us-west-2"
Set-DefaultAWSRegion $lab_region
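Before continuing, you might also confirm that the exported OVA is actually present in the S3 bucket you plan to import from. This check is an addition to the original walkthrough; the bucket and file names below are placeholders. If you define $file this way, it can also serve as the S3 key used by the disk container in step 6.

# Placeholder bucket and key names; replace with your own.
$bucket = "<UniqueBucketName>"
$file   = "MyWindowsServer.ova"

# Get-S3Object returns the object's metadata (size, last modified) if it exists.
Get-S3Object -BucketName $bucket -Key $file |
    Select-Object Key, Size, LastModified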
4. To use the VM Import service role in your own AWS account, create an IAM policy document to grant access for the Amazon EC2 Import API (vmie.amazonaws.com). You must name the role "vmimport". (Note: you could create this policy in the AWS Management Console, but this example shows how to do it with a document in PowerShell.)

$importPolicyDocument = @"
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"",
         "Effect":"Allow",
         "Principal":{
            "Service":"vmie.amazonaws.com"
         },
         "Action":"sts:AssumeRole",
         "Condition":{
            "StringEquals":{
               "sts:ExternalId":"vmimport"
            }
         }
      }
   ]
}
"@

New-IAMRole -RoleName vmimport -AssumeRolePolicyDocument $importPolicyDocument

5. Associate a policy with the "vmimport" role so that VM Import/Export can access the VM image in your S3 bucket and create an AMI in Amazon EC2. If you'd like to create your own restrictive policy for security reasons, see this page for guidance: http://docs.aws.amazon.com/vm-import/latest/userguide/import-vm-image.html. AWS also provides a couple of managed (built-in) policies that make it convenient to grant the VM Import service role access to Amazon S3 and Amazon EC2.

Register-IAMRolePolicy -RoleName vmimport -PolicyArn arn:aws:iam::aws:policy/AmazonS3FullAccess
Register-IAMRolePolicy -RoleName vmimport -PolicyArn arn:aws:iam::aws:policy/AmazonEC2FullAccess

6. Create a userBucket object to define the location of your image file, and an ImageDiskContainer parameter, both of which are passed to the Import-EC2Image cmdlet. Before running these commands, replace <UniqueBucketName> with the name of the bucket where you stored the OVA file, and make sure $file contains the S3 key (file name) of that OVA. If you are importing Hyper-V, change the Format property to "VHD".

$userBucket = New-Object Amazon.EC2.Model.UserBucket
$userBucket.S3Bucket = "<UniqueBucketName>"
$userBucket.S3Key = $file
$windowsContainer = New-Object Amazon.EC2.Model.ImageDiskContainer
$windowsContainer.Format = "OVA"
$windowsContainer.UserBucket = $userBucket

7. Now create an object for the remaining parameters for the import task. Set the "Platform" parameter to match the imported operating system type. The "LicenseType" parameter controls how the imported image is configured for licensing; set it to BYOL.

$params = @{
    "ClientToken" = "MyCustomWindows_" + (Get-Date)
    "Description" = "My custom Windows image"
    "Platform"    = "Windows"
    "LicenseType" = "BYOL"
}

8. Now you're ready to start the import task. When you run this command, the import process will take about 45 minutes, but you can proceed with the remaining steps in this paper if you're willing to temporarily use other AMI IDs.

Import-EC2Image -DiskContainer $windowsContainer @params -Region $lab_region

9. You can check the progress of the import task with the following command, which will show the Progress property and the Status property. The Progress property reports the current percentage complete for the import task. The Status property indicates the migration phase.

Get-EC2ImportImageTask -Region $lab_region
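If you would rather block until the import finishes instead of checking manually, a small polling loop like the following works. This is a convenience sketch added here, not part of the original walkthrough; it relies on the Status and Progress properties described in step 9.

# Poll every 60 seconds until the import task is no longer active.
do {
    Start-Sleep -Seconds 60
    $task = Get-EC2ImportImageTask -Region $lab_region | Select-Object -First 1
    "{0}  {1}% - {2}" -f (Get-Date), $task.Progress, $task.Status
} while ($task.Status -eq "active")

# When the task completes, the resulting AMI ID is available in $task.ImageId.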
Walkthrough: Launch a Dedicated Instance
1. While waiting for your own image to be imported, you can follow the rest of the walkthroughs using an AWS AMI. All the steps work the same regardless of the AMI, except that you'll need to provide a key pair to access an AWS AMI. When you launch an instance from your own imported AMI, you don't need to provide a key pair if you already have an Administrator password. The command below obtains the AMI ID of the latest version of the AWS AMI for Windows Server 2016 ("Base" means without SQL Server). The my_ami variable will be used later, so make sure you set it here. If you run this step after your import process is complete, you can use that AMI ID instead.

$my_ami = (Get-EC2ImageByName "Windows_2016_Base").ImageId

2. Configure two variables for use when launching the instance. Setting the tenancy to "dedicated" means that you want a Dedicated Instance. With the exception of the t2 instance type, most instance types can be used for Dedicated Instances.

$tenancy_type = "dedicated"
$instance_type = "m4.large"

3. This step configures variables to store the networking parameters you'll use when you launch a new instance. Enter the Classless Inter-Domain Routing (CIDR) address of a subnet you've created in your VPC where you want to launch the new instance. If you don't provide a private IP address during launch, one will be assigned automatically within the subnet; however, you may want to script it for various reasons. The New-EC2Instance cmdlet will use this private IP address, and you will log in to the instance in the next walkthrough to configure Microsoft KMS. If your workstation is not an EC2 instance in a public subnet in the same VPC where you are launching this instance in a private subnet, then you will need to do one of the following: (a) launch the instance in a public subnet; (b) use Remote Desktop Protocol (RDP) to allow remote connections into another instance in its associated public subnet; or (c) set up a Remote Desktop Gateway in its public subnet (see Remote Desktop Gateway on the AWS Cloud: Quick Start Reference Deployment, http://docs.aws.amazon.com/quickstart/latest/rd-gateway/welcome.html).

$private_IP = "10.50.3.10"
$Subnet = "10.50.3.0/24"
$SubnetObj = Get-EC2Subnet -Filter @{Name="cidr"; Values=$Subnet}

4. Configure a variable to store the security group parameter you will use when you launch the new instance. Later in this walkthrough you'll log in to the instance through Remote Desktop to set up KMS for BYOL, so make sure the security group allows inbound RDP access from the Internet.

$SecurityGroup = "MySecurityGroup"
$SGObj = Get-EC2SecurityGroup -Filter @{Name="tag-value"; Values=$SecurityGroup}

5. Create a variable for the key pair name parameter you will use to decrypt the Administrator password for the new instance. Don't include the .pem file extension. If you are launching an imported image on which you know the Administrator password, you don't need to provide a key pair.

$key_pair = "<keypairname>"

6. Now you're ready to launch your Dedicated Instance. Many other optional parameters can be configured with this cmdlet to customize the instance, but the following is the minimum you need to launch an instance with BYOL.

$my_instance = New-EC2Instance `
    -ImageId $my_ami `
    -Tenancy $tenancy_type `
    -InstanceType $instance_type `
    -SubnetId $SubnetObj.SubnetId `
    -PrivateIpAddress $private_IP `
    -SecurityGroupId $SGObj.GroupId `
    -KeyName $key_pair

7. It's a good idea to create a Name tag for the new instance.

$Tag = New-Object Amazon.EC2.Model.Tag
$Tag.Key = 'Name'
$Tag.Value = "Server2016 Imported"
New-EC2Tag -ResourceId $my_instance.RunningInstance[0].InstanceId -Tag $Tag
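The next walkthrough decrypts the Administrator password in the AWS Management Console. As an alternative sketch, which is an addition to the paper, the AWS Tools can do the same decryption from PowerShell. The .pem path below is a placeholder, the instance must have finished initializing before password data is available, and on some module versions the -Decrypt switch is implied when -PemFile is supplied.

# Retrieve and decrypt the Windows Administrator password (placeholder key path).
Get-EC2PasswordData -InstanceId $my_instance.Instances[0].InstanceId `
    -PemFile "C:\keys\<keypairname>.pem" -Decrypt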
Walkthrough: Configure Microsoft KMS for BYOL
To comply with Microsoft licensing requirements for EC2 Dedicated Instances using the BYOL model, you must either supply a Windows license key for the instance or configure it to use Microsoft KMS on a server that you manage. In this task, you will configure the Dedicated Instance to use a manually specified Microsoft KMS. You will connect to the new instance using Windows Remote Desktop Connection. If you used an AWS AMI to launch this instance, you need to decrypt the password using the lab key pair in order to connect. If you launched this instance using your imported image, you already know the local Administrator account and password.

1. Log in to the AWS Management Console and go to the EC2 Dashboard.
2. Select only the instance you just launched with PowerShell.
3. Choose Connect.
4. In the Connect To Your Instance dialog box, choose Get Password. You might need to retry this a couple of times to give the instance a few minutes to initialize.
5. For Key Pair Path, choose Choose File (the button is named Browse in some browsers).
6. Browse to the .pem file on your local machine for the key pair you specified when launching the instance, and choose Open.
7. Choose Decrypt Password.
8. Copy the decrypted password to your clipboard buffer.
9. Run Remote Desktop Connection.
10. In the Computer box, enter the IP address of the Dedicated Instance you launched and choose Connect.
11. When prompted for credentials, log in as Administrator and paste the decrypted password from your clipboard buffer.
12. On the Remote Desktop Connection warning dialog box, choose Yes to ignore the verification warning.
13. In the Remote Desktop Connection session for the Server2016 Imported instance, when the desktop appears, choose No in the Networks dialog box to disable discovery (this is a Windows Server 2016 feature that is not available in earlier versions).
14. In the Remote Desktop Connection session for the Server2016 Imported instance, launch Windows PowerShell and run the following command to display the current configuration settings of the Microsoft KMS client.

slmgr.vbs /dlv

15. Enter the following commands to update the active Microsoft KMS configuration and confirm the change. Replace the IP address with a functioning KMS server that you have installed in your VPC. This command won't immediately fail if you don't have a running KMS instance at the given IP address.

slmgr.vbs /skms 10.50.3.100
slmgr.vbs /dlv

16. Close the Remote Desktop Connection to the Dedicated Instance and return to the workstation instance from which you launched it. Terminate the Dedicated Instance.

Remove-EC2Instance -InstanceId $my_instance.Instances[0].InstanceId -Force
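Termination takes a minute or two. Because the next walkthrough reuses the same private IP address, you can confirm the instance has actually reached the terminated state before proceeding. This check is an addition to the original steps.

# Poll the instance state until it reports 'terminated'.
do {
    Start-Sleep -Seconds 15
    $state = (Get-EC2Instance -InstanceId $my_instance.Instances[0].InstanceId).Instances[0].State.Name
    "Instance state: $state"
} while ($state -ne "terminated")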
Walkthrough: Allocate a Dedicated Host and Launch an Instance
In this task, you will launch and terminate an instance in a Dedicated Host.

1. Create variables for the Availability Zone and quantity parameters. Edit the $AZ variable appropriately before running this command.

$AZ = 'us-west-2a'
$Qty = 1
$AutoPlace = 'On'

2. Request a Dedicated Host. This reuses the $instance_type variable you created earlier, which was m4.large. Note that Dedicated Hosts are not available for all instance types.

New-EC2Hosts `
    -InstanceType $instance_type `
    -AvailabilityZone $AZ `
    -Quantity $Qty `
    -AutoPlacement $AutoPlace

3. Query the properties of your Dedicated Host. This command may initially return no data; wait a moment and retry it. This command returns the number of physical CPU cores and sockets, the total number of virtual CPUs, and the type of instance supported on your Dedicated Host.

(Get-EC2Hosts).HostProperties

4. List the instances running on your Dedicated Host. This shows that initially there are no instances running in the host.

(Get-EC2Hosts).Instances

5. Specify the tenancy type "host" to launch an instance inside the Dedicated Host.

$tenancy_type = "host"

6. Indicate the AMI ID to be deployed in the Dedicated Host. There are Microsoft licensing restrictions for Dedicated Hosts: AWS and AWS Marketplace AMIs for Windows cannot be used. Ordinarily you would specify the AMI ID of your imported image here. However, if the import task you started earlier is still running in the background, that AMI is not available yet. In order to demonstrate how to deploy instances to a Dedicated Host, you can use an Amazon Linux AMI as a placeholder for the next few tasks.

$my_ami = (Get-EC2Image -Filter @{Name = "name"; Values = "Amazon_CentOS*"}).ImageId

7. Launch the instance inside the Dedicated Host. Once again, the only difference is the requirement to provide a key pair when launching an AWS AMI.

$host_instance = New-EC2Instance `
    -ImageId $my_ami `
    -Tenancy $tenancy_type `
    -InstanceType $instance_type `
    -SubnetId $SubnetObj.SubnetId `
    -PrivateIpAddress $private_IP `
    -SecurityGroupId $SGObj.GroupId `
    -KeyName $key_pair

8. Create a Name tag for the new instance.

$Tag = New-Object Amazon.EC2.Model.Tag
$Tag.Key = 'Name'
$Tag.Value = "DedicatedHost Instance"
New-EC2Tag -ResourceId $host_instance.RunningInstance[0].InstanceId -Tag $Tag

9. List the instances running on your Dedicated Host.

(Get-EC2Hosts).Instances

10. You must terminate all instances on a Dedicated Host before you can release it.

Remove-EC2Instance -InstanceId $host_instance.Instances[0].InstanceId -Force

11. Finally, release the Dedicated Host. The command below reports successful and unsuccessful attempts to release hosts. It doesn't report success until all running instances have been terminated. Repeat this command until your host ID is listed in the Successful column (a retry loop is sketched after this walkthrough).

$dedicated_host = Get-EC2Hosts | Select-Object -First 1
Remove-EC2Hosts -HostId $dedicated_host.HostId -Force

12. Switch back to the EC2 Dashboard in your browser. In the navigation pane, choose Dedicated Hosts to confirm that the DedicatedHost Instance has been terminated. You might need to refresh the console display.
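As a convenience, the terminate-and-release sequence from steps 10 and 11 can be wrapped in a retry loop so you don't have to rerun the release command by hand. This is a sketch added here, not part of the original paper; it assumes the same variables used above and relies on the Successful list that the release command returns, as described in step 11.

# Terminate the instance, then keep trying to release the host until the
# release call reports it in the Successful list.
Remove-EC2Instance -InstanceId $host_instance.Instances[0].InstanceId -Force

$dedicated_host = Get-EC2Hosts | Select-Object -First 1
do {
    Start-Sleep -Seconds 30
    $result = Remove-EC2Hosts -HostId $dedicated_host.HostId -Force
    "Released so far: $($result.Successful -join ', ')"
} while ($result.Successful -notcontains $dedicated_host.HostId)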
Conclusion
This paper has demonstrated how to use Windows PowerShell and VM Import/Export to import a custom Windows Server image into Amazon EC2. You can adapt and reuse the PowerShell code snippets to automate the process in your own AWS account. In addition to VM Import/Export, consider using the AWS Server Migration Service. It currently supports VMware vCenter, and support for additional image formats is coming soon.

Contributors
The following individuals and organizations contributed to this document:
• Scott Zimmerman, Solutions Architect, AWS

Further Reading
For additional information, please consult the following sources:
• Getting Started with Amazon EC2 Windows Instances: http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2Win_GetStarted.html
Infrastructure as Code
July 2017

This paper has been archived. For the latest technical content about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Introduction to Infrastructure as Code
The Infrastructure Resource Lifecycle
Resource Provisioning: AWS CloudFormation; Summary
Configuration Management: Amazon EC2 Systems Manager; AWS OpsWorks for Chef Automate; Summary
Monitoring and Performance: Amazon CloudWatch; Summary
Governance and Compliance: AWS Config; AWS Config Rules; Summary
Resource Optimization: AWS Trusted Advisor; Summary
Next Steps
Conclusion
Contributors
Resources

Abstract
Infrastructure as Code has emerged as a best practice for automating the provisioning of infrastructure services. This paper describes the benefits of Infrastructure as Code and how to leverage the capabilities of Amazon Web Services in this realm to support DevOps initiatives. DevOps is the combination of cultural philosophies, practices, and tools that increases your organization's ability to deliver applications and services at high velocity. This enables your organization to be more responsive to the needs of your customers. The practice of Infrastructure as Code can be a catalyst that makes attaining such a velocity possible.

Introduction to Infrastructure as Code
Infrastructure management is a process associated with software engineering. Organizations have traditionally "racked and stacked" hardware, and then installed and configured operating systems and applications to support their technology needs. Cloud computing takes advantage of virtualization to enable the on-demand provisioning of compute, network, and storage resources that constitute technology infrastructures. Infrastructure managers have often performed such provisioning manually. The manual processes have certain disadvantages, including:
• Higher cost, because they require human capital that could otherwise go toward more important business needs.
• Inconsistency due to human error, leading to deviations from configuration standards.
• Lack of agility, by limiting the speed at which your organization can release new versions of services in response to customer needs and market drivers.
• Difficulty in attaining and maintaining compliance to corporate or industry standards, due to the absence of repeatable processes.

Infrastructure as Code addresses these deficiencies by bringing automation to the provisioning process. Rather than relying on manually performed steps, both administrators and developers can instantiate infrastructure using configuration files. Infrastructure as Code treats these configuration files as software code. These files can be used to produce a set of artifacts, namely the compute, storage, network, and application services that comprise an operating environment. Infrastructure as Code eliminates configuration drift through automation, thereby increasing the speed and agility of infrastructure deployments.
The Infrastructure Resource Lifecycle
In the previous section, we presented Infrastructure as Code as a way of provisioning resources in a repeatable and consistent manner. The underlying concepts are also relevant to the broader roles of infrastructure technology operations. Consider the following diagram.

[Figure 1: Infrastructure resource lifecycle]

Figure 1 illustrates a common view of the lifecycle of infrastructure resources in an organization. The stages of the lifecycle are as follows:
1. Resource provisioning. Administrators provision the resources according to the specifications they want.
2. Configuration management. The resources become components of a configuration management system that supports activities such as tuning and patching.
3. Monitoring and performance. Monitoring and performance tools validate the operational status of the resources by examining items such as metrics, synthetic transactions, and log files.
4. Compliance and governance. Compliance and governance frameworks drive additional validation to ensure alignment with corporate and industry standards, as well as regulatory requirements.
5. Resource optimization. Administrators review performance data and identify changes needed to optimize the environment around criteria such as performance and cost management.

Each stage involves procedures that can leverage code. This extends the benefits of Infrastructure as Code from its traditional role in provisioning to the entire resource lifecycle, and every stage of the lifecycle then benefits from the consistency and repeatability that Infrastructure as Code offers. This expanded view of Infrastructure as Code results in a higher degree of maturity in the Information Technology (IT) organization as a whole. In the following sections, we explore each stage of the lifecycle: provisioning, configuration management, monitoring and performance, governance and compliance, and optimization. We will consider the various tasks associated with each stage and discuss how to accomplish those tasks using the capabilities of Amazon Web Services (AWS).

Resource Provisioning
The information resource lifecycle begins with resource provisioning. Administrators can use the principle of Infrastructure as Code to streamline the provisioning process. Consider the following situations:
• A release manager needs to build a replica of a cloud-based production environment for disaster recovery purposes. The administrator designs a template that models the production environment and provisions identical infrastructure in the disaster recovery location.
• A university professor wants to provision resources for classes each semester. The students in the class need an environment that contains the appropriate tools for their studies. The professor creates a template with the appropriate infrastructure components and then instantiates the template resources for each student as needed.
• A service that has to meet certain industry protection standards requires infrastructure with a set of security controls each time the service is installed. The security administrator integrates the security controls into the configuration template so that the security controls are instantiated with the infrastructure.
• The manager of a software project team needs to provide development environments for programmers that include the necessary tools and the ability to interface with a continuous integration platform. The manager creates a template of the resources and publishes the template in a resource catalog. This enables the team members to provision their own environments as needed.

These situations have one thing in common: the need for a repeatable process for instantiating resources consistently. Infrastructure as Code provides the framework for such a process. To address this need, AWS offers AWS CloudFormation.
AWS CloudFormation
AWS CloudFormation gives developers and systems administrators an easy way to create, manage, provision, and update a collection of related AWS resources in an orderly and predictable way. AWS CloudFormation uses templates written in JSON or YAML format to describe the collection of AWS resources (known as a stack), their associated dependencies, and any required runtime parameters. You can use a template repeatedly to create identical copies of the same stack consistently across AWS Regions. After deploying the resources, you can modify and update them in a controlled and predictable way. In effect, you are applying version control to your AWS infrastructure the same way you do with your application code.

Template Anatomy
Figure 2 shows a basic AWS CloudFormation YAML-formatted template fragment. Templates contain parameters, resource declarations, and outputs. Templates can reference the outputs of other templates, which enables modularization.

AWSTemplateFormatVersion: "version date"
Description:
  String
Parameters:
  set of parameters
Mappings:
  set of mappings
Conditions:
  set of conditions
Transform:
  set of transforms
Resources:
  set of resources
Outputs:
  set of outputs

Figure 2: Structure of an AWS CloudFormation YAML template

Figure 3 is an example of an AWS CloudFormation template. The template requests the name of an Amazon Elastic Compute Cloud (EC2) key pair from the user in the parameters section. The resources section of the template then creates an EC2 instance using that key pair, with an EC2 security group that enables HTTP (TCP port 80) access.

Parameters:
  KeyName:
    Description: The EC2 key pair to allow SSH access to the instance
    Type: AWS::EC2::KeyPair::KeyName
Resources:
  Ec2Instance:
    Type: AWS::EC2::Instance
    Properties:
      SecurityGroups:
        - !Ref InstanceSecurityGroup
      KeyName: !Ref KeyName
      ImageId: ami-70065467
  InstanceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Enable HTTP access via port 80
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0

Figure 3: Example of an AWS CloudFormation YAML template

Change Sets
You can update AWS CloudFormation templates with application source code to add, modify, or delete stack resources. The change sets feature enables you to preview proposed changes to a stack without performing the associated updates. You can control the ability to create and view change sets using AWS Identity and Access Management (IAM). You can allow some developers to create and preview change sets while reserving the ability to update stacks or execute change sets to a select few. For example, you could allow a developer to see the impact of a template change before promoting that change to the testing stage.

There are three primary phases associated with the use of change sets:
1. Create the change set. To create a change set for a stack, submit the changes to the template or parameters to AWS CloudFormation. AWS CloudFormation generates a change set by comparing the current stack with your changes.
2. View the change set. You can use the AWS CloudFormation console, AWS CLI, or AWS CloudFormation API to view change sets. The AWS CloudFormation console provides a summary of the changes and a detailed list of changes in JSON format. The AWS CLI and AWS CloudFormation API return a detailed list of changes in JSON format.
3. Execute the change set. You can select and execute the change set in the AWS CloudFormation console, use the aws cloudformation execute-change-set command in the AWS CLI, or call the ExecuteChangeSet API.

The change sets capability allows you to go beyond version control in AWS CloudFormation by enabling you to keep track of what will actually change from one version to the next. Developers and administrators can gain more insight into the impact of changes before promoting them and can minimize the risk of introducing errors.
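For readers who prefer scripting these three phases, here is a hedged sketch using the CloudFormation cmdlets in the AWS Tools for PowerShell. The stack name, change set name, and template path are placeholders, and the cmdlet and parameter names should be checked against your module version.

# 1. Create a change set from a modified template (names are placeholders).
New-CFNChangeSet -StackName "my-stack" -ChangeSetName "preview-update" `
    -TemplateBody (Get-Content -Raw "updated_template.yaml")

# 2. View the proposed resource changes before touching the stack.
(Get-CFNChangeSet -StackName "my-stack" -ChangeSetName "preview-update").Changes

# 3. Execute the change set once it has been reviewed.
Start-CFNChangeSet -StackName "my-stack" -ChangeSetName "preview-update"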
Reusable Templates
Many programming languages offer ways to modularize code with constructs such as functions and subroutines. Similarly, AWS CloudFormation offers multiple ways to manage and organize your stacks. Although you can maintain all your resources within a single stack, large single-stack templates can become difficult to manage, and there is a greater possibility of encountering a number of AWS CloudFormation limits. When designing the architecture of your AWS CloudFormation stacks, you can group the stacks logically by function. Instead of creating a single template that includes all the resources you need, such as virtual private clouds (VPCs), subnets, and security groups, you can use nested stacks or cross-stack references.

The nested stack feature allows you to create a new AWS CloudFormation stack resource within an AWS CloudFormation template and establish a parent-child relationship between the two stacks. Each time you create an AWS CloudFormation stack from the parent template, AWS CloudFormation also creates a new child stack. This approach allows you to share infrastructure code across projects while maintaining completely separate stacks for each project.

Cross-stack references enable an AWS CloudFormation stack to export values that other AWS CloudFormation stacks can then import. Cross-stack references promote a service-oriented model with loose coupling that allows you to share a single set of resources across multiple projects.

Template Linting
As with application code, AWS CloudFormation templates should go through some form of static analysis, also known as linting. The goal of linting is to determine whether the code is syntactically correct, identify potential errors, and evaluate adherence to specific style guidelines. In AWS CloudFormation, linting validates that a template is correctly written in either JSON or YAML. AWS CloudFormation provides the ValidateTemplate API that checks for proper JSON or YAML syntax. If the check fails, AWS CloudFormation returns a template validation error. For example, you can run the following command to validate a template stored in Amazon Simple Storage Service (Amazon S3):

aws cloudformation validate-template --template-url \
    s3://examplebucket/example_template.template

You can also use third-party validation tools. For example, cfn-nag performs additional evaluations on templates to look for potential security concerns. Another tool, cfn-check, performs deeper checks on resource specifications to identify potential errors before they emerge during stack creation.
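The same ValidateTemplate API is exposed in the AWS Tools for PowerShell as Test-CFNTemplate, so template linting can also be wired into a PowerShell-based pipeline. The bucket URL and file name below are placeholders.

# Validate a template stored in S3; a syntax problem raises an error,
# otherwise the declared parameters and required capabilities are returned.
Test-CFNTemplate -TemplateURL "https://s3.amazonaws.com/examplebucket/example_template.template"

# Or validate a local template file instead.
Test-CFNTemplate -TemplateBody (Get-Content -Raw "example_template.template")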
Best Practices
The AWS CloudFormation User Guide provides a list of best practices for designing and implementing AWS CloudFormation templates. We provide links to these practices below.

Planning and organizing
• Organize Your Stacks By Lifecycle and Ownership
• Use IAM to Control Access
• Reuse Templates to Replicate Stacks in Multiple Environments
• Use Nested Stacks to Reuse Common Template Patterns
• Use Cross-Stack References to Export Shared Resources

Creating templates
• Do Not Embed Credentials in Your Templates
• Use AWS-Specific Parameter Types
• Use Parameter Constraints
• Use AWS::CloudFormation::Init to Deploy Software Applications on Amazon EC2 Instances
• Use the Latest Helper Scripts
• Validate Templates Before Using Them
• Use Parameter Store to Centrally Manage Parameters in Your Templates

Managing stacks
• Manage All Stack Resources Through AWS CloudFormation
• Create Change Sets Before Updating Your Stacks
• Use Stack Policies
• Use AWS CloudTrail to Log AWS CloudFormation Calls
• Use Code Reviews and Revision Controls to Manage Your Templates
• Update Your Amazon EC2 Linux Instances Regularly

Summary
The information resource lifecycle starts with the provisioning of resources. AWS CloudFormation provides a template-based way of creating infrastructure and managing the dependencies between resources during the creation process. With AWS CloudFormation, you can maintain your infrastructure just like application source code.

Configuration Management
Once you provision your infrastructure resources and that infrastructure is up and running, you must address the ongoing configuration management needs of the environment. Consider the following situations:
• A release manager wants to deploy a version of an application across a group of servers and perform a rollback if there are problems.
• A system administrator receives a request to install a new operating system package in developer environments but leave the other environments untouched.
• An application administrator needs to periodically update a configuration file across all servers housing an application.

One way to address these situations is to return to the provisioning stage, provision fresh resources with the required changes, and dispose of the old resources. This approach, also known as infrastructure immutability, ensures that the provisioned resources are built anew from the code baseline each time a change is made. This eliminates configuration drift. There are times, however, when you might want to take a different approach. In environments that have high levels of durability, it might be preferable to have ways to make incremental changes to the current resources instead of reprovisioning them. To address this need, AWS offers Amazon EC2 Systems Manager and AWS OpsWorks for Chef Automate.
Amazon EC2 Systems Manager
Amazon EC2 Systems Manager is a collection of capabilities that simplifies common maintenance, management, deployment, and execution of operational tasks on EC2 instances and on servers or virtual machines (VMs) in on-premises environments. Systems Manager helps you easily understand and control the current state of your EC2 instance and OS configurations. You can track and remotely manage system configuration, OS patch levels, application configurations, and other details about deployments as they occur over time. These capabilities help with automating complex and repetitive tasks, defining system configurations, preventing drift, and maintaining software compliance of both Amazon EC2 and on-premises configurations.

Table 1 lists the tasks that Systems Manager simplifies.

• Run Command – Manage the configuration of managed instances at scale by distributing commands across a fleet.
• Inventory – Automate the collection of the software inventory from managed instances.
• State Manager – Keep managed instances in a defined and consistent state.
• Maintenance Window – Define a maintenance window for running administrative tasks.
• Patch Manager – Deploy software patches automatically across groups of instances.
• Automation – Perform common maintenance and deployment tasks, such as updating Amazon Machine Images (AMIs).
• Parameter Store – Store, control access to, and retrieve configuration data, whether plain-text data such as database strings or secrets such as passwords, encrypted through AWS Key Management Service (KMS).

Table 1: Amazon EC2 Systems Manager tasks

Document Structure
A Systems Manager document defines the actions that Systems Manager performs on your managed instances. Systems Manager includes more than a dozen preconfigured documents to support the capabilities listed in Table 1. You can also create custom, version-controlled documents to augment the capabilities of Systems Manager. You can set a default version and share it across AWS accounts. Steps in the document specify the execution order. All documents are written in JSON and include both parameters and actions. As with AWS OpsWorks for Chef Automate, documents for Systems Manager become part of the infrastructure code base, bringing Infrastructure as Code to configuration management.

The following is an example of a custom document for a Windows-based host. The document uses the ipconfig command to gather the network configuration of the node and then installs MySQL.

{
  "schemaVersion": "2.0",
  "description": "Sample version 2.0 document v2",
  "parameters": {},
  "mainSteps": [
    {
      "action": "aws:runPowerShellScript",
      "name": "runShellScript",
      "inputs": {
        "runCommand": ["ipconfig"]
      }
    },
    {
      "action": "aws:applications",
      "name": "installapp",
      "inputs": {
        "action": "Install",
        "source": "http://dev.mysql.com/get/Downloads/MySQLInstaller/mysql-installer-community-5.6.22.0.msi"
      }
    }
  ]
}

Figure 4: Example of a Systems Manager document

Best Practices
The best practices for each of the Systems Manager capabilities appear below.

Run Command
• Improve your security posture by leveraging Run Command to access your EC2 instances instead of SSH/RDP.
• Audit all API calls made by or on behalf of Run Command using AWS CloudTrail.
• Use the rate control feature in Run Command to perform a staged command execution.
• Use fine-grained access permissions for Run Command (and all Systems Manager capabilities) by using AWS Identity and Access Management (IAM) policies.
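For readers following along in PowerShell, Run Command can be driven with Send-SSMCommand and the preconfigured AWS-RunPowerShellScript document. This sketch is an addition to the paper; the instance ID is a placeholder and the target must be a Systems Manager managed instance.

# Distribute an ad hoc command to a managed instance (placeholder ID).
$cmd = Send-SSMCommand -InstanceId "i-0123456789abcdef0" `
    -DocumentName "AWS-RunPowerShellScript" `
    -Parameter @{ commands = @("ipconfig") }

# Check the status and output of the invocation afterward.
Get-SSMCommandInvocation -CommandId $cmd.CommandId -Details $true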
Inventory
• Use Inventory in combination with AWS Config to audit your application configurations over time.

State Manager
• Update the SSM agent periodically (at least once a month) using the preconfigured AWS-UpdateSSMAgent document.
• Bootstrap EC2 instances on launch using EC2Config for Windows.
• (Specific to Windows) Upload the PowerShell or Desired State Configuration (DSC) module to Amazon S3 and use AWS-InstallPowerShellModule.
• Use tags to create application groups, and then target instances using the Targets parameter instead of specifying individual instance IDs.
• Automatically remediate findings generated by Amazon Inspector by using Systems Manager.
• Use a centralized configuration repository for all of your Systems Manager documents, and share documents across your organization.

Maintenance Windows
• Define a schedule for performing disruptive actions on your instances, such as OS patching, driver updates, or software installs.

Patch Manager
• Use Patch Manager to roll out patches at scale and to increase fleet compliance visibility across your EC2 instances.

Automation
• Create self-serviceable runbooks for infrastructure as Automation documents.
• Use Automation to simplify creating AMIs from the AWS Marketplace or custom AMIs, using public documents or authoring your own workflows.
• Use the AWS-UpdateLinuxAmi or AWS-UpdateWindowsAmi documents, or create a custom Automation document, to build and maintain images.

Parameter Store
• Use Parameter Store to manage global configuration settings in a centralized manner.
• Use Parameter Store for secrets management, encrypted through AWS KMS.
• Use Parameter Store with Amazon EC2 Container Service (ECS) task definitions to store secrets.
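As a small illustration of the Parameter Store capability (added here, not part of the original paper), the following PowerShell sketch writes an encrypted parameter and reads it back. The parameter name and value are placeholders.

# Store a secret as a SecureString (encrypted with the default KMS key).
Write-SSMParameter -Name "myapp-db-password" -Type "SecureString" -Value "placeholder-secret"

# Retrieve and decrypt it when a script or application needs it.
(Get-SSMParameter -Name "myapp-db-password" -WithDecryption $true).Value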
AWS OpsWorks for Chef Automate
AWS OpsWorks for Chef Automate brings the capabilities of Chef, a configuration management platform, to AWS. OpsWorks for Chef Automate further builds on Chef's capabilities by providing additional features that support DevOps practices at scale. Chef is based on the concept of recipes, configuration scripts written in the Ruby language that perform tasks such as installing services. Chef recipes, like AWS CloudFormation templates, are a form of source code that can be version controlled, thereby extending the principle of Infrastructure as Code to the configuration management stage of the resource lifecycle. OpsWorks for Chef Automate provides three key capabilities that you can configure to support DevOps practices: workflow, compliance, and visibility.

Workflow
You can use a workflow in OpsWorks for Chef Automate to coordinate development, test, and deployment. The workflow includes quality gates that enable users with the appropriate privileges to promote code between phases of the release management process. This capability can be very useful in supporting collaboration between teams. Each team can implement its own gates to ensure compatibility between the projects of each team.

Compliance
OpsWorks for Chef Automate provides features that can assist you with organizational compliance as part of configuration management. Chef Automate can provide reports that highlight matters associated with compliance and risk. You can also leverage profiles from well-known groups such as the Center for Internet Security (CIS).

Visibility
OpsWorks for Chef Automate provides visibility into the state of workflows and compliance within projects. A Chef user can create and view dashboards that provide information about related events and can query the events through a user interface.

Recipe Anatomy
A Chef recipe consists of a set of resource definitions. The definitions describe the desired state of the resources and how Chef can bring them to that state. Chef supports over 60 resource types. A list of common resource types appears below.

• Bash – Execute a script using the bash interpreter.
• Directory – Manage directories.
• Execute – Execute a single command.
• File – Manage files.
• Git – Manage source resources in Git repositories.
• Group – Manage groups.
• Package – Manage packages.
• Route – Manage a Linux route table entry.
• Service – Manage a service.
• User – Manage users.

Table 2: Common Chef resources

The following is an example of a Chef recipe. This example defines a resource based on the installation of the Apache web server. The resource definition uses the case operator to examine the value of node[:platform] and check the underlying operating system. The action :install directive brings the resource to the desired state (that is, it installs the package).

package 'apache2' do
  case node[:platform]
  when 'centos', 'redhat', 'fedora', 'amazon'
    package_name 'httpd'
  when 'debian', 'ubuntu'
    package_name 'apache2'
  end
  action :install
end

Figure 5: Example of a Chef recipe

Recipe Linting and Testing
A variety of tools are available from both Chef and the Chef user community that support linting (syntax checking) and unit and integration testing. We highlight some of the most common platforms in the following sections.

Linting with Rubocop and Foodcritic
Linting can be done on infrastructure code such as Chef recipes using tools such as Rubocop and Foodcritic. Rubocop performs static analysis on Chef recipes based on the Ruby style guide (Ruby is the language used to create Chef recipes). This tool is part of the Chef Development Kit and can be integrated into the software development workflow. Foodcritic checks Chef recipes for common syntax errors based on a set of built-in rules, which can be extended by community contributions.

Unit Testing with ChefSpec
ChefSpec can provide unit testing on Chef cookbooks. These tests can determine whether Chef is being asked to do the appropriate tasks to accomplish the desired goals. ChefSpec requires a configuration test specification that is then evaluated against a recipe. For example, ChefSpec would not actually check whether Chef installed the Apache package, but instead checks whether a Chef recipe asked to install Apache. The goal of the test is to validate whether the recipe reflects the intentions of the programmer.

Integration Testing with Test Kitchen
Test Kitchen is a testing platform that creates test environments and then uses bussers, which are test frameworks, to validate the creation of the resources specified in the Chef recipes. By leveraging the previous testing tools in conjunction with OpsWorks for Chef Automate workflow capabilities, developers can automate the testing of their infrastructures during the development lifecycle. These tests are a form of code themselves and are another key part of the Infrastructure as Code approach to deployments.
AWS OpsWorks for Chef Automate:
• Consider storing your Chef recipes in an Amazon S3 archive. Amazon S3 is highly reliable and durable. Explicitly version each archive file by using a naming convention, or use Amazon S3 versioning, which provides an audit trail and an easy way to revert to an earlier version.
• Establish a backup schedule that meets your organizational governance requirements.
• Use IAM to limit access to the OpsWorks for Chef Automate API calls.

Summary
Amazon EC2 Systems Manager lets you deploy, customize, enforce, and audit an expected state configuration to your EC2 instances and to servers or VMs in your on-premises environment. AWS OpsWorks for Chef Automate enables you to use Chef recipes to support the configuration of an environment. You can use OpsWorks for Chef Automate independently or on top of an environment provisioned by AWS CloudFormation. The run documents and policies associated with Systems Manager and the recipes associated with OpsWorks for Chef Automate can become part of the infrastructure code base and be controlled just like application source code.

Monitoring and Performance
Having reviewed the role of Infrastructure as Code in the provisioning of infrastructure resources and configuration management, we now look at infrastructure health. Consider how the following events could affect the operation of a website during periods of peak demand:
• Users of a web application are experiencing timeouts because of latency of the load balancer, making it difficult to browse the product catalogs.
• An application server experiences performance degradation due to insufficient CPU capacity and can no longer process new orders.
• A database that tracks session state doesn't have enough throughput. This causes delays as users transition through the various stages of an application.

These situations describe operational problems arising from infrastructure resources that don't meet their performance expectations. It's important to capture key metrics to assess the health of the environment and take corrective action when problems arise. Metrics provide visibility. With metrics, your organization can respond automatically to events. Without metrics, your organization is blind to what is happening in its infrastructure, thereby requiring human intervention to address all issues. With scalable and loosely coupled systems written in multiple languages and frameworks, it can be difficult to capture the relevant metrics and logs and respond accordingly. To address this need, AWS offers the Amazon CloudWatch services.55

Amazon CloudWatch
Amazon CloudWatch is a set of services that ingests, interprets, and responds to runtime metrics, logs, and events. CloudWatch automatically collects metrics from many AWS services, such as Amazon EC2, Elastic Load Balancing (ELB), and Amazon DynamoDB.56, 57, 58 Responses can include built-in actions such as sending notifications, or custom actions handled by AWS Lambda, a serverless, event-driven compute platform.59 The code for Lambda functions becomes part of the infrastructure code base, thereby extending Infrastructure as Code to the operational level. CloudWatch consists of three services: the main CloudWatch service, Amazon CloudWatch Logs, and Amazon CloudWatch Events. We now consider each of these in more detail.

Amazon CloudWatch
The main Amazon CloudWatch service collects and tracks metrics for many AWS services, such as Amazon EC2, ELB, DynamoDB, and Amazon Relational Database Service (RDS). You can also create custom metrics for services that you develop, such as applications.
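As a minimal sketch of how such a custom metric might be published with the AWS SDK for Python (Boto3) — the namespace, metric name, and dimension shown here are illustrative assumptions rather than values from this paper:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Publish one data point for a hypothetical application-level metric.
# Namespace, metric name, and dimension are placeholder values.
cloudwatch.put_metric_data(
    Namespace='MyApplication',
    MetricData=[
        {
            'MetricName': 'OrdersProcessed',
            'Dimensions': [{'Name': 'Environment', 'Value': 'production'}],
            'Value': 42,
            'Unit': 'Count'
        }
    ]
)

A metric published this way can then drive the same alarm and response mechanisms as the metrics CloudWatch collects automatically.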
CloudWatch issues alarms when metrics reach a given threshold over a period of time. Here are some examples of metrics and potential responses that could apply to the situations mentioned at the start of this section:
• If the latency of ELB exceeds five seconds over two minutes, send an email notification to the systems administrators.
• When the average EC2 instance CPU usage exceeds 60 percent for three minutes, launch another EC2 instance.
• Increase the capacity units of a DynamoDB table when excessive throttling occurs.

You can implement responses to metrics-based alarms using built-in notifications or by writing custom Lambda functions in Python, Node.js, Java, or C#. Figure 6 shows an example of how a CloudWatch alarm uses Amazon Simple Notification Service (Amazon SNS) to trigger a DynamoDB capacity update.

Figure 6: Example of a CloudWatch alarm flow (an alarm on ThrottledEvents > 2 over 5 minutes publishes to an SNS topic, which invokes a Lambda function that calls the DynamoDB UpdateTable API)

Amazon CloudWatch Logs
Amazon CloudWatch Logs monitors and stores logs from Amazon EC2, AWS CloudTrail, and other sources. EC2 instances can ship logging information using the CloudWatch Logs Agent and logging tools such as Logstash, Graylog, and Fluentd.60 Logs stored in Amazon S3 can be sent to CloudWatch Logs by configuring an Amazon S3 event to trigger a Lambda function.

Ingested log data can be the basis for new CloudWatch metrics that can, in turn, trigger CloudWatch alarms. You can use this capability to monitor any resource that generates logs without writing any code whatsoever. If you need a more advanced response procedure, you can create a Lambda function to take the appropriate actions. For example, a Lambda function can use the SES SendEmail or SNS Publish APIs to publish information to a Slack channel when NullPointerException errors appear in production logs.61, 62 Log processing and correlation allow for deeper analysis of application behaviors and can expose internal details that are hard to figure out from metrics. CloudWatch Logs provides both the storage and analysis of logs and the processing to enable data-driven responses to operational issues.

Amazon CloudWatch Events
Amazon CloudWatch Events produces a stream of events from changes to AWS environments, applies a rules engine, and delivers matching events to specified targets. Examples of events that can be streamed include EC2 instance state changes, Auto Scaling actions, API calls published by CloudTrail, AWS console sign-ins, AWS Trusted Advisor optimization notifications, custom application-level events, and time-scheduled actions. Targets can include built-in actions such as SNS notifications or custom responses using Lambda functions.

The ability of an infrastructure to respond to selected events offers benefits in both operations and security. From the operations perspective, events can automate maintenance activities without having to manage a separate scheduling system. With regard to information security, events can provide notifications of console logins, authentication failures, and risky API calls recorded by CloudTrail. In both realms, incorporating event responses into the infrastructure code promotes a greater degree of self-healing and a higher level of operational maturity.
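As an illustrative sketch of wiring up such an event response with Boto3 — the rule name and SNS topic ARN below are placeholder assumptions, and the topic's access policy would also need to allow CloudWatch Events to publish to it:

import boto3

events = boto3.client('events')

# Match EC2 instance state-change notifications (hypothetical rule name).
events.put_rule(
    Name='ec2-state-change',
    EventPattern='{"source": ["aws.ec2"], '
                 '"detail-type": ["EC2 Instance State-change Notification"]}',
    State='ENABLED'
)

# Deliver matching events to an SNS topic (placeholder ARN).
events.put_targets(
    Rule='ec2-state-change',
    Targets=[{
        'Id': 'notify-ops',
        'Arn': 'arn:aws:sns:us-east-1:123456789012:ops-alerts'
    }]
)

Defining rules and targets in code like this keeps event responses under the same version control as the rest of the infrastructure.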
Best Practices
Here are some recommendations for best practices related to monitoring:
• Ensure that all AWS resources are emitting metrics.
• Create CloudWatch alarms for metrics that provide the appropriate responses as metric-related events arise.
• Send logs from AWS resources, including Amazon S3 and Amazon EC2, to CloudWatch Logs for analysis using log stream triggers and Lambda functions.
• Schedule ongoing maintenance tasks with CloudWatch and Lambda.
• Use CloudWatch custom events to respond to application-level issues.

Summary
Monitoring is essential to understand systems behavior and to automate data-driven reactions. CloudWatch collects observations from runtime environments in the form of metrics and logs and makes those actionable through alarms, streams, and events. Lambda functions written in Python, Node.js, Java, or C# can respond to events, thereby extending the role of Infrastructure as Code to the operational realm and improving the resiliency of operating environments.

Governance and Compliance
Having considered how you can use Infrastructure as Code to monitor the health of your organization's environments, we now turn our focus to technology governance and compliance. Many organizations require visibility into their infrastructures to address industry or regulatory requirements. The dynamic provisioning capabilities of the cloud pose special challenges because visibility and governance must be maintained as resources are added, removed, or updated. Consider the following situations:
• A user is added to a privileged administration group, and the IT organization is unable to explain when this occurred.
• The network access rules restricting remote management to a limited set of IP addresses are modified to allow access from additional locations.
• The RAM and CPU configurations for several servers have unexpectedly doubled, resulting in a much larger bill than in previous months.

Although you have visibility into the current state of your AWS resource configurations using the AWS CLI and API calls, addressing these situations requires the ability to look at how those resources have changed over time. To address this need, AWS offers the AWS Config service.63

AWS Config
AWS Config enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config automatically builds an inventory of your resources and tracks changes made to them. Figure 7 shows an example of an AWS Config inventory of EC2 instances.

Figure 7: Example of an AWS Config resource inventory

AWS Config also provides a clear view of the resource change timeline, including changes in both the resource configurations and the associations of those resources to other AWS resources. Figure 8 shows the information maintained by AWS Config for a VPC resource.

Figure 8: Example of an AWS Config resource timeline

When many different resources are changing frequently and automatically, automating compliance can become as important as automating the delivery pipeline. To respond to changes in the environment, you can use AWS Config rules.

AWS Config Rules
With AWS Config rules, every change triggers an evaluation by the rules associated with the resources. AWS provides a collection of managed rules for common requirements, such as IAM users having good passwords, groups, and policies, or for determining if EC2 instances are on the correct VPCs and security groups. AWS Config rules can quickly identify noncompliant resources and help with reporting and remediation.
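As a minimal sketch — not taken from this paper — of enabling one of these managed rules with Boto3; the rule name is arbitrary, and IAM_PASSWORD_POLICY is assumed to be the identifier of the corresponding AWS managed rule:

import boto3

config = boto3.client('config')

# Enable an AWS managed rule that evaluates the account password policy.
config.put_config_rule(
    ConfigRule={
        'ConfigRuleName': 'iam-password-policy-check',  # placeholder name
        'Source': {
            'Owner': 'AWS',
            'SourceIdentifier': 'IAM_PASSWORD_POLICY'
        }
    }
)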
For validations beyond those provided by the managed rules, AWS Config rules also support the creation of custom rules using Lambda functions.64 These rules become part of the infrastructure code base, thus bringing the concept of Infrastructure as Code to the governance and compliance stages of the information resource lifecycle.

Rule Structure
When a custom rule is invoked through AWS Config rules, the associated Lambda function receives the configuration events, processes them, and returns results. The following function determines if Amazon Virtual Private Cloud (Amazon VPC) flow logs are enabled on a given Amazon VPC.

import boto3
import json

def evaluate_compliance(config_item, vpc_id):
    if config_item['resourceType'] != 'AWS::EC2::VPC':
        return 'NOT_APPLICABLE'
    elif is_flow_logs_enabled(vpc_id):
        return 'COMPLIANT'
    else:
        return 'NON_COMPLIANT'

def is_flow_logs_enabled(vpc_id):
    ec2 = boto3.client('ec2')
    response = ec2.describe_flow_logs(
        Filter=[{'Name': 'resource-id', 'Values': [vpc_id]}]
    )
    if len(response['FlowLogs']) != 0:
        return True

def lambda_handler(event, context):
    invoking_event = json.loads(event['invokingEvent'])
    compliance_value = 'NOT_APPLICABLE'
    vpc_id = invoking_event['configurationItem']['resourceId']
    compliance_value = evaluate_compliance(
        invoking_event['configurationItem'], vpc_id)
    config = boto3.client('config')
    response = config.put_evaluations(
        Evaluations=[
            {
                'ComplianceResourceType':
                    invoking_event['configurationItem']['resourceType'],
                'ComplianceResourceId': vpc_id,
                'ComplianceType': compliance_value,
                'OrderingTimestamp':
                    invoking_event['configurationItem']['configurationItemCaptureTime']
            }
        ],
        ResultToken=event['resultToken'])

Figure 9: Example of a Lambda function to support AWS Config rules

In this example, when a configuration event on an Amazon VPC occurs, the event passes to the function lambda_handler. This code extracts the ID of the Amazon VPC and uses the describe_flow_logs API call to determine whether the flow logs are enabled. The Lambda function returns a value of COMPLIANT if the flow logs are enabled and NON_COMPLIANT otherwise.

Best Practices
Here are some recommendations for implementing AWS Config in your environments:
• Enable AWS Config for all Regions to record the configuration item history and to facilitate auditing and compliance tracking.
• Implement a process to respond to changes detected by AWS Config. This could include email notifications and the use of AWS Config rules to respond to changes programmatically.

Summary
AWS Config extends the concept of infrastructure code into the realm of governance and compliance. AWS Config can continuously record the configuration of resources, while AWS Config rules allow for event-driven responses to changes in the configuration of tracked resources. You can use this capability to assist your organization with the monitoring of compliance controls.

Resource Optimization
We now focus on the final stage in the information resource lifecycle: resource optimization. In this stage, administrators review performance data and identify changes needed to optimize the environment around criteria such as security, performance, and cost management. It's important for all application stakeholders to regularly evaluate the infrastructure to determine if it is being used optimally. Consider the following questions:
• Are there provisioned resources that are underutilized?
• Are there ways to reduce the charges associated with the operating environment?
• Are there any suggestions for improving the performance of the provisioned resources?
• Are there any service limits that apply to the resources used in the environment, and if so, is the current usage of resources close to exceeding these limits?

To answer these questions, we need a way to interrogate the operating environment, retrieve data related to optimization, and use that data to make meaningful decisions. To address this need, AWS offers AWS Trusted Advisor.65

AWS Trusted Advisor
AWS Trusted Advisor helps you observe best practices by scanning your AWS resources and comparing their usage against AWS best practices in four categories: cost optimization, performance, security, and fault tolerance. As part of ongoing improvement to your infrastructure and applications, taking advantage of Trusted Advisor can help keep your resources provisioned optimally. Figure 10 shows an example of the Trusted Advisor dashboard.

Figure 10: Example of the AWS Trusted Advisor dashboard

Checks
Trusted Advisor provides a variety of checks to determine if the infrastructure is following best practices. The checks include detailed descriptions of recommended best practices, alert criteria, guidelines for action, and a list of useful resources on the topic. Trusted Advisor provides the results of the checks and can also provide ongoing weekly notifications for status updates and cost savings.

All customers have access to a core set of Trusted Advisor checks. Customers with AWS Business or Enterprise support can access all Trusted Advisor checks and the Trusted Advisor APIs. Using the APIs, you can obtain information from Trusted Advisor and take corrective actions (a brief sketch of this kind of call appears at the end of this section). For example, a program could leverage Trusted Advisor to examine current account service limits. If current resource usage approaches the limits, you can automatically create a support case to increase the limits. Additionally, Trusted Advisor now integrates with Amazon CloudWatch Events. You can design a Lambda function to notify a Slack channel when the status of Trusted Advisor checks changes. These examples illustrate how the concept of Infrastructure as Code can be extended to the resource optimization level of the information resource lifecycle.

Best Practices
The best practices for AWS Trusted Advisor appear below.
• Subscribe to Trusted Advisor notifications through email or an alternative delivery system.
• Use distribution lists and ensure that the appropriate recipients are included on all such notifications.
• If you have AWS Business or Enterprise support, use the AWS Support API in conjunction with Trusted Advisor notifications to create cases with AWS Support to perform remediation.

Summary
You must continuously monitor your infrastructure to optimize the infrastructure resources with regard to performance, security, and cost. AWS Trusted Advisor provides the ability to use APIs to interrogate your AWS infrastructure for recommendations, thus extending Infrastructure as Code to the optimization phase of the information resource lifecycle.
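As an illustrative sketch of the kind of Trusted Advisor API call described above (this requires a Business or Enterprise support plan; the 'service_limits' category string and the us-east-1 endpoint for the AWS Support API are assumptions based on the SDK, not details from this paper):

import boto3

# The AWS Support API, which exposes Trusted Advisor, is assumed here to be
# served from the us-east-1 endpoint.
support = boto3.client('support', region_name='us-east-1')

checks = support.describe_trusted_advisor_checks(language='en')['checks']

# Report the status of the service limit checks.
for check in checks:
    if check['category'] == 'service_limits':
        result = support.describe_trusted_advisor_check_result(
            checkId=check['id'], language='en')['result']
        print(check['name'], result['status'])

A script like this could be extended to open a support case automatically when a limit check reports a warning, as suggested above.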
Next Steps
You can begin the adoption of Infrastructure as Code in your organization by viewing your infrastructure specifications in the same way you view your product code. AWS offers a wide range of tools that give you more control and flexibility over how you provision, manage, and operationalize your cloud infrastructure. Here are some key actions you can take as you implement Infrastructure as Code in your organization:
• Start by using a managed source control service, such as AWS CodeCommit, for your infrastructure code.
• Incorporate a quality control process via unit tests and static code analysis before deployments.
• Remove the human element and automate infrastructure provisioning, including infrastructure permission policies.
• Create idempotent infrastructure code that you can easily redeploy.
• Roll out every new update to your infrastructure via code by updating your idempotent stacks. Avoid making one-off changes manually.
• Embrace end-to-end automation.
• Include infrastructure automation work as part of regular product sprints.
• Make your changes auditable, and make logging mandatory.
• Define common standards across your organization and continuously optimize.

By embracing these principles, your infrastructure can dynamically evolve and accelerate with your rapidly changing business needs.

Conclusion
Infrastructure as Code enables you to encode the definition of infrastructure resources into configuration files and control versions just like application software. We can now update our lifecycle diagram and show how AWS supports each stage through code.

Figure 11: Information resource lifecycle with AWS (AWS CloudFormation, AWS OpsWorks for Chef Automate, Amazon EC2 Systems Manager, Amazon CloudWatch, AWS Config, and AWS Trusted Advisor mapped to the stages of the lifecycle)

AWS CloudFormation, AWS OpsWorks for Chef Automate, Amazon EC2 Systems Manager, Amazon CloudWatch, AWS Config, and AWS Trusted Advisor enable you to integrate the concept of Infrastructure as Code into all phases of the project lifecycle. By using Infrastructure as Code, your organization can automatically deploy consistently built environments that, in turn, can help your organization to improve its overall maturity.

Contributors
The following individuals and organizations contributed to this document:
• Hubert Cheung, solutions architect, Amazon Web Services
• Julio Faerman, technical evangelist, Amazon Web Services
• Balaji Iyer, professional services consultant, Amazon Web Services
• Jeffrey S. Levine, solutions architect, Amazon Web Services

Resources
Refer to the following resources to learn more about our best practices related to Infrastructure as Code.

Videos
• AWS re:Invent 2015 – DevOps at Amazon66
• AWS Summit 2016 DevOps Continuous Integration and Deployment on AWS67

Documentation & Blogs
• DevOps and AWS68
• What is Continuous Integration?69
• What is Continuous Delivery?70
• AWS DevOps Blog71

Whitepapers
• Introduction to DevOps on AWS72
• AWS Operational Checklist73
• AWS Security Best Practices74
• AWS Risk and Compliance75

AWS Support
• AWS Premium Support76
• AWS Trusted Advisor77

1 https://awsamazoncom/cloudformation/
2 https://awsamazoncom/ec2/
3 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/using cfnupdating stacks changesetshtml
4 http://awsamazoncom/iam
5 https://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/cloudformation limitshtml
6 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/aws properties stackhtml
7 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/walkthrough crossstackrefhtml
8 http://docsawsamazoncom/AWSCloudFormation/latest/APIReference/API_ValidateTemplatehtml
9
http://awsamazoncom/s3 10 https://stelligentcom/2016/04/07/finding security problems early inthe development process ofacloudformation template with cfnnag/ 11 https://wwwnpmjscom/package/cfn check 12 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml 13 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#organizingstacks 14 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#use iamtocontrol access Notes ArchivedAmazon Web Services – Infrastructure as Code Page 32 15 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#reuse 16 http://docsawsamazoncom/AWSCloudFormation/latest/User Guide/best practiceshtml#nested 17 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#cross stack 18 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#creds 19 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#parmtypes 20 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#parmconstraints 21 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#cfninit 22 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtm l#helper scripts 23 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#validate 24 https://awsamazoncom/ec2/systems manager/parameter store/ 25 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#donttouch 26 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#cfn best practices changesets 27 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#stackpolicy 28 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#cloudtrail 29 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#code 30 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/best practiceshtml#update ec2linux 31 https://awsamazoncom/ec2/systems manager/ 32 https://awsamazoncom/opsworks/chefautomate/ ArchivedAmazon Web Services – Infrastructure as Code Page 33 33 http://docsawsamazoncom/AWSEC2/latest/UserGuide/execute remote commandshtml 34 http://docsawsamazoncom/AWSEC2/latest/UserGuide/systems manager inventoryhtml 35 http://docsawsamazoncom/AWSEC2/latest/UserGuide/systems manager statehtml 36 http://docsawsamazoncom/AWSEC2/latest/UserGuide/systems manag er amihtml 37 https://awsamazoncom/ec2/systems manager/patch manager/ 38 https://awsamazoncom/ec2/systems manager/automation/ 39 https://awsamazoncom/ec2/systems manager/parameter store/ 40 https://awsamazoncom/blogs/mt/replacinga bastion host with amazon ec2systems manager/ 41 http://docsawsamazoncom/systems manager/latest/userguide/send commands multiplehtml 42 http ://docsawsamazoncom/systems manager/latest/userguide/sysman configuring access iamcreatehtml 43 https://awsamazoncom/blogs/mt/replacinga bastio nhost with amazon ec2systems manager/ 44 http://docsawsamazoncom/AWSEC2/latest/WindowsGuide/ec2 configuration managehtml 45 https://awsamazoncom/blogs/security/how toremediate amazon inspector security findings automatically/ 46 http://docsawsamazoncom/systems manager/latest/userguide/ssm sharinghtml 47 http:/ /docsawsamazoncom/systems manager/latest/userguide/systems manager paramstorehtml 48 http://docsawsamazoncom/systems manager/latest/userguide/sysm an paramstore walkhtml 49 https://awsamazoncom/blogs/compute/managing secrets foramazon ecs applications usingparameter store 
andiamroles fortasks/ 50 https://enwikipediaorg/wiki/Lint_(software) 51 https://docschefio/rubocophtml ArchivedAmazon Web Services – Infrastructure as Code Page 34 52 https://docschefio/foodcritichtml 53 https://docschefio/chefspechtml 54 https://docschefio/kitchen html 55 https://awsamazoncom/cloudwatch/ 56 https://awsamazoncom/dynamodb/ 57 https://awsamazoncom/ec2/ 58 https://awsamazoncom/elasticloadbalancing/ 59 https://awsamazoncom/lambda/ 60 http://docsawsamazoncom/AmazonCloudWatch/latest/logs/QuickStartEC2 Instancehtml 61 http://docsawsamazoncom/ses/latest/APIReference/API_S endEmailhtml 62 http://docsawsamazoncom/sns/latest/api/API_Publishhtml 63 https://awsamazoncom/config/ 64 http://docsawsamazoncom/config/latest/developerguide/evaluate config_develop ruleshtml 65 https://awsamazoncom/premiumsupport/trustedadvisor/ 66 https://wwwyoutubecom/watch?v=esEFaY0FDKc 67 https://wwwyoutubecom/watch?v=Du rzNeBQ WU 68 https://awsamazoncom/devops/ 69 https://awsamazoncom/devops/continuous integration/ 70 https://awsamazoncom/devops/continuous delivery/ 71 https://awsamazoncom/blogs/devops/ 72 https://d0aws staticcom/whitepapers/AWS_DevOpspdf 73 https://mediaamazonwebservicescom/AWS_Operational_Checklistspdf 74 https://d0awsstaticcom/whitepapers/Security/AWS_Security_Best_Practic espdf 75 https://d0awsstaticcom/whitepapers/compliance/AWS_Risk_and_Complia nce_Whitepaperpdf ArchivedAmazon Web Services – Infrastructure as Code Page 35 76 https://awsamazoncom/premiumsupport/ 77 https://awsamazoncom/premiumsupport/trustedadvisor/
|
General
|
consultant
|
Best Practices
|
Infrastructure_Event_Readiness
|
ArchivedInfrastructure Event Readiness AWS Guidelines and Best Practices December 2018 This paper has been archived For the latest technical guidance about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived© 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent asses sment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual comm itments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between A WS and its customers ArchivedContents Introduction 1 Infrastructure Event Readiness Planning 2 What is a Planned Infrastructure Event? 2 What Happens During a Planned Infrastructure Event? 2 Design Principles 4 Discrete Workloads 4 Automation 8 Diversity/Resiliency 10 Mitigating Against External Attacks 13 Cost Optimization 16 Event Management Process 17 Infrastructure Event Schedule 17 Planning and Preparation 18 Operational Readiness (Day of Event) 27 Post Event Activi ties 29 Conclusion 31 Contributors 32 Further Reading 32 Appendix 33 Detailed Architecture Review Checklist 33 ArchivedAbstract This whitepaper describes guidelines and best practices for customers with production workloads deployed on Amazon Web Services (AWS) who want to design and provision their cloud based applications to handle planned scaling events such as product launches or seasonal traffic spikes gracefully and dynamically We address general design principles as well as provide specific best practices and guidance across multiple concept ual areas of infrastructure event planning We then describe operational readiness considerations and practices a nd post event activities ArchivedAmazon Web Services – Infrastructure Event Readiness Page 1 Introduction Infrastructure event readiness is about designing and preparing for anticipated and significant events that have an impact on your business During such events it is critical that the company web service is reliable responsive and highly fault tolerant ; under all conditions and changes in traffic patterns Examples of such events are expansion into new territories new product or feature launches seasonal events or significant business announcement s or marketing e vents An infrastructure event that is not properly planned c an have a negative impact on your company’s business reputation continuity or finances Infrastructure event failures can take the form of unanticipated service failures load related performance degradations network latency storage capacity limitation s system limits (such as API call rates ) finite quantities of available IP addresses poor understanding of the behaviors of components of an application stack due to insufficient monitoring unanticipated dependencies on a third party service or component not set up for scale or some other unforeseen error condition To minimize the risk of unanticipated failures during an important event companies should invest time and 
resources to plan and prepare to train employees and to design and document relevant processes The amount of investment in infrastructure event planning for a particular cloud enabled application or set of applications can vary depending on the system’s complexity and global reach Regardless of the scope or comple xity of a company’s cloud presence the design principles and best practices guidanc e provided in this whitepaper are the same With Amazon Web Services (AWS) your company can scale up its infrastructure in preparation for a planned scaling event in a dyn amic adaptable pay asyougo basis Amazon’s rich array of elastic and programmable products and services gives your company access to the same highly secure reliable and fast infrastructure that Amazon uses to run its own global network and enables your company to nimbly adapt in response to its own rapidly changing business requirements This whitepaper outlines best practices and design principles to guide your infrastructure event planning and execution and how you can use AWS ArchivedAmazon Web Services – Infrastructure Event Readiness Page 2 services to ensure that your applications are ready to scale up and scale out as your business needs dictate Infrastructure Event Readiness Planning This section describes what constitutes a planned infrastructure event and the kinds of activities that typically occur during s uch an event What is a Planned Infrastructure Event ? A planned infrastructure event is a business driven anticipated and scheduled event window during which it is business critical to maintain a highly responsive highly scalable and fault tolerant web service Such events can be driven by marketing campaigns news events related to the company’s line of business product launches territorial expansion or any similar activity that results in additional traffic to a company’s web based applications and underlying infrastructure What Happens During a Planned Infrastructure Event ? 
The primary concern in most planned infrastructure events is being able to add capacity to your infrastructure to meet higher traffic demands In a traditional onpremise s environment provisioned with physical compute storage and networking resources a company’s IT department provision s additional capacity based on their best estimates of a theoretical maximum peak This incurs the risk of insufficiently provisioning capacity and the company suffering business loss due to overloaded web servers slow response times and other run time errors Within the AWS Cloud infrastructure is programmable and elastic This means it can be provisioned quickly in response to real time demand Additionally infrastructure can be configured to respond to system metrics in an automated intelligent and dynamic fashion —growing or shrinking resources such as web server clusters provisioned throughput storage capacity available compute cores number of streaming shards and so on as needed ArchivedAmazon Web Services – Infrastructure Event Readiness Page 3 Additionally m any AWS services are fully managed includ ing storage database analytic application and deployment services As a result AWS customers don’t have to worry about the complexities of configuring these services for a high traffic event AWS fully managed services are designed for scalabilit y and high availability Typically in preparation for a planned infrastructure event AWS customer s conduct a system review to evaluate their applic ation architecture and operational readiness considering both scalability and fault tolerance Traffic estimates are considered and compared to normal business activity performance Capacity metrics and estimate s of required additional capacity are determined Potential bottlenecks and third party upstream and downstream dependencies are identified and addressed Geography is also considered if the planned event includes an expansion of territory or introduction of new audiences Expansion into additional AWS Regions or Availability Z ones is undertaken in advance of the planned event A review of the customer’s AWS dynamic system settings such as Auto Scaling load balancing geo routing high availability and f ailover measures is also conducted to ensure these are configured to correctly handle the expected increases in volume and transaction rate Static settings such as AWS resource limits and location of content delivery network ( CDN ) origin servers are also considered and modified as needed Monitoring and notification mechanism s are reviewed and enhanced as needed to provide realtime transparency into events as they occur and for post mortem analysis after the planned event has completed During the planned event AWS customers can open support cases with AWS for troubleshooting or real time support (such as a server going down ) Customer s who subscribe to the AWS Enterprise S upport plan have the additional flexibility to talk with support engineers immediately and to raise critical severity cases if rapid response is required After the event AWS resources are designed to automatically scale down to appropriate levels to match traffic levels or continue to scale up as events dictate ArchivedAmazon Web Services – Infrastructure Event Readiness Page 4 Design Principles Preparation for planned events starts with a design at the beginning of any implementation of a cloud based application stack or workload that follows best practices Discrete W orkloads A design based on best practices is essential to th e effective management 
of planned event workloads at both normal and elevated traffic levels. From the start, design discrete and independent functional groupings of resources centered on a specific business application or product. This section describes the multiple dimensions of this design goal.

Tagging
Tags are used to label and organize resources. They are an essential component of managing infrastructure resources during a planned infrastructure event. On AWS, tags are customer-managed key-value labels applied to an individual managed resource, such as a load balancer or an Amazon Elastic Compute Cloud (Amazon EC2) instance. By referencing well-defined tags that have been attached to AWS resources, you can easily identify which resources within your overall infrastructure comprise your planned event workload. Then, using this information, you can analyze it for preparedness. Tags can also be used for cost allocation purposes.

Tags can be used to organize, for example, Amazon EC2 instances, Amazon Machine Image (AMI) images, load balancers, security groups, Amazon Relational Database Service (Amazon RDS) resources, Amazon Virtual Private Cloud (Amazon VPC) resources, Amazon Route 53 health checks, and Amazon Simple Storage Service (Amazon S3) buckets. For more information on effective tagging strategies, refer to AWS Tagging Strategies.1 For examples of how to create and manage tags and put them in Resource Groups, see Resource Groups and Tagging for AWS.2

Loose Coupling
When architecting for the cloud, design every component of your application stack to operate as independently as possible from the others. This gives cloud-based workloads the advantage of resiliency and scalability. You can reduce interdependencies between components in a cloud-based application stack by designing each component as a black box with well-defined interfaces for inputs and outputs (for example, RESTful APIs). If the components aren't applications but are services that together comprise an application, this is known as a microservices architecture. For communication and coordination between application components, you can use event-driven notification mechanisms such as AWS message queues to pass messages between the components, as shown in Figure 1.

Figure 1: Loose coupling using RESTful interfaces and message queues

Using mechanisms such as that illustrated above, a change or failure in one component has much less chance of cascading to other components. For example, if a server in a multi-tiered application stack becomes unresponsive, applications that are loosely coupled can be designed to bypass the unresponsive tier or switch to degraded-mode alternative transactions.

Loosely coupled application components using intermediate message queues can also be designed for asynchronous integration. Because an application's components do not employ direct point-to-point communication, but instead use an intermediate and persistent messaging layer (for example, an Amazon Simple Queue Service (SQS) queue or a streaming data mechanism like Amazon Kinesis Streams), they can withstand sudden increases in activity in one component while downstream components process the incoming queue. If there is a component failure, the messages persist in the queues or streams until the failed component can recover. For more information on message queueing and notification services offered by AWS, refer to Amazon Simple Queue Service.3
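As a minimal, illustrative sketch of this queue-based decoupling using the AWS SDK for Python (Boto3) — the queue name and message body are placeholder assumptions, and in practice the producer and consumer would run in separate components:

import boto3

sqs = boto3.client('sqs')

# Producer component: publish work to an intermediate, persistent queue.
queue_url = sqs.create_queue(QueueName='orders-queue')['QueueUrl']  # placeholder name
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": "12345"}')

# Consumer component: poll the queue and delete messages once processed.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
).get('Messages', [])

for message in messages:
    print('processing:', message['Body'])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message['ReceiptHandle'])

Because the queue persists messages until they are explicitly deleted, the consumer can fall behind or fail temporarily without losing work.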
Services Not Servers
Managed services and service endpoints free you from having to worry about security or access, backups or restores, patch management or change control, monitoring or reporting, setups or administration of traditional systems management details. These cloud resources can be provisioned prior to an event for high availability and resilience using multiple Availability Zone (or in some cases, multiple Region) configurations. Cloud resources can be scaled up or down, often with no downtime, and you can configure them on the fly through either the AWS Management Console or API/CLI calls.

Managed services and service endpoints can be used to power customer application stacks with capabilities such as relational and NoSQL database systems, data warehousing, event notification, object and file storage, real-time streaming, big data analytics, machine learning, search, transcoding, and many others. An endpoint is a URL that is the entry way for an AWS service. For example, https://dynamodb.us-west-2.amazonaws.com is an entry point for the Amazon DynamoDB service. By using managed services and their service endpoints, you can leverage the power of production-ready resources as part of your design solution for handling increased volume, reach, and transaction rates during a planned infrastructure event. You don't need to provision and administer your own servers that perform the same functions as managed services. For more information on AWS service endpoints, see AWS Regions and Endpoints.4 See also Amazon EMR,5 Amazon RDS,6 and Amazon ECS7 for examples of managed services that have endpoints.

Serverless Architectures
Leverage AWS Lambda as a strategy to effectively respond to dynamically changing processing loads during a planned infrastructure event. Lambda is an event-driven, serverless computing platform. It's a dynamically invoked service that runs Python, Node.js, or Java code in response to events (via notifications) and automatically manages the compute resources specified by that code. Lambda doesn't require provisioning of Amazon Elastic Compute Cloud (EC2) resources prior to the event. The Amazon Simple Notification Service (Amazon SNS) can be configured to trigger Lambda functions. See Amazon Simple Notification Service8 for details.

Lambda serverless functions can execute code that accesses or invokes other AWS services, such as database operations, data transformations, object or file retrieval, or even scaling operations, in response to external events or internal system load metrics. AWS Lambda can also generate new notifications or events of its own and even launch other Lambda functions. AWS Lambda provides the ability to exercise fine control over scaling operations during a planned infrastructure event. For example, Lambda can be used to extend the functionality of Auto Scaling operations to perform actions such as notifying third-party systems that they also need to scale, or adding additional network interfaces to new instances as they are provisioned. See Using AWS Lambda with Auto Scaling Lifecycle Hooks9 for examples of how to use Lambda to customize scaling operations. For more information on AWS Lambda, see What is AWS Lambda?10

Automation
Auto Scaling
A critical component of infrastructure event planning is Auto Scaling. Being able to automatically scale an application's capacity up or down according to predefined conditions helps to maintain application availability
during fluctuations in traffic patterns and volume that occur in a planned infrastructur e event AWS provides A uto Scaling capability across many of its res ources including EC2 instances database capacity containers etc Auto Scaling can be used to scale groupings of instances such as a fleet of servers that comprise a cloud based application so that they scale automatically based on specified criteria Auto Scaling can also be used to maintain a fixed number of instances even when an instance becomes ArchivedAmazon Web Services – Infrastructure Event Readiness Page 9 unhealthy This automatic scaling and maintaining of the number of instances is the core functionality of the Auto Scaling service Auto Scaling maintains the number of instances that you specif y by performing periodic health checks on the instances in the group If an instance becomes unhealthy the group terminates the unhealthy instance and launches another instance to replace it Auto S caling policies can be used to automatically increase or decrease the number of running EC2 instances in a group of servers to meet changing conditions When the scaling policy is in effect the Auto Scaling group adjusts the desired capacity of the group and launches or terminates the instances as needed either dynamically or alternatively on a schedule if there is a known and predictable ebb and flow of traffic Restarts and Recovery An important design element in any planned infrastructure event is to have procedures and automation in place to handle compromised instances or servers and to be able to rec over or restart the m on the fly Amazon EC2 instances can be set up to automatically recover when a system status check of the underlying hardware fails The instance reboot s (on new hardware if necessary) but retains its instance ID IP address Elastic IP addresses Amazon Elastic Block Store ( EBS) volume attachments and other configuration details For more information on auto recovery of EC2 instances see Auto Recovery of Amazon EC2 11 Configuration Management/Orchestration Integral to a robust reliable and responsive planned infrastructure event strategy is the incorporation of configuration management and orchestration tools for individual resource state management and application stack deployment Configuration management tools typically handle the provisioning and configuration of server instances load balancers Auto Scaling individual application deployment and application health monitoring They also provide the ability to int egrate with additional services such as databases stor age volumes and caching layers ArchivedAmazon Web Services – Infrastructure Event Readiness Page 10 Orchestration tools one layer of abstraction above configuration management provide the means to specify t he relationships of these various resource s allowing custom ers to provision and manage multiple resources as a unified cloud application in frastructure without worrying about resource dependencies Orchestration tools define and describe individual resources as well as their relationships as code As a result this code can be version controlled facilitating the ability to (1) roll back to prior versions or (2) create new branches for testing and development purposes It is also possible to define orchestrations and configurations optimized for an infrastructu re event and then roll back to the standard configuration following such an event Amazon Web Services recommends the following tools to achieve hardware as code deployments and orchestrations : • AWS 
Config with Config Rules or an AWS Config Partner to prov ide a detailed visual and searchable inventory of AWS resources configuration history and resource configuration compliance • AWS CloudFormation or third party AWS resource orchestration tool s to manage AWS resource provisioning update and termination • AWS OpsWorks Elastic Beanstalk or third party server configuration management tool s to manage operating system (OS ) and app lication configuration changes See Infrastructure Configuration Management for more details about ways to manage hardware as code12 Diversity/Resiliency Remove Single Points of Failure and Bottlenecks When planning for an infrastructure event analyze your application stacks for any single points of f ailure (SPOF) or performance bottlenecks For example is there any single instance of a server data volume d atabase NAT gateway or load balancer that would cause the entire application or significant portions of it to stop working if it were to fail? ArchivedAmazon Web Services – Infrastructure Event Readiness Page 11 Secondly as the cloud based application scales up in traffic or transaction volume is there any part of the infrastructu re that will encounter a physical limit or constraint such as network bandwidth or CPU processing cycles as the volume of data grows along the data flow path? These risks once identified can be mitigated in a variety of ways Design for Failure As ment ioned earlier using loose coupling and message queues with RESTful interfaces is a good strategy for achieving resiliency against individual resource failures or fluctuations in traffic or transaction volume Another dimension of resilient design is to configure application components to be as stateless as possible Stateless applications require no knowledge of prior transactions and have loose dependency on other application components They store no session information A stateless application can scale horizontally as a member of a pool or cluster since any request can be handled by any instance within the pool or cluster You can add more resources as needed using Auto Scaling and health check criteria to programmatically hand le fluctuating compute capacity and throughput requirements Once an application is designed to be stateless it could potentially be refactored onto serverless architecture using Lambda functions in the place of EC2 instances Lambda functions also have built in dynamic scali ng capability In the situation where an application resource such as a web server cannot avoid having state data about transactions consider designing your applications so that the portions of the application that are stateful are decoupled from the serv ers themselves For example an HTTP cookie or equivalent state data could be stored in a database such as DynamoDB or in an S3 bucket or EBS volume If you have a complex multistep workflow where there is a need to track the current state of each step i n the workflow Amazon Simple Workflow Service (SWF) can be used to centrally store execution history and make these workloads stateless Another resiliency measure is to employ distributed processing For u se cases that require processing large amounts of data in a timely manner where o ne ArchivedAmazon Web Services – Infrastructure Event Readiness Page 12 single compute resource can’ t meet the need you can design your workloads so that tasks and data are partitioned into smaller fragments and executed in parallel across a cluster of compute resources Distributed processi ng is stateless since 
the independent nodes on which the partitioned data and tasks are being processed may fail In this case auto restart of failed tasks on another node of the distributed processing cluster is automatically handled by the distributed processing scheduling engine AWS offers a variety of distributed data processing engine s such Amazon EMR Amazon Athena and Amazon Machine Learning ; each of which is a managed service providing endpoints and shield ing you from any complexity involving pa tching main tenance scaling failover etc For real time processing of streaming data Amazon Kinesis Streams can partition data in to multiple shards that can be processed by multiple consumers of that data such as Lambda functions or EC2 instances For more information on these types of workloads see Big Data Analytics Options on AWS 13 Multi Zone and MultiRegion AWS services are hosted in multiple locations worldwide These locations are composed of Regions and Availability Zones A Region is a separate geographic area Each Region has multiple isolated locations which are known as Availability Zones AWS provide s customers wit h the ability to place resources such as instances and data in multiple locations Design your applications so that they are distributed across multiple Availability Zones and Regions In conjunction with distributing and replicating resources across Availability Zones and Regions design your apps using load balancing and failover mechanisms so that your application stacks automatically re route data flows and traffic to these alternative locations in the event of a failure Load Balancing With the El astic Load Balancing service (ELB ) a fleet of application servers can be attached to a load balancer and yet be distributed across multiple Availability Zones When the EC2 instanc es in a particular Availability Zone ArchivedAmazon Web Services – Infrastructure Event Readiness Page 13 sitting behind a load balancer fail th eir health checks the load balancer stops sending traffic to those nodes When combined with Auto Scaling the number of healthy nodes is automatically rebalanced with the other Availability Zones and no manual intervention is require d It’s also possible to have load balancing across Regions by using Amazon Route 53 and latency based DNS routing algor ithms See Latency Ba sed Routing for more information14 Load Shedding Strategies The concept of load shedding in cloud based infrastructure s consists of redirect ing or proxying traffic elsewhere to relieve pressure on the primary systems In some cases the load shedding strat egy can be a triage exercise where you choose to drop certain streams of traffic or reduce functionality of your application s to lighten the processing load and to be able to serve at least a subset of the incoming requests There a re numerous techniques that can be used for load shedding such as caching or latency based DNS routing With latency based DNS routing the IP addresses of those application servers that are responding with the least latency are returned by the DNS servers in response to name resolution requests Caching can take place close to the application using an in memory caching layer such as Ama zon ElastiCache You can also deploy a caching layer that is closer to the user’s edge location using a global content distribution network such as Amazon CloudFront For more information about ElastiCache and CloudFront see Getting Started with ElastiCache 15 and Amazon CloudFront CDN 16 Mitigating Against External Attacks Distributed Denial of 
Service (DDoS) Attacks Planned infrastructure events can attract attention which may increase the risk of your application being targeted by a Distributed Denial of Service (DDoS) a ttack A DDoS attack is a deliberate attempt to make your application unavailable to users by flooding it with traffic from multiple sources These attacks include network layer attacks which aim to saturate the Internet capacity of a network or application transport layer attacks which aim to ArchivedAmazon Web Services – Infrastructure Event Readiness Page 14 exhaust the connection handling capacity of a device and application layer attacks which aim to exhaust the ability of an application to process requests There are numerous actions you can take at each of these la yers to mitigate against such an attack For example you can protect against saturation events by overprovisioning network and server capacity or implementing auto scaling technologies that are configured to react to attack patterns You can also make u se of purpose built DDoS mitigation systems such as application firewalls dynamic load shedding at the edge using Content Distribution Networks (CDNs) network layer threat pattern recognition and filtering or routing your traffic or requests through a D DoS mitigation provider AWS provides automatic DDoS protection as part of the AWS Shield Standard which is included in all AWS services in every AWS Region at no additional cost When a network or transport layer attack is detected it is automatically mitigated at the AWS border before the traffic is routed to an AWS Region To make use of this capability it is important to architect your application for DDoS resiliency The optimal DDoS resiliency is achieved by using services that operate from the AWS Global Edge Network like Amazon CloudFront and Amazon Route 53 which provides comprehensive protection against all known network and transport layer attacks For a reference architecture that includes these services see Figure 2 ArchivedAmazon Web Services – Infrastructure Event Readiness Page 15 Summary of DDOS Miti gation Best Practices (BP) AWS Edge Locations AWS Regions Amazon CloudFront (BP1) with AWS WAF (BP2) Amazon Route 53 (BP3) Elastic Load Balancing (BP6) Amazon A PI Gateway (BP4) Amazon VPC (BP5) Amazon EC2 with Auto Scaling (BP7) Layer 3 ( for example UDP reflection) attack mitigation ✔ ✔ ✔ ✔ ✔ ✔ Layer 4 ( for example SYN flood) attack mitigation ✔ ✔ ✔ ✔ Layer 6 ( for example TLS) attack mitigation ✔ ✔ ✔ Reduce attack surface ✔ ✔ ✔ ✔ ✔ Figure 2: DDoS resilient reference architecture This reference architecture includes several AWS services that can help you improve your web application’s resiliency against DDoS attacks In addition to architecting for DDoS resiliency you can optionally subscribe to AWS Shield Advanced to receive add itional features that are useful for monitoring your application mitigating larger or more complex DDoS attacks and managing the cost of an attack With AWS Shield Advanced you can monitor for DDoS events via the provided APIs and AWS CloudWatch metrics In case of an attack that causes impact to the availability of your application you can raise a case with AWS Support and where necessary receive escalation to the AWS DDoS Response Team (DRT) You also receive AWS WAF for AWS Shield Advanced protected resources and AWS Firewall Manager at no additional cost If an attack causes an increase in your AWS bill AWS Shield Advanced allows you to request a limited refund of costs related to the DDoS event To learn more about 
using AWS Shield Advanced see Getting Started with AWS Shield Advanced 17 Bots and Exploits To mitigate application layer attacks consider operating your application at scale and implementing a Web Application Firewall (WAF) which allows you to ArchivedAmazon Web Services – Infrastructure Event Readiness Page 16 identify and block unwanted requests The combination of these techniques can help you mitigate high volume bots that could otherwise harm the availability of your application and lower volume bots that could steal content or exploit vulnerabilities Use these mitigation techniques to significantly reduce the volume of unwanted requests that reach your application and have resilience against unwanted requests that are not blocked On AWS you can implement a WAF from the AWS Marketplace or use AWS WAF which allows you to build your own rules or subscribe to rules managed by Marketplace vendors With AWS WAF you can use regular rules to block known bad patterns or rate based rules to temporarily block requests from sources that match conditions you define and exceed a given rate Deploy these rules using an AWS CloudFormation template If you have applications distributed across many AWS accounts deploy and manage AWS WAF rules for y our entire organization by using AWS Firewall Manager To learn more about deploying preconfigured protections with AWS WAF see AWS WAF Security Automations 18 To learn more about rules available from Marketplace vendors see Managed Rules for AWS WAF19 To learn more about managing rules with AWS Firewall Manager see Getting Started with AWS Firewall Manager20 Cost Optimization Reserved vs Spot vs On Demand Controlling the costs of provisioned resources in the cloud is c losely tied to the ability to dynamically provision these resources based on systems metrics and other performance and health check criteria With Auto Scaling resource utilization can be closely matched to actual processing and storage needs minimizing wasteful expense and underutil ized resources Another dimension of cost control in the cloud is being able to choose from the following: OnDemand instances Reserved I nstances (RIs) or Spot Instances In addition DynamoDB offers a reservation capacity capability With On Demand instances you pay for only the Amazon EC2 instances you use OnDemand instances let you pay for compute capacity by the hour with no long term commitments ArchivedAmazon Web Services – Infrastructure Event Readiness Page 17 Amazon EC2 Reserved Instances provide a significant discount (up to 75%) compared to OnDemand instance pricing and provide a capacity reservation when used in a specific Availability Zone Aside from the availability reservation and the billing discount there is no functional difference between Reserved Instances and On Demand instances Spot Instances allow you to bid on spare Amazon EC2 computing capacity Spot Instances are often available at a discount compared to On Demand pricing which significantly reduce s the co st of running your cloud based applications When designing for the cloud some use cases are better suited for the use of Spot Instances than others For example since Spot I nstances can be retired at any time once the bid price goes above your bid you should consider running Spot Instances only for relatively stateless and horizontally scaled application stacks For stateful applications or expensive processing loads Reserved Instances or On Demand instances may be more appropriate For mission critical applications where capacity limitations 
are out of the question R eserved Instances are the optimal choice See Reserved Instances21 and Spot Instances22 for more details Event Management Process Planning for an infrastructure event is a group activity involvin g application developers administrators and business stakeholders Weeks prior to an infrastructure event establish a cadence of recurring meetings involving the key technical staff who own and operate each of the key infrastructure components of the web service Infrastructure Event Schedule Planning for an infrastructure event should begin several weeks prior to the date of the event A typical timeline in the p lanned event lifecycle is shown in Figure 3 ArchivedAmazon Web Services – Infrastructure Event Readiness Page 18 Figure 3 Typical infrastructure event timeline Planning and Preparation Schedule We recommend the following schedule of activities in the weeks leading up to an infrastructure event: Week 1 : • Nominate a team to drive planning and engineering for the infrastructure event • Conduct m eeting s between stakeholders to understand the parameters of the event (scale duration time geographic reach affected workloads) and the success criteria • Engage any downstream or upstream partners and vendors Week s 23: • Review architecture and adjust as needed • Conduct operational r eview ; adjust as needed • Follow best practices described in this paper and in footnoted references • Identify risks and develop mitigation plans ArchivedAmazon Web Services – Infrastructure Event Readiness Page 19 • Develop a n event runbook Week 4 : • Review all cloud vendor services that require scaling based on expected load • Check service limits and increase limits as needed • Set up monitoring dashboard and alerts on defined thresholds Architecture Review An e ssential part of your preparation for an infrastructure event is an architectural review of the application stack that will experience the upsurge in traffic The purpose of the review is to verify and identify potential areas of risk to either the scalability or reliability of the application and to identify opportunities for optimization in advance of the event AWS provides its Enterprise Support customers a framework for reviewing customer application stacks that is centered around five design pillars These are Security Reliability Performance Efficiency Cost Optimization and Operational Excellence as described in Table 1 Table 1: Pillars of w ellarchitected applications Pillar Name Pillar Definition Relevant Area of Interest Security The ability to protect information systems and assets while delivering business value through risk assessments and mitigation strategies Identity Management Encryption Monitoring Logging Key Management Dedicated Instances Compliance Governance Reliability The ability of a system to recover from infrastructure or service failures dynamically acquire computing resources to meet demand and mitigate disru ptions such as misconfigurations or transient network issues Service Limits Multi ple Availability Zones and Region s Scalability Health Check/Monitoring Backup/ Disaster Recovery (DR) Networking Self Healing Automation Performance Efficiency The ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve Right AWS Services Resource Utilization Storage Architecture Caching Latency Requirements Cost Optim ization The ability to avoid or eliminate unneeded cost or suboptimal resources Spot/ Reserved Instances 
Environment Tuning Service Selection Volume Tuning Account Management Consolidated Billing Decommission Resources ArchivedAmazon Web Services – Infrastructure Event Readiness Page 20 Operational Excellence The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures Runbooks Playbooks Continuous Integration/Continuous Deployment (CI/CD ) Game Days Infrastructure as Code Root Cause Analysis ( RCA)s A detailed checklist of architectural review items which can be used to review an AWS based application stack is available in the Appendix Operational Review In addition to an ar chitectural review which is more focused on the design components of an application review your cloud operations and management practices to evaluate how well you are addressing the management of your cloud workloads The goal of the review is to identify operational gaps and issues and take actions in advance of the event to minimize them AWS offers a n operation al review to its Enterprise Support customers which can be a valuable tool for preparing for an infrastructure event The review focuses on asse ssing the following areas: • Preparedness –You must have the right mix of organizational structure processes and technology Ensure clear roles and responsibilities are defined for the staff managing your application stack Define processes in advance to align with the event Automat e procedures where possible • Monitoring–Effective mon itoring measur es how an application is performing Monitoring is critical to detecting anomalies before they become problems and provides opportunities to minimize impact from adverse events • Operations –Operational activities need to be carried out in a timely and reliable way leveragi ng automation wherever possible while also dealing with unexpected operational events that require escalations • Optimization –Conduct a postmortem analysis using collected metrics operational trends and lessons learned to capture and report opportunities for improvement during future events Optimi zation plus prepare dness creates a feedback loop to address operational issues and prevent them from reoccurr ing ArchivedAmazon Web Services – Infrastructure Event Readiness Page 21 AWS Service Limits During a planned infrastructure event it is crucial to avoid exceeding any service limits that may be imposed by a cloud p rovider while scaling an application or workload Cloud services providers typically have limits on the different resources that you can use Limits are usually imposed on a per account and per region basis The resources affected include instances volumes streams serverless invocations snapshots number of VPCs security rules and so on Limits are a safety measure against runaway code or rogue actors attempting to abuse resources and as a control to help minimize billing risk Some service limi ts are raised automatically over time as you expand your footprint in th e cloud though most of these services require that you request limit increases by opening a support case While some s ervice limits can be increased via support cases other services have limits that can ’t be changed AWS provides Enterprise and Business support customer s with Trusted Advisor which provides a Limit Check dashboard to allow customers to proactively manage all service limit s For more information on limits for various A WS services and how to check them see AWS Service Limits23 and Trusted Advisor 24 Pattern Recognition Baselines You should document “back 
to healthy” values for key metrics prior to the commencement of an infrastructure event This help s you to determine when an application/service is safe ly returned to normal levels following the completion/end of the event For example identifying that the normal transaction rate through a load balancer is 2500 requests per second will help determine when it is safe to begin wind down procedures after the event Data Flows and Dependencies Understanding how data flows through the various components of an application helps you identify potential bottlenecks and dependencies Are the application tiers or components that are consumers of data in a data flow ArchivedAmazon Web Services – Infrastructure Event Readiness Page 22 sized appropriately and set up to auto scale correctly if the tiers or components in an application stack that are producers of data scale upwards? In the event of a component failure c an data be queued until that component recovers? Are any downstream or upstream data providers or consumers scalable in response to your event? Proportionality Review the proportionality of scal ing required by the various components of an application stack when preparing for an infrastructure event This proportionality is not always one toone For example a ten fold increase in transactions per second across a load balancer m ight require a twe ntyfold increase in storage capacity number of streaming shards or number of database read and write operations ; due to processing that might be taking place in the front facing application Communications Plan Prior to the event develop a communications plan Gather a list of internal stakeholders and support groups and identify who should be contacted at various stages of the event in various scenarios such as beginning of the event during the event end of the event post event analysis emergency contacts contacts during troubleshooting situations etc Persons and groups to be contacted may include the following: • Stakeholders • Operations managers • Developers • Support teams • Cloud service provider teams • Network operations cen ter (NOC ) team As you gather a list of internal contacts you should also develop a contact list of external stakeholders involved with the continuous live delivery of the application These stakeholders include partners and vendors supporting key components of the stack downstream and upstream vendors providing external services data feeds authentication services and so on ArchivedAmazon Web Services – Infrastructure Event Readiness Page 23 This external contact list should also include the following: • Infrastructure hosting vendors • Telecommunications vendors • Live data streaming partners • PR marketing contacts • Advertising partners • Technical consultants involved with service engineering Ask for the following i nformation from each provider: • Live points of contact during time of e vent • Critical support contact and escalation process • Name telephone number and email address • Verification that live technical contacts will be available AWS customers subscribed to Enterprise Support also have Technical Account Managers (TAMs) assigned t o their account who can coordinate and verify that dedicated AWS support staff is aware of and prepared for support of the event TAMs are also on call during the event present in the war room and available to drive support escalations if they are needed NOC Preparation Prior to the event instruct your operations and/or developer team to create a live metrics dashboard that monitors 
each critical component of the web service in production as the event occurs Ideally the dashboard should automatically present updated metrics every minute or at an interval that is suitable and effective during the event Consider monitoring the following c omponents: • Resource utilization of each server (CPU disk and memory utilization) • Web service response time • Web traffic metrics (users page views sessions) ArchivedAmazon Web Services – Infrastructure Event Readiness Page 24 • Web traffic per visitor region (global customer segments) • Database server utilization • Marketing flow conversion funnels such as conversion rates and fallout percentage • Applicat ion error logs • Heartbeat monitoring Amazon CloudWatch provides a means to gather most of these metrics from AWS resources into a single pane of glass using CloudWatch custom dashboards Additionally CloudWatch offers the capability to import custom metri cs in to CloudWatch wherever AWS isn’ t already providing that metric automatically See the Monitor section for more details on AWS monitoring tools and capabilities Runbook Preparation You should develop a runbook in preparation for the infrastructure event A runbook is an operational manual containing a compilation of procedures and operations that your operator s will carry out during the event Event runbooks can be outgrowths of existing runbooks used f or routine operations and exception handling Typically a runbook contains procedures to begin stop supervise and debug a system It should also describe procedures for handling unexpected events and contingencies A runbook should include the following s ections: • Event details : Briefly describe s of the event success criteria media coverage event dates and contact details of the main stakeholders from the customer side and AWS • List of AWS services: Enumerates all AWS services to be used during the event Also the expected load on these services Region s affected and account IDs • Architecture and application review : Document s load testing results any stress points in the infrastructure and application design resiliency measures for the wo rkload single points of failure and potential bottlenecks ArchivedAmazon Web Services – Infrastructure Event Readiness Page 25 • Operational review : Highlight s monitoring setup health criteria notification mechanisms and service restoration procedures • Preparedness checklist : Includes such considerations as service limits checks pre warming of application stack components such as load balancers pre provisioning of resources such as stream shards DynamoDB partitions S3 partitions and so on For more information see the Architecture Review Detailed Checklist in the Appendix Monitor Monitoring Plan Database application and operating system monitoring is crucial to ensure a successful event Set up c omprehensive monitoring systems to effectively detect and respond i mmediately to serious incidents during the infrastructure event Incorporate both AWS and customer monitoring data Ensure that monitoring tools are instrumented at the appropriate level for an application based on its business criticality Implementing a monitoring plan that collectively gathers monitoring data from all of your AWS solution segments will help in debugging a complex failure if it occurs The monitoring plan should address the following questions : • What monitoring tools and dashboard s must be set up for the event? • What are the monitoring objectives and the allowed thresholds? What events will trigger actions? 
• What resources and metrics from these resources will be monitored and how often must they be polled ? • Who will perform the monitoring tasks? What monitoring alerts are in place? Who will be alerted? • What remediation plan s have been set up for common and expected failures? What about unexpected events ? • What is the escalation process in the case of operational failure of any critical syste ms components ? The following AWS monitoring tools can be used as part of your plan : • Amazon CloudWatch : Provided as a n out ofthebox solution for AWS dashboard metrics monitoring alert ing and automated provisioning ArchivedAmazon Web Services – Infrastructure Event Readiness Page 26 • Amazon CloudWatch custom metrics : Used for operating systems application and business metrics collection The Amazon CloudWatch API allows for the collection of virtually any type of custom metric • Amazon EC2 instance health : Used for vie wing status checks and for scheduling events for you r instances based on their status such as auto rebooting or restarting an instance • Amazon SNS: Used for setting up operating and sending event driven notifications • AWS X Ray: Used to debug and analyz e distributed applications and microservices architecture by analyzing data flows across system components • Amazon Elasticsearch Service : Used for centralized log collection and realtime log analysis For rapid heuristic detection of problems • Third party tools : Used for a real time analytics and full stack monitoring and visibility • Standard operating system monitoring tools: Used for OS level monitoring For more details about AWS monitoring tools see Automated and Manual Monitoring25 See also Using Amaz on CloudWatch Dashboards26 and Publish Custom Metrics 27 Notifications A crucial operational element in your desig n for infrastructure event s is the configur ation of alarms and notifications to integrate with your monitoring solution s These alarms and notifications can be used with services such as AWS Lambda to trigger actions based on the alert Automating responses to operational events is a key element to enabling mitigation rollback and recovery with maximum responsiveness Tools should also be in place to centrally monitor workloads and create appropriate alerts and notifications based on available logs and metrics that relate to key operational indicators This includes alerts and notifications for outofbound anomalies as well as service or component failures Ideally when low performance thresholds are crossed or failures occur the s ystem has been architected to automatically self heal or scale in response to such notifications and alerts ArchivedAmazon Web Services – Infrastructure Event Readiness Page 27 As previously noted AWS offers services (Amazon Simple Queue Service (SQS) and Amazon SNS) to ensure appropriate alerting and notification in response to unplanned operational events as well as for enabling automated responses Operational Readiness (Day of Event) Plan Execution On the day of the event the core team involved with the infrastructure event should be on a conference call monitoring realtime dashboards Runbooks should be fully developed and available Make sure that t he communications plan is well defined and known to all support staff and stakeholders and that a contingency plan is in place War Room During the event have an open conference bridge with the following participants: • The responsible application and operations team s • Operations team leadership • Technical support resources from 
external partners directly involved with technical delivery • Business stakeholders Throughout mos t of the event the conversation of this conference b ridge should be minimal If an adverse operational event arises the key people who can respond to the event will already be on this bridge ready to act and consult Leadership Reporting During the event send an email hourly to key leadership stakeholders This update should include the following : • Status summary: Green (on track) Yellow (issues encountered) Red (major issue) • Key metrics update ArchivedAmazon Web Services – Infrastructure Event Readiness Page 28 • Issues encountered status of remedy plan and the estimated time to resolution (ETA) • Phone number of the war room conference bridge ( so stakeholders may join if needed ) At the conclusion of the event a summary email should be sent that follow s the following format : • Overall event summary with synopsis of issues encountered • Final metrics • Updated remedy plan that details the issues and resolutions • Key points ofcontact for any follow ups that stakeholders may have Contingency Plan Each step in the event’s preparation process should have a corresponding contingency action that has been verified in a test environment Address the following q uestions as you put together a contingency plan: • What are the worst case scenarios that can occur during the event? • What types of events would cause a negative public relations impact? • Which third party components and services m ight fail during the event? • Which metrics should be monitored that would indicate that a worst case scenario is occurring? • What is the rollback plan for each identified worst case scenario? • How long will each rollback process take? What is the acceptable Recovery Point Objective (RPO) and Recovery Time Objective (RTO)? 
(See Using AWS for Disaster Recovery28 for additional information on these concepts ) Consider the following t ypes of contingency plans : ArchivedAmazon Web Services – Infrastructure Event Readiness Page 29 • Blue/Green Deployment : If rolling out a new production app or environment keep the prior production build online and available (in case a switch back is needed) • Warm Pilot: Launch a minimal environment in a second Region that can quickly scale up if needed If a failure occu rs in the primary Region scale up the second Region and switch traffic over to it • Maintenance Mode Error Pages : Check any pre configured error page s and triggers at each layer of your web service Be prepared to inject a more specific message into these error pages if any operational failures of any of these layers occur s Test the contingency plan for each documented worst case scenario Post Event Activities Post Mortem Analysis We recommend a postmortem analysis as part of an infrastructure event management lifecycle Post mortems allow you to collaborate with each team involved and identify areas that m ight need further optimization such as operational procedures implementation details failover and recovery procedures etc This is especia lly relevant if an application stack encountered disruptions during the event and a r oot cause analysis (RCA) is needed A postmortem analysis help s provide data points and other essential information needed in an RCA document Wind Down Process Immediate ly following the conclusion of the infrastructure event the wind down process should begin During this period monitor relevant application s and services to ensure traffic has reverted back to normal production levels Use the health dashboards created during the event’s preparation phase to verify the normalization of traffic and transaction rates Wind down periods for some events may be linear and straightforward while others may experience uneven or more gradual reductions in volume Some traffic patterns from the event may persist For example recovering from a surge in traffic generally requires straightforward wind down procedures whereas an application deployment or expansion into a new geographical Region may have lon glasting effects requi ring you to careful ly monitor new traffic patterns as part of the permanent application stack ArchivedAmazon Web Services – Infrastructure Event Readiness Page 30 At some point following the completion of the event you must determine when it is safe to end event management operations Refer to the previously documented “normal” values for key metrics to help determine when to declare that an event is completed or ended We recommend splitting wind down activities into two branches which could have different timelines Focus the first branch on operational m anagement of the even t such as sending communications to internal and external stakeholders and partners and the resetting of service limits Focus the second branch on technical aspects of the wind down such as scale down procedures validation of the health of the environment and criteria for determining whether architectural changes should be reve rted or committed The timeline associated with each of those branches can vary depending on the nature of the event key metrics and c ustomer comfort We’ve outlined some common tasks associated with each branch in Tables 2 and 3 to help you determine the appropriate time toend management for an event Table 2: First branch: o perational winddown tasks Task Description Communications 
Notification to internal and external stakeholders that the event has ended The time toend communication should be aligned with the definition of the completion of the event Use “back to healthy” metrics to determine when it is appropriate to end commun ication Alternatively you can end communication in tiers For example you could end the war room bridge but leav e the event escalation procedures intact in case of post event failures Service Limits/Cost Containment Although it may be tempting to retain an elevated service limit after an event keep in mind that service limits are also used as a safety net Service limits protect you and your costs by preventing excess service usage be that a compromised account or misconfigured automation Repo rting and Analysis Data collection and collation of event metrics accom panied by analytical narratives showing patterns trends problem areas successful procedures ad hoc procedures timeline of event and whether or not success criteria were met shoul d be develope d and distributed to all internal parties identified in the communications plan A detailed cost analysis should also be developed to show the operational expense of supporting the event Optimization Tasks Enterprise organizations evolve over time as they continue to improve their operations Operational optimization requires the constant collection of metrics operational trends and lessons learned from events to uncover opportunities for improvement Optimiz ation ties back with prepar ation to form a feedback loop to address operational issues and prevent them from reoccurr ing ArchivedAmazon Web Services – Infrastructure Event Readiness Page 31 Table 3: Second branch: t echnical winddown tasks Task Description Service Limits/Cost Containment Although it may be tempting to retain elevated service limit s after an event keep in mind that service limits also serve the purpose of being a safety net Service limits protect your operations and operating costs by prevent ing excess service usage either through malicious activity stemming from a compromis ed account or through misconfigured automation Scale Down Procedures Revert resources that were scaled up d uring the preparation phase The se ite ms are unique to your architecture but the following examples are common : • EC2/RDS instance size • Auto S caling configuration • Reserved capacity • Provisioned Input/Output Operations Per Second (PIOPS ) Validation of Health of Environment Compar e to baseline metrics and review production health to verify that after the event and after scale down procedures have been completed the systems affected are reporting normal behavior Disposition of Architectural Changes Some changes made in preparation for the event may be worth keeping depending on the nature of the event and observation of operational metrics For exam ple expansion into a new geographical Region might require a permanent increase of resources in that Region or raising certain service limits or configuration parameters such as number of partitions in a DB or shards in a stream of PIOPS in a volume might be a performance tuning measure that should be persisted Optimize Perhaps the most important component of infrastructure event management is the post event analysis and the identification of operational and architectural challenges observed and opp ortunities for improvement Infrastructure events are rarely one time events They might be seasonal or coincid e with new releases of an application or they might be part of the growth of the 
company as it expands into new markets and territories Thus every infrastructure event is an opportunity to observe improve and prepare more effectively for the next one Conclusion AWS provides building blocks in the form of elastic and programmable products and services that your company can assemble to support virtually ArchivedAmazon Web Services – Infrastructure Event Readiness Page 32 any scale of workload With AWS infrastructure event guidelines and best practices coupled with our complete set of highly available services your company can design and prepare for major business events and ensure that scaling demands can be met smoothly and dynamically ensuring fast response and global reach Contributors The following individuals and organizations contributed to this document: • Presley Acun a AWS Enterprise Support Manager • Kurt Gray AWS Global Solutions Architect • Michael Bozek AWS Sr Technical Account Manager • Rovan Omar AWS Technical Account Manager • Will Badr AWS Technical Account Manager • Eric Blankenship AWS Sr Technical Account Manager • Greg Bur AWS Technical Account Manager • Bill Hesse AWS Sr Technical Account Manager • Hasan Khan AWS Sr Technical Account Manager • Varun Bakshi AWS Sr Technical Account Manager • Fatima Ahmed AWS Specialist Technical Account Manager (Security) • Jeffrey Lyon AWS Manager DDoS Ops Engineering Further Reading For additional reading on operational and architectural best practices see Operational Checklists for AWS 29 We recommend that reader s review AWS Well Architected Framework30 for a structured approach to evaluating their cloud based application delivery stacks AWS offers Infrastructure Event Manage ment (IEM) as a premium support offering for customers desiring more direct involvement of AWS Technical Account Manager and Support Engineers in their design planning and day of event operations For more details about the AWS IEM premium support offerin g please see Infrastructure Event Management 31 ArchivedAmazon Web Services – Infrastructure Event Readiness Page 33 Appendix Detailed Architecture Review Checklist YesNo N/A Security Y—N—N/A We rotate our AWS Identity and Access Management ( IAM ) access keys and user password and the credentials for the resources involved in our application at most every 3 months as per AWS security best practices We apply password policy in every account and we use hardware or virtual multifactor authentication (MFA) devices Y—N—N/A We have internal security processes and controls for controlling unique role based least privilege access to AWS APIs leveraging IAM Y—N—N/A We have removed any confidential or sensitive information including embedded public/pr ivate instance key pairs and have reviewed all SSH authorized keys files from any customized Amazon Machine Images (AMIs) Y—N—N/A We use IAM roles for EC2 instances as convenient instead of embedding any credentials inside AMIs Y—N—N/A We segregate IAM administrative privileges from regular user privileges by creating an IAM administrative role and restricting IAM actions from other functional roles Y—N—N/A We apply the latest security patches on our EC2 instances for either Windows or Linux instan ces We use operating system access controls including Amazon EC2 Security Group rules VPC network access control lists OS hardening host based firewall intrusion detection/prevention monitoring software configuration and host inventory Y—N—N/A We e nsure that the network connectivity to and from the organization’s AWS and corporate environments uses a 
transport of encryption protocols Y—N—N/A We apply a centralized log and audit management solution to identify and analyze any unusual access pattern s or any malicious attacks on the environment Y—N—N/A We have Security event and incident management correlation and reporting processes in place Y—N—N/A We ma ke sure that there isn’t unrestricted access to AWS resources in any of our security groups Y—N—N/A We use a secure protocol (HTTPS or SSL) up todate security policies and cipher protocol s for a front end connection (client to load balancer) The requests are encrypted between the clients and the load balancer which is more secure Y—N—N/A We configure our Amazon Route 53 MX resource record set to have a TXT resource record set that contains a corresponding Sender Policy Framework (SPF) value to specify the servers that are authorized to send email for our domain Y—N—N/A We archite ct our application for DDoS resiliency by using services that operate from the AWS Global Edge Network like Amazon CloudFront and Amazon Route 53 as well as additional AWS services that mitigate against Layer 3 through 6 attacks (see Summary of DD oS Miti gation Best Practices in the Appendix ) ArchivedAmazon Web Services – Infrastructure Event Readiness Page 34 YesNo N/A Reliability Y—N—N/A We deploy our application on a fleet of EC2 instances that are deployed into an Auto Scaling group to ensure automatic horizontal scaling based on a pre defined scaling plans Learn more Y—N—N/A We us e an Elastic Load Balancing health check in our Auto Scaling group configuration to ensure that the Auto Scaling group acts on the health of the underlying EC2 instances (Applicable only if you use load balancers in Auto Scaling groups ) Y—N—N/A We deploy critical components of our applications across multiple Availability Zones are appropriately repl icating data between zones We test how failure within these components affects application availability using Elastic Load Balanc ing Amazon Route 53 or any appropriate third party tool Y—N—N/A In the database layer we deploy our Amazon RDS instances i n multiple Availability Zones to enhance database availability by synchronously replicating to a standby instance in a different Availability Zone Y—N—N/A We define processes for either automatic or manual failover in case of any outage or performance de gradation Y—N—N/A We use CNAME records to map our DNS name to our services We DON’T use A records Y—N—N/A We configure a lower timetolive (TTL) value for our Amazon Route 53 record set This avoid s delays when DNS resolvers request updated DNS recor ds when rerouting traffic (For example this can occur when DNS failover detects and responds to a failure of one of your endpoints ) Y—N—N/A We have at least two VPN tunnels configured to provide redundancy in case of outage or planned maintenance of the devices at the AWS endpoint Y—N—N/A We use AWS Direct Connect and have two Direct Connect connections configured at all times to provide redundancy in case a device is unavailable The connections are provisioned at different Direct Connect locations to provide redundancy in case a location is unavailable We configure the connectivity to our virtual private gateway to have multiple virtual interfaces configured across multiple Direct Connect connections and location s Y—N—N/A We us e Windows instances and en sure that we are using the latest paravirtual ( PV) drivers PV driver helps optimize driver performance and minimize runtime issues and security risks We ensure that EC2Config agent 
is running the latest version on our Windows instance Y—N—N/A We take snapshots of our Amazon Elastic Block Store (EBS) volumes to ensure a point intime recovery in case of failure Y—N—N/A We use separate Amazon EBS volumes for the operating system and application/database data wh ere appropriate Y—N—N/A We appl y the latest kernel software and drivers patches on any Linux instances ArchivedAmazon Web Services – Infrastructure Event Readiness Page 35 YesNo N/A Performance Efficiency Y—N—N/A We fully test our AWS hosted application components including performance testing prior to going live We also perform load testing to ensure that we have used the right EC2 instance size number of IOPS RDS DB instance size etc Y—N—N/A We run a usag e check report against our services limits and ma ke sure that the current usage across AWS services is at or less than 80% of the service limits Learn more Y—N—N/A We us e Content Delivery/Distribution Network (CDN) to utilize caching for our application (Amazon CloudFront) and as a way to optimize the delivery of the content and the automatic distribution of the content to the nearest edge location to the us er Y—N—N/A We understand that some dynamic HTTP request headers that Amazon CloudFront receives (User Agent Date etc) can impact the performance by reducing the cache hit ratio and increasing the load on the origin Learn more Y—N—N/A We ensure that the maximum throughput of an EC2 instance is greater than the aggregate maximum throughput of the attached EBS volumes We also use EBS optimized instances with PIOP S EBS volumes to get the expected performance out of the volumes Y—N—N/A We ensure that the solution design doesn’t have a bottleneck in the infrastructure or a stress point in the database or the application design Y—N—N/A We deploy monitoring on application resources and configure alarms based on any performance breaches using Amazon CloudWatch or third party partner tools Y—N—N/A In our designs we avoid using a large number of rules in security group (s) attached to our application instances A large number of rules in a security group may degrade performance YesNo N/A Cost Optimization Y—N—N/A We note whether the infrastructure event may involve over provisioned capacity that needs to be cleaned up after the event to avoid unnecessary cost Y—N—N/A We use right sizing for all of our infrastructure components including EC2 instance size RDS DB instance size caching cluster nodes size and numbers Redshift Cluster nodes size and numbers and EBS vol ume size Y—N—N/A We use Spot Instances when it’s convenient Spot Instances are ideal for workloads that have flexible start and end times Typical use cases for Spot instances are: batch processing report generation and high performance computing work loads Y—N—N/A We have predictable application capacity minimum requirements and take advantage of Reserved Instances Reserved Instances allow you to reserve Amazon EC2 computing capacity in exchange for a significantly discounted hourly rate compared to On Demand instance pricing ArchivedAmazon Web Services – Infrastructure Event Readiness Page 36 1 https://awsamazoncom/answers/account management/aws tagging strategies/ 2 https://awsamazoncom/blogs/aws/resource groups andtagging/ 3 https://awsamazoncom/sqs/ 4 http://docsawsamazoncom/general/latest/gr/randehtml 5 https://aw samazoncom/emr/ 6 https://awsamazoncom/rds/ 7 https://awsamazoncom/ecs/ 8 https://awsamazoncom/sns/ 9 https://awsamazoncom/blogs/compute/using awslambda with auto scaling lifecycle hooks/ 
10. http://docs.aws.amazon.com/lambda/latest/dg/welcome.html
11. https://aws.amazon.com/blogs/aws/new-auto-recovery-for-amazon-ec2/
12. https://aws.amazon.com/answers/configuration-management/aws-infrastructure-configuration-management/
13. https://d0.awsstatic.com/whitepapers/Big_Data_Analytics_Options_on_AWS.pdf
14. http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency
15. https://aws.amazon.com/elasticache/
16. https://aws.amazon.com/cloudfront/
17. https://docs.aws.amazon.com/waf/latest/developerguide/getting-started-ddos.html
18. https://aws.amazon.com/answers/security/aws-waf-security-automations/
19. https://aws.amazon.com/mp/security/WAFManagedRules/
20. https://docs.aws.amazon.com/waf/latest/developerguide/getting-started-fms.html
21. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts-on-demand-reserved-instances.html
22. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html
23. https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html
24. https://aws.amazon.com/about-aws/whats-new/2014/07/31/aws-trusted-advisor-security-and-service-limits-checks-now-free/
25. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_automated_manual.html
26. http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html
27. http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
28. https://aws.amazon.com/blogs/aws/new-whitepaper-use-aws-for-disaster-recovery/
29. http://media.amazonwebservices.com/AWS_Operational_Checklists.pdf
30. http://d0.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf
31. https://aws.amazon.com/premiumsupport/iem/
Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle First published December 20 16 Updated March 25 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement betw een AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Why JD Edwards EnterpriseOne on Amazon RDS? 1 Licensing 2 Performance management 3 Instance sizing 3 Disk I/O management —provisioned IOPS 4 High availability 4 High availability features of Amazon RDS 5 Oracle security in Amazon RDS 6 Installing JD Edwards EnterpriseOne on an Amazon RDS for Oracle DB instance 7 Prerequisites 7 Preparation 8 Key installation tasks 8 Creating your Oracle DB instance 8 Configure SQL Developer 13 Installing the platform pack 14 Modifying the default scripts 16 Advanced configuration 23 Running the installer 27 Logging into JD Edwards EnterpriseOne on the deployment server 28 Validation and testing 29 Running on Amazon RDS for Oracle Enterprise Ed ition 30 Conclusion 31 Appendix: Dumping deployment service to RDS 31 Contributors 33 Document revisions 33 Abstract Amazon Relational Database Service (Amazon RD S) is a flexible costeffective easy touse service fo r running relational database s in the cloud In thi s whitepaper you will learn how to deplo y Oracle’ s JD Edward s EnterpriseOne (version 92 ) using Amazon RD S for Oracle Because thi s whitepape r focuse s on the database component s of the installation process ite ms such a s JD Edwards EnterpriseOne application serve rs and application serve r node scaling will not be covered This whitepaper is aimed at IT directors JD Edwards EnterpriseOne architects CNC administrators DevOps engineers and Oracle Database Administrators Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 1 Introduction There are two ways to de ploy the Oracle database backend for a JD Edwards EnterpriseOne installation on Amazon Web Services (AWS): by using a database managed by the Amazon Relational Database Service (Amazon RDS) or by deploying and managing a database on Amazon Elastic Compute Cloud (Amazon EC2) infrastructure This whitepaper focuses on the deployment of JD Edwards EnterpriseOne in an AWS environment using Amazon RDS for Oracle Why JD Edwards EnterpriseOne on Amazon RDS? 
Simplicity scalability and stability are all important reasons to install the JD Edwards Enter priseOne applications suite on Amazon RDS Integrated high availability features provide seamless recoverability between AWS Availability Zones (AZs) without the complications of log shipping and Oracle Data Guard Using RDS you can quickly back up and restore your database to a chosen point in time and change the size of the server or speed of the disks all within the AWS Management Console Management advantages are at your fingertips with the AWS Console Mobile Application All this coupled with intelligent monitoring and management tools provid es a complete solution for implementing Oracle Database in Amazon RDS for use with JD Edwards EnterpriseOne When designing your JD Edwards EnterpriseOne footprint consider the entire lifecycle of JD Edwards EnterpriseOne on AWS which includes complete disaster recovery Disaster recovery is not an afterthought it’s encapsulated in the design fundamentals When your installation is complete you can take backups refresh subsid iary environments and manage and monitor all critical aspects of your environment from the AWS Management Console You can enable monitoring to ensure that everything is sized correctly and performing well Using Amazon RDS for Oracle you can have enterp risegrade high availability in the database layer implementing Amazon RDS Multi AZ configuration You can use this high availability feature even with Oracle Standard Edition to reduce the to tal cost of ownership (TCO) for running the JD Edwards application in the cloud AWS gives you the ability to disable hyperthreading and the numb er of vCPUs in use in your Amazon Elastic Compute Cloud (Amazon EC2) instances and your RDS for Oracle instances to reduce licensing cost and TCO In JD Edwards EnterpriseOne the application processing is CPU intensive and the CPU frequency and number of cores available to the enterprise server plays a large part affecting the performance and throughput of the system AWS provides a wide range of instance classes including z1d Instances delivering a sustained all core frequency of up to 40 gigah ertz (GHz) the fastest of any cloud instance Using such Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 2 high clock frequency instances for the application tier can help reduce the number of cores needed to run the same workload This means you can get the same performance using a smaller instance clas s This makes AWS a highly suitable public cloud environment for running JD Edwards applications with high performance and throughput requirement AWS Support provides a mix of tools and technology p eople and programs designed to proactively help you optimize performance lower costs and innovate faster With core technological capabilities for running high performance JD Edwards deployments combined with a strong support framework AWS provides a g reat experience for customers as a preferred choice for hosting their JD Edwards implementations Amazon RDS for Oracle is a great fit for JD Edwards EnterpriseOne JD Edwards EnterpriseOne also provides for heterogeneous database support which means that there is a loose coupling between enterprise resource planning (ERP) and the database allowing i nstallation of Microsoft SQL Server for example as an alternative to Oracle Licensing Purchase of JD Edwards EnterpriseOne includes the Oracle Technology Foundation component The Oracle Techno logy Foundation for JD Edwards EnterpriseOne provides all the 
software components you need to run Oracle's JD Edwards EnterpriseOne applications. Designed to help reduce integration and support costs, it is a complete package of the following integrated, open-standards software products that enable you to easily implement and maintain your JD Edwards EnterpriseOne applications:
• Oracle Database
• Oracle Fusion Middleware
• JD Edwards EnterpriseOne Tools

If you have these licenses, you can take advantage of the Amazon RDS for Oracle Bring Your Own License (BYOL) option. See the Oracle Cloud Licensing Policy for details.

Note: With the BYOL option, you may need to acquire additional licenses for standby database instances when running Multi-AZ deployments. See the JD Edwards EnterpriseOne Licensing Information User Manual for a detailed description of the restricted-use licenses provided in the Oracle Technology Foundation for the JD Edwards EnterpriseOne product.

Some historical JD Edwards EnterpriseOne licensing agreements do not include Oracle Technology Foundation. If that is the case for you, you can choose the Amazon RDS "License Included" option, which includes licensing costs in the hourly price of the service. If you have questions about any of your licensing obligations, contact your JD Edwards EnterpriseOne licensing representative. For details about licensing Oracle Database on AWS, see the Oracle Cloud Licensing Policy.

Performance management
Instance sizing
Increasing the performance of a database (DB) instance requires an understanding of which server resource is causing the performance constraint. If database performance is limited by CPU, memory, or network throughput, you can scale up by choosing a larger instance type. In an Amazon RDS environment, this type of scaling is simple. Amazon RDS supports several DB instance types. At the time of this writing, instance types that support the Standard Edition 2 (SE2) socket requirements range from:
• The burstable "small" (db.t3.small)
• The latest generation general purpose db.m5.4xlarge, which features 16 vCPUs, 64 gigabytes (GB) of memory, and up to 10 gigabits per second (Gbps) of network performance
• The latest generation memory optimized db.r5.4xlarge, with 16 vCPUs, 128 GB of memory, and up to 10 Gbps of network performance
• The latest generation memory optimized DB instance class db.z1d.3xlarge, with a sustained all-core frequency of up to 4.0 GHz, 12 vCPUs, 96 GB of memory, and up to 10 Gbps of network performance
• The latest generation memory optimized DB instance class db.x1e.4xlarge, with a very high memory-to-vCPU ratio, 16 vCPUs, 488 GB of memory, and up to 10 Gbps of network performance

For currently available instance classes and options, see DB instance class support for Oracle.

The first time you start your Amazon RDS DB instance, choose the instance type that seems most relevant in terms of the number of cores and amount of memory you are using. With that as the starting point, you can then monitor the performance to determine whether it is a good fit or whether you need to pick a larger or smaller instance type.

You can modify the instance class for your Amazon RDS DB instance by using the AWS Management Console or the AWS Command Line Interface (AWS CLI), or by making application programming interface (API) calls in applications written with the AWS Software Development Kit (SDK). Modifying the instance class will cause a restart of your DB instance, which you can set to occur right away or during the next weekly maintenance window that you specify when creating the instance. (Note that the weekly maintenance window setting can also be changed.)
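As one hedged illustration of the SDK route described above, the following minimal boto3 sketch scales a DB instance to a larger class and waits for it to become available again. The instance identifier and target class are hypothetical values, not values taken from this walkthrough.

```python
# Hedged sketch: scale an RDS for Oracle instance to a larger class with boto3.
# The instance identifier and target class below are illustrative assumptions.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="jde-oracle-db",   # hypothetical identifier
    DBInstanceClass="db.r5.2xlarge",        # target instance class
    # False defers the change (and the restart it causes) to the next
    # maintenance window; True applies it right away.
    ApplyImmediately=False,
)

# Poll until the modification is finished and the instance is usable again.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="jde-oracle-db")
```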
Increasing instance storage size
Amazon RDS enables you to scale up your storage without restarting the instance or interrupting active processes. The main reason to increase the Amazon RDS storage size is to accommodate database growth, but you can also do this to improve input/output (I/O) performance. For an existing DB instance with gp2 EBS volumes, you might observe some I/O capacity improvement if you scale up your storage. Scaling storage capacity can be done manually, or you can set up autoscaling for storage. For details on RDS storage management, see Working with Storage for Amazon RDS DB Instances.

Disk I/O management—provisioned IOPS
Provisioned I/O operations per second (IOPS) is a storage option that gives you control over your database storage performance by enabling you to specify your IOPS rate. Provisioned IOPS is designed to deliver fast, predictable, and consistent I/O performance. At the time of this writing, you can provision a maximum of 80,000 IOPS per instance for EBS-optimized instance classes. The maximum storage size supported in an instance is 64 tebibytes (TiB). Here are some important points about Provisioned IOPS in Amazon RDS:
• The maximum ratio of Provisioned IOPS to requested volume size (in GiB) is 50:1. For example, a 100 GiB volume can be provisioned with up to 5,000 IOPS.
• If you are using Provisioned IOPS storage, AWS recommends that you use DB instance types that are optimized for Provisioned IOPS. You can also convert a DB instance that uses standard storage to use Provisioned IOPS storage.
• The actual amount of your I/O throughput can vary depending on your workload.

High availability
The Oracle database provides a variety of features to enhance the availability of your databases. You can use the following Oracle Flashback technology features in both Amazon RDS and in Amazon EC2, which support multiple types of data recovery:
RDS fo r Oracle see Amazon RDS Multi AZ Deployments The following figure shows an example of a high availability architecture in Amazon RDS High availability architecture in Amazon RDS Amazon Web Services Installing JD Edwards EnterpriseOne on Ama zon RDS for Oracle 6 You should also deploy the rest of the application stack including application and web servers in at least two Availability Zones to ensure that your applications continue to operate in the event of an Availability Zone failure In the design of your high availabi lity implementation you can also use Elastic Load Balancing which automatically distributes the load across application servers in multiple Availability Zones A failover to the standby DB instan ce typically takes between one and three minutes and will occur in any of the following events: • Loss of availability in the primary Availability Zone • Loss of network connectivity to the primary DB instance • Compute unit failure on the primary DB instance • Storage failure on the primary DB instance • Scaling of the compute class of your DB instance either up or down • System maintenance such as hardware replacement or operating system upgrades Running Amazon RDS in multiple Availability Zones has additional bene fits: • The Amazon RDS daily backups are taken from the standby DB instance which means that there is usually no I/O impact to your primary DB instance • When you need to patch the operating system or replace the compute instance updates are applied to the standby DB instance first When complete the standby DB instance is promoted as the new primary DB instance The availability impact is limited to the failover time resulting in a shorter maintenance window Oracle security in Amazon RDS Amazon RDS enables you to control network access to your DB instances using security groups By default network access is limited to other hosts in the Amazon Virtual Private Cloud (Amazon VPC) where your instance is deployed Using AWS Identity and Access Management (AWS IAM) you can manage access to your Amazon RDS DB instances For example you can authorize (or deny) administrative users under your AWS Account to creat e describe modify or delete an Amazon RDS DB instance You can also enforce multi factor authentication (MFA) For more information about using IAM to manage administrative access to Amazon RDS see Identity and access management in Amazon RDS Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 7 Amazon RDS offers optional storage encryption that uses AES 256 encryption and automatically encrypts any snapshots and snapshot restores You can control who can decrypt your data by using AWS Key Management Service (AWS KMS) In addition Amazon RDS supports several Oracle Database security features: • Amazon RDS can protect data in motion using Secure Sockets Layer (SSL) or native network encryption that protects data in motion using Oracle Net Services You can choose between AES Triple DES and RC4 encryption • You can also store database credentials using AWS Secrets Manager Installing JD Edwards EnterpriseOne on an Amazon RDS for Oracle DB instance Installing JD Edwards EnterpriseOne is often seen as a complex task that involves setting up a server manager and the JD Edwards EnterpriseOne deployment server followed by installing the platform pack In this section you will learn an alternative process for installing the platform pack which is tailored to ensure a successful installation of JD Edwards EnterpriseOne on an Amazon RDS for Oracle 
database instance (referred to from this point on as an Oracle DB instance).

Prerequisites
To install JD Edwards EnterpriseOne on Amazon RDS for Oracle:
• You should be familiar with the JD Edwards EnterpriseOne installation process and have an understanding of the fundamentals of AWS architecture.
• You should have a functional AWS account with appropriate IAM permissions.
• You should have created an Amazon VPC with associated subnet groups and security groups, and it should be ready for use by the Amazon RDS for Oracle service.
• You should have a local database on your deployment server that you can connect to with Oracle SQL Developer.

Note: The deployment server will have two separate sets of Oracle binaries: a 32-bit client and a 64-bit server engine (named e1local).

Preparation
The process described in this whitepaper is based on the standard JD Edwards EnterpriseOne installation processes, which are described in the JD Edwards EnterpriseOne Applications Installation Guide. Prior to continuing, follow the instructions in the JD Edwards EnterpriseOne Applications Installation Guide until section 4.5 ("Understanding the Oracle Installation"). When you have completed the steps leading up to section 4.5, follow the rest of the instructions in this whitepaper to successfully install JD Edwards EnterpriseOne on an Oracle DB instance.

Key installation tasks
The key elements of installing JD Edwards EnterpriseOne on an Oracle DB instance include:
• Creating the instance
• Configuring the SQL*Plus Instant Client
• Installing the platform pack
• Modifying the original installation scripts that are provided

Creating your Oracle DB instance
Using the AWS Management Console, follow these steps.
1. From the top menu bar, choose Services.
2. Choose Database > RDS. This opens the Amazon RDS dashboard, where you will create your Oracle DB instance.
3. Choose Create database.
4. To create an Oracle SE2 environment from the Create database screen, do the following:
   a. Under database creation method, choose Standard Create.
   b. Under Engine options, choose Oracle.
5. Under Edition, choose Oracle Standard Edition Two.
6. Under Version, choose the latest quarterly release of Oracle Database 19c (which is 19.0.0.0.ru-2020-04.rur-2020-04.r1 at the time of this publication).
7. Under License, choose bring-your-own-license. Oracle SE2 must be used in compliance with the latest Oracle licensing; contact Oracle should further information be required.
8. Under Templates, choose Production. (The AWS Management Console recommends using the default values for a production-ready environment or a development environment. For the purposes of this whitepaper, you will use a production environment.)
9. Under Settings, enter the configuration details for the database instance and credentials. For this example, use the following information:
   • DB Instance Identifier: jde92poc
   • Master Username: jde92pocMaster
   • Master Password: jde92pocMasterPassword
10. Under DB instance size, choose Memory Optimized classes (includes r and x classes).
11. From the dropdown menu, choose db.r5.xlarge.
12. Under Storage:
   a. For Storage type, choose General Purpose (SSD).
   b. For Allocated storage, choose 150 GiB.
   c. Select (check) Enable storage autoscaling.
   d. For Maximum storage threshold, select 500 GiB.

For the purposes of this example, use the settings mentioned above in step 6, steps 10 and 11, and step 12 to choose the Oracle version, instance class, and storage, respectively. These settings can be tailored to meet your specific requirements. AWS encourages you to consult with a JD Edwards EnterpriseOne supplier to ensure these settings are appropriate for your specific use case.

13. Under Availability & durability, choose Create a standby instance (recommended for production usage).
14. Under Connectivity, use the preconfigured VPC (JDE92) and the settings shown in the following figure. If you have appropriately configured subnet groups and VPC security groups, you can use them here.

Configure network and security settings

Note: The rest of this procedure assumes that you have already created a VPC to accommodate the Amazon RDS for Oracle installation and that the VPC name used is JDE92. If you need help, see the VPC documentation.

15. Under Database authentication options, choose Password authentication.
16. Expand the Additional configuration section; for Database options, enter the following settings:
   • Initial database name: jde92poc
   • DB parameter group: default.oracle-se2-19
   • Option group: default:oracle-se2-19
   • Character set: WE8MSWIN1252
17. For the Backup, Encryption, and Performance Insights sections, use the default settings for this example. However, because these settings do not impact the ability to install JD Edwards EnterpriseOne, AWS encourages you to experiment with and test these settings in your actual implementation.
18. Under Monitoring, choose Enable Enhanced monitoring.
   a. For Granularity, choose 15 seconds.
   b. For Monitoring Role, select default.
   c. Under Log exports, choose Alert log, Listener log, and Trace log.
   d. For Maintenance and Deletion protection, select the defaults. Because these settings do not impact the ability to install JD Edwards EnterpriseOne, you should experiment with and test these settings.
19. Click Create database to create the RDS for Oracle instance.

Creation of the Oracle DB instance begins. This can take some time to complete. Search for your instance to view the progress, and click the refresh icon to watch the progress of the Oracle DB instance creation.

Refreshing the progress view

When the Oracle DB instance is available for use, the Status changes to available.
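The same instance can also be created programmatically. The following hedged boto3 sketch mirrors the console choices above (oracle-se2 engine, BYOL, db.r5.xlarge, 150 GiB gp2 storage with autoscaling to 500 GiB, Multi-AZ, and the WE8MSWIN1252 character set); the subnet group name and security group ID are placeholders, not values defined in this walkthrough.

```python
# Hedged sketch: create the Oracle SE2 instance with boto3 instead of the
# console. Networking identifiers are placeholders; the remaining values
# mirror the walkthrough settings above.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="jde92poc",
    DBName="jde92poc",                       # initial database name
    Engine="oracle-se2",
    EngineVersion="19.0.0.0.ru-2020-04.rur-2020-04.r1",
    LicenseModel="bring-your-own-license",
    DBInstanceClass="db.r5.xlarge",
    MultiAZ=True,                            # create a standby in a second AZ
    StorageType="gp2",
    AllocatedStorage=150,                    # GiB
    MaxAllocatedStorage=500,                 # storage autoscaling ceiling, GiB
    CharacterSetName="WE8MSWIN1252",
    MasterUsername="jde92pocMaster",
    MasterUserPassword="jde92pocMasterPassword",   # prefer Secrets Manager in practice
    DBSubnetGroupName="jde92-db-subnet-group",     # placeholder
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder
)

# Wait until the instance reports the "available" status.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="jde92poc")
```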
class and storage respectively These settings can be tailored to meet your specific requirements AWS encourage s you to consult with a JD Edwards EnterpriseOne supplier to ensure these settings are appropriate for your specific use case 13 Under Availability & durability choose Create a standby instance (recommended for production usage) 14 Under Connectivity use the preconfigured VPC (JDE92) and the settings shown in the following figure If you have appropriately configured Subnet Groups and VPC Security Groups you can use them here Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 10 Configure network and security settings Note: The rest of this procedur e assumes that you have already created a VPC to accommodate the Amazon RDS for Oracle installation and that the VPC name used is JDE92 If you need help see VPC documentation 15 Under Database authentication options choose Password authentication 16 Expand the Additional configuration section for Database options enter the following settings: • Initial database name — jde92poc • DB par ameter group — defaultoracle se219 • Option group — defaultoracle se219 • Character set — WE8MSWIN1252 17 For the Backup Encryption and Performance Insights sections use the default settings for this example However because these settings do not impact the ability to install JD Edwards EnterpriseOne AWS encourage s you to experiment with and test these settings in your actual implementation 18 Under Monitori ng choose Enable Enhanced monitoring a For Granularity choose 15 seconds b For Monitoring Role select default c Under Log exports choose Alert log Listener log and Trace log d For Maintenance and Deletion Protection select the defaults Because these settings do not impact the ability to install JD Edwards EnterpriseOne you should experiment with and test these settings 19 Click Create database to create the RDS Oracle instance Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 11 Creation of the Oracle DB instance begins This can take some time to comp lete Search for your instance to view the progress Click the refresh icon to watch the progress of the Oracle DB instance creation Refreshing the progress view When the Oracle DB instance is available for use the Status changes to available Connecting to your Oracle DB instance When Amazon RDS creates the Oracle DB instance it also creates an endpoint Using this endpoint you can construct the connection string required to connect directly with your Oracle DB instance To allow network requests to your running Oracle DB instan ce you will need to authorize access For a detailed explanation of how to construct your connection string and get started see the Amazon RDS User Guide Endpoint for the Oracle DB instance The endpoint is allocated a Domain Name System (DNS) entry which you can use for connecting However to facilitate a better instal lation experience for JD Edwards EnterpriseOne a CNAME record is created so the endpoint can be more human readable The CNAME should be created in the Amazon Route 53 local internal zone and should point t o the new Oracle DB instance Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 12 Note: Creating an Amazon Route 53 record set is beyond the scope of this document For more assistance see the Amazo n Route53 User Guide As shown in the following figure you are creating a simple record called jde2poc You provide the RDS instance's endpoint in the Value/Route traffic to section 
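As an alternative to creating this record in the Route 53 console, the following sketch shows one way to script the CNAME with the AWS SDK for Python (boto3). The hosted zone ID, record name, and RDS endpoint shown here are placeholder values; substitute the ID of your internal hosted zone and the endpoint shown for your DB instance in the RDS console.

import boto3

route53 = boto3.client("route53")

# Placeholder values: replace with your internal hosted zone ID, the record
# name you want clients to use, and your DB instance endpoint from the
# RDS console.
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"
RECORD_NAME = "jde92poc.jde92.local"
RDS_ENDPOINT = "jde92poc.abcdefghijkl.us-east-1.rds.amazonaws.com"

response = route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "CNAME for the JD Edwards Oracle DB instance",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": RDS_ENDPOINT}],
                },
            }
        ],
    },
)
print(response["ChangeInfo"]["Status"])

Route 53 reports the change as PENDING and then INSYNC once the record has propagated to the zone's authoritative name servers.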
CNAME record set To ensure that connectivity is permitted from the internal subnets in both Availability Zones you will need to edit the security group for the Ora cle DB instance As shown in the following figure you have added an oraclerds inbound rule that is allowing connectivity from our internal IP (source) to the RDS instance Updating the security group Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 13 Configure SQL Developer Oracle SQL Developer i s used to validate that the appropriate connectivity and permissions are in place and that the Oracle DB instance is accessible SQL Developer is installed by default with your Oracle client Optionally however see SQL Developer 1921 Downloads to download a standalone version of SQL Developer The configuration information used to create the Oracle DB instance will be used as the SQL Developer con figuration parameters that are required to connect to the Oracle DB instance 1 In the New/Select Database Connection dialog box choose Test to perform a test connection to the Oracle DB instance A status of Success indicates that the test connection has run and successfully connected to the Oracle DB instance At this point connectivity to both e1local and jde92poc has been proven using the default 64 bit drivers supplied with SQL Developer Note: The 64 bit driver is selected by default based on the order of the client drivers in the Servers environment variable 2 To check the deployment server path variables in File Explorer (assuming Microsoft Windows 10) right click This PC and choose Properties 3 On the Advanced tab choose Environment Variables 4 Locate the Path environment system variable in the list Path s ystem variable Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 14 This enables the observation of the Path environment system variable The following example shows the 64 bit binaries listed before the 32 bit binaries for Oracle C:\JDEdwards \E920_1\PLANNER\bin32;C: \JDEdwards \E920_1\system\bin32; C:\Oracle64db\E1Local\bin;C:\app\e1dbuser \product\1210\client_1 \b in;C:\ProgramData \Oracle\Java\javapath;%SystemRoot% \system32;%Syste mRoot%;%SystemRoot% \System32 \Wbem;%SYSTEMROOT% \System32 \WindowsPowe rShell\v10\;C:\ProgramFiles \Amazon\cfnbootstrap \;C:\Program Files\Amazon\AWSCLI\” 5 To ensure that the remainder of the installation process works it is critical that SQL*Plus works correctly ; specifically name resolution with tnsnamesora From the deployment server EC2 instance open a command window and enter the following command: tnsping ellocal The file used for tnsping is located in the C:\Oracle64db \E1Local\network\admin folder In this directory you’ll make changes to the tnsnamesora file; specifically configuration of the e1local database (64 bit installat ion) 6 This step relates to the 64 bit libraries not to the libraries that the JD Edwards EnterpriseOne deployment server code uses The JD Edwards EnterpriseOne deployment server code uses 32 bit executables and the tnsnamesora file on the client side to connect to databases (which are 64bit) For this example these files are located in C:\app\e1dbuser\product\1210\client_1\network\admin Ensure that the Oracle DB instance is in the tnsnamesora file in both locations (32bit and 64 bit) To proceed you must be able to log into SQL*Plus to the Oracle DB instance using tnsnamesora Installing the platform pack The platform pack is run from the deployment server connecting to a remote database To proceed you need the Oracle 
Platform Pack for Windows You can obtain it from https://edeliveryoraclecom with the appropriate MOS (My Oracle Support) login Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 15 In this section the installation directory is C:\software\windowsPlatformPack \install To in stall the platform pack: 1 To run the Java based installation program for the Oracle Platform Pack for Windows run setupexe from within the installation directory 2 Choose Next 3 Under Select Installation Type choose Database and then choose Next 4 Under Specify Home Destination > Destination Leave the Name field as the default Under Path choose where to locate the installer files based on the installation preferences This is a temporary location and you can remove these files after the database is populated After you enter the file path choose Next 5 Under Would you like to Install or Upgrade EnterpriseOne choose Install and then choose Next 6 Under Database Options enter database information: a Database type — Oracle b Database server — The database server name is not important and you can use the name of the deployment server (in this case jde92dep ) c Enter and confirm your password d Choose Next 7 Under Administration and End User Roles use the defaults and choose Next 8 A warning appears Ignore it and choose Next Ignore the Database Server name warning Configuration for the Oracle DB instance and a username and password are supplied on the form Unique string identifiers are provided for the tablespace directory (c:\tablespace001 ) and the Index tablespace directory (c:\indexspace001 ) These will be replaced at a later stage of the installation process 9 Choose Run Scripts Manually to defer the execution of the installation scripts Important : Should the installation s cripts run at this stage the installation will fail Choose Next The installation process will attempt to connect to jde92poc using the information you provided This connection must succeed for the installation to proceed The following figure indicates that the installation process was able to connect to the Oracle DB instance specified ( jde92poc ) Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 16 Installation process connected to the Oracle DB instance 10 Choose Install The installation process starts and creates a set of specific database installati on scripts for the options selected throughout the platform pack installation wizard When installation is complete instead of the default scripts the custom values you provided are configured Because you selected Run Scripts Manually the database is not loaded but scripts are created specifically for the current input parameters As the installation process proceeds you can view logging at C:\JDEdwardsPPack \E920_1 Modifying the default scripts After modifying the default scripts the post installati on wizard installation scripts are created; however it is assumed that they will run on the database server itself As a result you need to modify these scripts to ensure a seamless installation on the Oracle DB instance When you view the specified inst allation directory ( C:\JDEdwardsPPack \E920_1 ) you will see that a folder structure was created You will make the required modifications within this directory Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 17 Folder structure for the installation directory The modifications required to achieve a seamless installation are summarized as follows : • Change the 
dpump_dir1 entry in all scripts to DATA_PUMP_DIR. The Data Pump files need to be moved from the various directories on the deployment server install media to the DATA_PUMP_DIR directory on the RDS DB instance using DBMS_FILE_TRANSFER.PUT_FILE. You can also use the Amazon S3 integration feature now available with RDS for Oracle to move the dump files. For details, see Integrating Amazon RDS for Oracle with Amazon S3 using S3_integration.

• Change the syntax of the CREATE TABLESPACE statements. Amazon RDS supports Oracle Managed Files (OMF) only for data files, log files, and control files. When creating data files and log files, you cannot specify physical file names. See Changing the Syntax of the CREATE TABLESPACE Statements in this document for additional details.

• Rename the pristine data dump file and the import data script. Change the name of the pristine data dump file and also the import data script for the TEST environment and pristine environment. (The standard scripts change the import DIR, and you are going to change the filename.)

• Change the database grants. Change the database grants to remove "create any directory", as this is not a grant that works on Amazon RDS. See Changing the Database Grants in this document for additional details.

Throughout this process, the updated scripts are located in the ORCL directory. You can run these scripts at any time by executing the following command. However, this is the master script for the database installation and you should NOT run it at this stage.

cmd> InstallOracleDatabase.BAT

If throughout this process you make any mistakes or encounter failures, run the following command. This command completely unloads and drops any database components that were created by the installation script.

cmd> drop_db.bat

You should back up all the scripts in the ORCL directory. If required, you can run the installer again to generate a set of new pristine scripts.

Create the JDE Installer's standard data pump directories

From SQL Developer, connected to the Amazon RDS for Oracle database instance, perform the following steps. The Windows global search and replace commands were completed using Notepad++; however, you can use any text editor.

Changing dpump_dir1

Use global search and replace for the *.sql and *.bat files in the c:\JDEdwardsPPack\E920_1\ORCL directory:

• Replace dpump_dir1 with DATA_PUMP_DIR
• Replace log_dir1 with DATA_PUMP_DIR

Find and replace the *.sql and *.bat files

Now create the data pump directories 'log_dir1' and 'dpump_dir1' as shown:

Sqldeveloper> exec rdsadmin.rdsadmin_util.create_directory('log_dir1');
Sqldeveloper> exec rdsadmin.rdsadmin_util.create_directory('dpump_dir1');

• Confirmation messages such as "anonymous block completed" are displayed; you can safely ignore them.
• You can confirm that the directories were created by running the following SQL statement:

SELECT directory_name, directory_path FROM dba_directories;

After replacing the *.sql and *.bat files, the code changes. For example, this code:

Impdp %SYSADMIN_USER%/%SYSADMIN_PSSWD%@%CONNECT_STRING% DIRECTORY=dpump_dir1
DUMPFILE=RDBSPEC01.DMP,RDBSPEC02.DMP,RDBSPEC03.DMP,RDBSPEC04.DMP
LOGFILE=log_dir1:Import_%USER%.log TABLE_EXISTS_ACTION=TRUNCATE EXCLUDE=USER

Becomes this code:

Impdp %SYSADMIN_USER%/%SYSADMIN_PSSWD%@%CONNECT_STRING% DIRECTORY=DATA_PUMP_DIR
DUMPFILE=RDBSPEC01.DMP,RDBSPEC02.DMP,RDBSPEC03.DMP,RDBSPEC04.DMP
LOGFILE=DATA_PUMP_DIR:Import_%USER%.log TABLE_EXISTS_ACTION=TRUNCATE EXCLUDE=USER

Changing the syntax of the CREATE TABLESPACE statements

By default, the pristine create tablespace statements found in files such as crtabsp_cont, crtabsp_shnt, and crtabsp_envnt look like the following example.

CREATE TABLESPACE &&PATH&&RELEASEt
logging
datafile '&&TABLE_PATH\&&PATH&&RELEASEt01.dbf' size 1500M,
         '&&TABLE_PATH\&&PATH&&RELEASEt02.dbf' size 1500M
autoextend on next 60M maxsize 5000M
extent management local autoallocate
segment space management auto
online;

These statements must be modified to reflect the following example.

CREATE bigfile TABLESPACE &&PATH&&RELEASEt
logging
datafile SIZE 1500M AUTOEXTEND ON MAXSIZE 5G;

Note: The next step of applying updates is either a manual or a scripted task due to differences in many of the tablespaces. The following updates must be applied.

crtabsp_cont

create bigfile tablespace &&PATH&&RELEASEt logging datafile size 1500M AUTOEXTEND ON MAXSIZE 5G;
create bigfile tablespace &&PATH&&RELEASEi logging datafile size 1500M AUTOEXTEND ON MAXSIZE 5G;

crtabsp_shnt

create bigfile tablespace sy&&RELEASEt logging datafile size 250M AUTOEXTEND ON MAXSIZE 750M;
create bigfile tablespace sy&&RELEASEi logging datafile size 100M AUTOEXTEND ON MAXSIZE 750M;
create bigfile tablespace svm&&RELEASEt logging datafile size 10M AUTOEXTEND ON MAXSIZE 150M;
create bigfile tablespace svm&&RELEASEi logging datafile size 10M AUTOEXTEND ON MAXSIZE 150M;
create bigfile tablespace ol&&RELEASEt logging datafile size 250M AUTOEXTEND ON MAXSIZE 350M;
create bigfile tablespace ol&&RELEASEi logging datafile size 100M AUTOEXTEND ON MAXSIZE 150M;
create bigfile tablespace dd&&RELEASEt logging datafile size 350M AUTOEXTEND ON MAXSIZE 450M;
create bigfile tablespace dd&&RELEASEi logging datafile size 125M AUTOEXTEND ON MAXSIZE 750M;

crtabsp_envnt

create bigfile tablespace &&ENV_OWNERctli logging datafile size 1000M AUTOEXTEND ON MAXSIZE 1500M;
create bigfile tablespace &&ENV_OWNERctlt logging datafile size 1000M AUTOEXTEND ON MAXSIZE 1500M;
create bigfile tablespace &&ENV_OWNERdtai logging datafile size 1000M AUTOEXTEND ON MAXSIZE 4500M;
create bigfile tablespace &&ENV_OWNERdtat logging datafile size 1000M AUTOEXTEND ON MAXSIZE 4500M;

Renaming the pristine data dump file and the import data script

These changes are made to ORCL\InstallOracleDatabase.BAT. You are changing DTA to DDTA to load the DEMO data as opposed to the empty tables.

approx line 363 – PRISTINE

@REM
@set USER=%PS_DTA_USER%
@set PSSWD=%PS_DTA_PSWD%
@set FROMUSER=%PS_DTA_FROMUSER%
@set LOAD_TYPE=DDTA
@set JDE_DTA=%DATABASE_INSTALL_PATH%\demodta
@echo ************************************************************
@echo create and load %USER% Business Data Tables
@echo
@echo "Calling Load for %PS_DTA_USER% load type DTA" >> logs\OracleStatus.txt
@echo "InstallOracleDatabase: #6 call load %PS_DTA_USER% DTA TESTDTA"
@call Load.bat
@if ERRORLEVEL 4 (
@goto abend

approx line 554 – TESTDTA

@rem
@if "%RUN_MODE%"=="INSTALL" (
@set USER=%DV_DTA_USER%
@set PSSWD=%DV_DTA_PSWD%
@set FROMUSER=%PS_DTA_FROMUSER%
@set LOAD_TYPE=DDTA
@set JDE_DTA=%DATABASE_INSTALL_PATH%\demodta
@echo ************************************************************
@echo create and load %DV_DTA_USER% Business Data Tables
@echo
@echo "Calling Load for %DV_DTA_USER% load type DTA" >> logs\OracleStatus.txt
@call Load.bat
@if ERRORLEVEL 4 (
@goto abend

Changing the database grants

Create_dir.sql has the following statement that you need to change. Amazon RDS for Oracle does not support creating directories on the RDS instance, so you must remove this grant.

Before

grant create session, create table, create view, create any directory, select any dictionary to jde_role;

After

grant create session, create table, create view, select any dictionary to jde_role;

Advanced configuration

Start a SQL Developer session to the RDS DB instance and log in as the administrative user (jde92pocmaster). Run the following SQL command.

SELECT directory_name, directory_path FROM dba_directories;

This is the result:

DIRECTORY_NAME       DIRECTORY_PATH
BDUMP                /rdsdbdata/log/trace
ADUMP                /rdsdbdata/log/audit
OPATCH_LOG_DIR       /rdsdbbin/oracle/QOpatch
OPATCH_SCRIPT_DIR    /rdsdbbin/oracle/QOpatch
DATA_PUMP_DIR        /rdsdbdata/datapump
OPATCH_INST_DIR      /rdsdbbin/oracle/Opatch
LOG_DIR1             /rdsdbdata/userdirs/01
DPUMP_DIR1           /rdsdbdata/userdirs/02

To see the files in the DATA_PUMP_DIR and LOG_DIR1 directories, run the following.

SELECT * FROM TABLE (RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) ORDER BY mtime;
SELECT * FROM TABLE (RDSADMIN.RDS_FILE_UTIL.LISTDIR('LOG_DIR1')) ORDER BY mtime;

The following commands delete a single file named Import_TESTCTL_CTL.log from the LOG_DIR1 and DATA_PUMP_DIR directories stored on the Oracle DB instance.

exec utl_file.fremove('LOG_DIR1','Import_TESTCTL_CTL.log');
exec utl_file.fremove('DATA_PUMP_DIR','Import_TESTCTL_CTL.log');

The DATA_PUMP_DIR is used in the following SQL command to generate delete statements for all of the log files in LOG_DIR1.

SELECT 'exec utl_file.fremove(''DATA_PUMP_DIR'','''|| filename ||''');'
FROM TABLE (RDSADMIN.RDS_FILE_UTIL.LISTDIR('LOG_DIR1'))
WHERE filename LIKE '%log'
ORDER BY mtime;

Moving DMP files

When connected to e1local on the deployment server using SQL Developer, run the following commands.

DROP DATABASE LINK jde92poc;

CREATE DATABASE LINK jde92poc
CONNECT TO jde92pocmaster IDENTIFIED BY "aws_Poc_Password"
USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=jde92poc.jde92.local)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=jde92poc)))';

CREATE OR REPLACE DIRECTORY DATA_PUMP_SRM AS 'C:\Oracle64db\admin\e1local\dpdump';

SELECT directory_name, directory_path FROM dba_directories;

These commands create the following:

• A new database directory to read the dump files from the deployment server
• A database link to the Amazon RDS for Oracle DB instance, to be a conduit to move the dump files from the deployment server to the Oracle DB instance

Copying DMP files from an ORCL directory to a specified DATA_PUMP directory

Locate the *.dmp files in the ORCL directory and copy them to C:\Oracle64db\admin\e1local\dpdump, as defined in the previous e1local database directory (DATA_PUMP_SRM). You'll see that there are two DUMP_DTA.DMP files in the find results. The one in demodta must be renamed DUMP_DDTA.DMP. It's important to name it exactly as specified because there are associated changes in the import scripts. DUMP_DTA.DMP comes from ORCL\proddta. The reason for this renaming is that one of the dump files (the larger one) is for DEMO data, which is imported into TESTDTA and PRISTINE, while the smaller file (DUMP_DTA.DMP) does not contain any data, just table and index
structures Now all of the *dmp files that must be copied into the Oracle DB instance are in an e1local directory named DATA_PUMP_SRM It’s time to move these files to the RDS DB instance directory named DPUMP_DIR1 that you created The following figure shows how this directory looks on the deployment server DPUMP_DIR1 directory on deployment server In the Appendix you will find a script you can use to copy the dmp files from the deployment server to the RDS DB instance via a database link Run this script from SQL Developer connected to the e1local database Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 26 When these commands finish successfully you can run the following command against the Oracle DB instance ( jde92poc ) to ensure that the files have arrived SELECT substr(filename130)type filesize MTIME FROM TABLE (RDSADMINRDS_FILE_UTILLISTDIR (‘DPUMP_DIR1')) ORDER BY mtime; The following output indicates that the files were transferred correctly A screens hot that shows the files were transferred correctly Confirming files are transferred : create bigfile tablespace &&ENV_OWNERctli logging datafile size 1000M AUTOEXTEND ON MAXSIZE 1500M ; create bigfile tablespace &&ENV_OWNERctlt logging datafile size 1000M AUTOEXTEND ON MAXSIZE 1500M ; create bigfile tablespace &&ENV_OWNERdtai logging datafile size 1000M AUTOEXTEND ON MAXSIZE 4500M ; create bigfile tablespace &&ENV_OWNERdtat logging datafile size 1000M AUTOEXTEND ON MAXSIZE 4500M Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 27 Change the database grants to not include ‘create any directory’ Because Amazon RDS Oracle does not support creating directories on the RDS instance the creation of directories in the installation scripts must be done manually You do this by using the AW S custom function rdsadminrdsadmin_utilcreate_directory Grants before Grant create session create table create view create any directory select any dictionary to jde_role; Grants after Grant create session create table create view select any dictionary to jde_role; Running the installer At this point you have made all the modifications that are required to facilitate the smooth installation of JD Edwards EnterpriseOne If you encounter any issues be sure that anything you defined in the installation wizard is also defined in ORCL\ORCL_SETBAT If you forget items such as passwords or settings you can retrieve them from this file However be sure to delete this file wh en the installation is complete Open a command window on the deployment server and run InstallOracleDatabasebat from the C: \JDEdwardsPPack \E920_1\ORCL directory You can use C:\JDEdwardsPPack \E920_1\ORCL\logs to track progress and view the script output You cannot view the output of the data pump operations because they are not multiples of the block size of the database When the installation is complete you should see that the database is populated The following screenshot is from Oracle SQL Develop er and shows you the properties of the target database All JD Edwards EnterpriseOne tablespaces now have space allocated and tables created Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 28 Properties of the target database You’ve now completed all the tasks for installing JD Edwards EnterpriseOne on the Amazon RDS Oracle DB instance The following steps enable you to verify that you can connect to the populated instance Logging into JD Edwards EnterpriseOne on the deployment server 1 Click the application 
launch icon to start JD Edwards EnterpriseOne The JD Edwards EnterpriseOne login screen is displayed 2 Enter your UserID and password 3 For Environment enter DV920 4 For Role enter *ALL Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 29 Logging in to DV920 for testing 5 Log out and then log back in to the jdeplan environment and continue with the standard installation Because there are no further deviations from a standard installation beyond this point you can proceed to create an installation plan and run the installation workbench Follow the instructions in section 5 of the JD Edwards EnterpriseOne installation process “ Working with Installation Planner for an Install ” Validation and testing The s uccessful completion of the installation workbench will give you confidence that the Amazon RDS Oracle database installation is working Proceeding to install web servers and enterprise servers and connecting them to the Amazon RDS for Oracle DB instance a re some of the remaining installation steps Remember to delete the dmp files on the Amazon RDS instance to ensure that they do not contribute to the amount of storage you are using on the Amazon RDS instance Any files stored in database directories con tribute to the space you are using in the Amazon RDS instance Use the following statement to build the commands you need to run to delete the dmp files Run this statement only when you know that your installation succeeded SELECT 'exec utl_filefremove(''DPUMP_DIR1'''''||filename|| ''');' FROM table(RDSADMINRDS_FILE_UTILLISTDIR('DPUMP_DIR1')) Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 30 WHERE filename LIKE '%DMP' ORDER BY mtime; Running on Amazon RDS for Oracle Enterprise Edition This paper walks through the implementation of J D Edwards on Amazon RDS for Oracle standard edition only However if you are running or plan to run on Amazon RDS for Oracle Enterprise Edition there are some additional features you can leverage in the areas of high availability and security • Flashback Table recovers tables to a specific point in time This can be helpful when a logical corruption is limited to one table or a set of tables instead o f to the entire database At the time of this publication the Flashback Database feature is available only on self managed Oracle databased on Amazon EC2 and not in Amazon RDS for Ora cle • Transparent Data Encryption (TDE) protects data at rest for customers who have purchased the Oracle Advanced Security option TDE provides transparent encryption of stored data to support your privacy and compliance efforts Applications do not have to be modified and will continue to work as before Data is automatically encrypted before it is written to disk and autom atically decrypted when reading from storage Key management is built in which eliminates the task of creating managing and securing encryption keys You can choose to encrypt tablespaces or specific table columns using industry standard encryption algorithms including Advanced Encryption Standard (AES) and Data Encryption Standard (Triple DES) • Oracle Virtual Private Database (VPD) enables you to create security polici es to control database access at the row and column level Essentially Oracle VPD adds a dynamic WHERE clause to an SQL statement that is issued against the table view or synonym to which an Oracle VPD security policy was applied Oracle VPD enforces se curity to a fine level of granularity directly on database tables views or synonyms Because 
you attach security policies directly to these database objects and the policies are automatically applied whenever a user accesses data, there is no way to bypass security.

• Fine Grained Auditing (FGA) can be understood as policy-based auditing. It enables you to specify the conditions necessary to generate an audit record. FGA policies are programmatically bound to a table or view. They allow you to audit an event only when conditions that you define are true; for example, only if a specific column has been selected or updated. Because not every access to a table is recorded, this creates more meaningful audit trails. This can be critical given the often commercially sensitive nature of the data retained in the JD Edwards EnterpriseOne backend databases.

Because the db.z1d instance class delivers a sustained all-core frequency of up to 4.0 GHz, the fastest of any cloud instance, it can also reduce costs for customers who use core-based licensing while running Enterprise Edition, since they will need fewer cores.

Conclusion

This whitepaper described many of the capabilities and advantages of using AWS and Amazon RDS as the foundation for installing the JD Edwards EnterpriseOne application. Specifically, this whitepaper focused on a way of configuring Amazon RDS for Oracle as the underlying database for the JD Edwards EnterpriseOne application. The whitepaper articulated all the steps for installing the JD Edwards EnterpriseOne application and the steps required to set up an Amazon RDS Oracle DB instance. Having JD Edwards EnterpriseOne and Amazon RDS for Oracle running in the AWS Cloud enables you to enjoy the advantages of simple deployment, high availability, security, scalability, and many additional services supported by Amazon RDS and AWS.

Appendix: Dumping the deployment server to RDS

The following code snippet shows example usage of the DBMS_FILE_TRANSFER package to transfer the Data Pump dump files from the deployment server to RDS for Oracle.

BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object => 'DATA_PUMP_SRM', source_file_name => 'DUMP_CTL.DMP',
    destination_directory_object => 'DPUMP_DIR1', destination_file_name => 'DUMP_CTL.DMP',
    destination_database => 'jde92poc');
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object => 'DATA_PUMP_SRM', source_file_name => 'RDBSPEC01.DMP',
    destination_directory_object => 'DPUMP_DIR1', destination_file_name => 'RDBSPEC01.DMP',
    destination_database => 'jde92poc');
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object => 'DATA_PUMP_SRM', source_file_name => 'RDBSPEC02.DMP',
    destination_directory_object => 'DPUMP_DIR1', destination_file_name => 'RDBSPEC02.DMP',
    destination_database => 'jde92poc');
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object => 'DATA_PUMP_SRM', source_file_name => 'RDBSPEC03.DMP',
    destination_directory_object => 'DPUMP_DIR1', destination_file_name => 'RDBSPEC03.DMP',
    destination_database => 'jde92poc');
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object => 'DATA_PUMP_SRM', source_file_name => 'RDBSPEC04.DMP',
    destination_directory_object => 'DPUMP_DIR1', destination_file_name => 'RDBSPEC04.DMP',
    destination_database => 'jde92poc');
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object => 'DATA_PUMP_SRM', source_file_name => 'DUMP_DTA.DMP',
    destination_directory_object => 'DPUMP_DIR1', destination_file_name => 'DUMP_DTA.DMP',
    destination_database => 'jde92poc');
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object => 'DATA_PUMP_SRM', source_file_name => 'DUMP_DD.DMP',
    destination_directory_object => 'DPUMP_DIR1', destination_file_name => 'DUMP_DD.DMP',
    destination_database => 'jde92poc');
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object => 'DATA_PUMP_SRM', source_file_name => 'DUMP_OL.DMP',
    destination_directory_object => 'DPUMP_DIR1', destination_file_name => 'DUMP_OL.DMP',
    destination_database => 'jde92poc');
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object => 'DATA_PUMP_SRM', source_file_name => 'DUMP_SY.DMP',
    destination_directory_object => 'DPUMP_DIR1', destination_file_name => 'DUMP_SY.DMP',
    destination_database => 'jde92poc');
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object => 'DATA_PUMP_SRM', source_file_name => 'DUMP_DDTA.DMP',
    destination_directory_object => 'DPUMP_DIR1', destination_file_name => 'DUMP_DDTA.DMP',
    destination_database => 'jde92poc');
END;
/

Contributors

Contributors to this document include:

• Marc Teichtahl, AWS Solutions Architect
• Shannon Moir, Lead Engineer at Myriad IT
• Saikat Banerjee, Database Solutions Architect, AWS

Document revisions

Date             Description
March 24, 2021   Document review and addition of various new RDS Oracle capabilities
Dec 2016         First publication
Integrating AWS with Multiprotocol Label Switching December 2016 This paper has been archived For the latest technical content on this subject see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers Archived© 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 Why Integrate with AWS? 1 Introduction to MPLS and Managed MPLS Services 2 Overview of AWS Networking Services and Core Technologies 3 Amazon VPC 3 AWS Direct Connect and VPN 3 Internet Gateway 4 Customer Gateway 5 Virtual Private Gateway and Virtual Routing and Forwarding 5 IP Addressing 5 BGP Protocol Overview 6 Autonomous System 6 AWS APN Partners – Direct Connect as a Service 8 Colocation with AWS Direct Connect 9 Benefits 9 Considerations 10 Architecture Scenarios 10 MPLS Architecture Scenarios 14 Scenario 1: MPLS Connectivity over a Single Circuit 14 Scenario 2: Dual MPLS Connectivity to a Single Region 22 Conclusion 28 Contributors 28 Further Reading 28 Notes 29 ArchivedAbstract This whitepaper outlines highavailability architectural best practices for customers who are considering integration between Amazon Virtual Private Cloud (Amazon VPC) in one or more regions with their existing Multiprotocol Label Switching (MPLS) network The whitepaper provides best practices for connecting single and/or multiregional configurations with your MPLS provider It also describes how customers can incorporate VPN backup for each of their remote offices to maintain connectivity to AWS Regions in the event of a network or MPLS outage The target audience of this whitepaper includes technology decision makers network architects and network engineers ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 1 Introduction Many midsized to largesized enterprises leverage Multiprotocol Label Switching (MPLS) services for their Wide Area Network (WAN) connection As cloud adoption increases companies seek ways to integrate AWS with their existing MPLS infrastructure in a costeffective way without redesigning their WAN architecture Companies want a flexible and scalable solution to bridge current onpremises data center workloads and their cloud infrastructure They also want to provide a seamless transition or extension between the cloud and their onpremises data center Why Integrate with AWS? 
There are a number of compelling business reasons to integrate AWS into your existing MPLS infrastructure: Business continuity One of the benefits of adopting AWS is the ease of building highly available geographically separated workloads By integrating your existing MPLS network you can take advantage of native benefits of the cloud such as global disaster recovery and elastic scalability without losing any of your current architectural implementations standards and best practices User data availability By keeping data closer to your users your company can improve workload performance customer satisfaction as well as meet regional compliance requirements Mergers & acquisitions During mergers and acquisitions your company can realize synergies and improvements in IT services very quickly by moving acquired workloads into the AWS Cloud By integrating AWS into MPLS your company has the ability to: o Minimize or avoid costly and serviceimpacting data center expansion projects that can require either the relocation or purchase of technology assets o Migrate workloads into Amazon Virtual Private Cloud (Amazon VPC) to realize financial synergies very quickly while developing longerterm transformational initiatives to finalize the acquisition ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 2 To accomplish this companies can design their network with AWS to do the following: Enable seamless transition of the acquired remote offices and data centers with AWS by connecting the newly acquired MPLS network to AWS Simplify the migration of workloads from the acquired data center into an isolated Amazon VPC while maintaining connectivity to existing AWS workloads Optimize availability and resiliency Enterprise customers who want to maximize availability and performance by using one or more WAN/MPLS solutions are able to continue with the same level of availability by peering with AWS in multiple faultisolated regions This whitepaper highlight s several options you have as a mid tolarge scale enterprise to cost effectively migrate and launch new services in AWS without overhauling and redesigning your current MPLS/WAN architecture Introduction to MPLS and Managed MPLS Services MPLS is an encapsulation protocol used in many service provider and large scale enterprise networks Instead of relying on IP lookups to discover a viable "nexthop" at every single router within a path (as in traditional IP networking) MPLS predetermines the path and uses a label swapping push pop and swap method to direct the traffic to its destination This gives the operator significantly more flexibility and enables users to experience a greater SLA by reducing latency and jitter For a simple overview of MPLS basics see RFC3031 Many service providers offer a managed MPLS solution that can be provisioned as Layer 3 (IPbased) or Layer 2 (single broadcast domain) to provide a logical extension of a customer’s network When referring to MPLS in this document we are referring to the service providers managed MPLS/WAN solution See the following RFCs for an overview on some of the most common MPLS ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 3 solutions: L3VPN: https://toolsietforg/html/rfc4364 (obsoletes RFC 2547) L2VPN (BGP): https://toolsietforg/html/rfc6624 Pseudowire (LDP): https://toolsietforg/html/rfc4447 Although AWS does not natively integrate with MPLS as a protocol we provide mechanisms and best practices to connect to your currently deployed 
MPLS/WAN via AWS Direct Connect and VPN Overview of AWS Networking Services and Core Technologies We want to provide a brief overview of the key AWS services and core technologies discussed in this whitepaper Although we assume you have some familiarity with these AWS networking concepts we have provided links to more indepth information Amazon VPC Amazon Virtual Private Cloud (Amazon VPC) is a logically isolated virtual network dedicated to your AWS account1 Within Amazon VPC you can launch AWS resources and define your IP addressing scheme This includes your subnet ranges routing table constructs network gateways and security setting Your VPC is a security boundary within the AWS multitenant infrastructure that isolates communication to only the resources that you manage and support AWS Direct Connect and VPN You can connect to your Amazon VPC over the Internet via a VPN connection by using any IPsec/IKEcompliant platform (eg routers or firewalls) You can set up a statically routed VPN connection to your firewall or a dynamically routed VPN connection to an onpremises router To learn more about setting up a VPN connection see the following resources: http://docsawsamazoncom/AmazonVPC/latest/UserGuide/vpn connectionshtml ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 4 https://wwwyoutubecom/watch?v=SMvom9QjkPk Alternatively you can connect to your Amazon VPC by establishing a direct connection using AWS Direct Connect 2 Direct Connect uses dedicated private network connections between your intranet and Amazon VPC Direct Connect currently provides 1G and 10G connections natively and sub1G through Direct Connect Partners At the heart of Direct Connect is your ability to carve out logical virtual connections within the physical direct connect circuit based on the 8021Q VLAN protocol Direct Connect leverage virtual LANs (VLANs) to provide network isolations and enable you to create virtual circuits for different types of communication These logical virtual connections are then associated with virtual interfaces in AWS You can create up to 50 virtual interfaces across your direct connection AWS has a soft limit on the number of virtual interfaces you can create Using Direct Connect you can categorize VLANs that you create as either public virtual interfaces or private virtual interfaces Public virtual interfaces enable you to connect to AWS services that are accessible via public endpoints for example Amazon Simple Storage Service (Amazon S3) Amazon DynamoDB and Amazon CloudFront You can use private virtual interfaces to connect to AWS services that are accessible through private endpoints for example Amazon Elastic Compute Cloud (Amazon EC2) AWS Storage Gateway and your Amazon VPC Each virtual interface needs a VLAN ID interface IP address autonomous system number ( ASN ) and Border Gateway Protocol (BGP) key To learn more about working with Direct Connect virtual interfaces see http://docsawsamazoncom/directconnect/latest/UserGuide/WorkingWithVir tualInterfaceshtml Internet Gateway An Internet gateway (IGW) is a horizontally scaled redundant and highly available VPC component that allows communication between instances in your VPC and the Internet3 To use your IGW you must explicitly specify a route pointing to the IGW in your routing table ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 5 Customer Gateway A customer gateway (CGW) is the anchor on your side of the connection between your network and your Amazon VPC4 
In an MPLS scenario the CGW can be a customer edge (CE) device located at a Direct Connect location or it can be a provider edge (PE) device in an MPLS VPN network For more information on which option best suits your needs see the Colocation section later in this document Virtual Private Gateway and Virtual Routing and Forwarding A virtual private gateway (VGW) is the anchor on the AWS side of the connection between your network and your Amazon VPC This software construct enables you to connect to your Amazon VPCs over an Internet Protocol Security (IPsec) VPN connection or with a direct physical connection You can connect from the CGW to your Amazon VPC using a VGW In addition you can connect from an onpremises router or network to one or more VPCs using a virtual routing and forwarding (VRF) approach5 VRF is a technology that you can use to virtualize a physical routing device to support multiple virtual routing instances These virtual routing instances are isolated and independent AWS recommends that you implement a VRF if you are connecting to multiple VPCs over a direct connection where IP overlapping and duplication may be a concern IP Addressing IP addressing is the bedrock of effective cloud architecture and scalable topologies Properly addressing your Amazon VPC and your internal network enables you to do the following: Define an effective routing policy An effective routing policy enables you to associate adequate governance around what networks your infrastructure can communicate with internally and externally It also enables you to effectively exchange routes between and within domains systems and internal and external entities Have a consistent and predictable routing infrastructure Your network should be predictable and fault tolerant During an outage or a ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 6 network interruption your routing policy ensures that routing changes are resilient and fault tolerant Use resources effectively By controlling the number of routes exchanged across the boundaries you prevent data packets from travelling across the entire network before getting dropped With proper IP addressing only segments with active hosts are propagated while networks without a host do not appear in your routing table This prevents unnecessary data charges when hosts are sending erroneous IP packets to systems that do not exist or that you choose not to communicate with Maintain security By effectively controlling which networks are advertised to and from your VPC you can minimize the impact of targeted denial of service attacks on subnets If these subnets are not defined within your VPC such attacks originating outside of your VPC will not impact your VPC Define a unique network IP address boundary in your VPC Amazon VPC supports IP address allocation by subnets which allows you to segment IP address spaces into defined CIDR ranges between /16 and /28 A benefit of segmentation is that you can sequentially assign hosts into meaningful blocks and segments while conserving your IP address allocations Amazon AWS also supports route summarization which you can use to aggregate your routes to control the number of routes into your VPC from your internal network The largest CIDR supported by Amazon VPC is a /16 You can aggregate your routes up to a /16 when advertising routes to AWS BGP Protocol Overview Autonomous System An autonomous system (AS) is a set of devices or routers sharing a single routing policy that run under a single 
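To make the ASN discussion concrete, the following sketch, which assumes the AWS SDK for Python (boto3), registers a customer gateway with a private ASN for the VPN-based connectivity discussed later in this paper. The ASN and public IP address are placeholder values; use the private ASN you have selected and the public IP address of your on-premises device.

import boto3

ec2 = boto3.client("ec2")

# Placeholder values: a private ASN for your side of the BGP session and the
# public IP address of your on-premises VPN/routing device.
PRIVATE_ASN = 65500
CGW_PUBLIC_IP = "203.0.113.12"

response = ec2.create_customer_gateway(
    BgpAsn=PRIVATE_ASN,
    PublicIp=CGW_PUBLIC_IP,
    Type="ipsec.1",  # the only supported customer gateway type
)
print(response["CustomerGateway"]["CustomerGatewayId"])

The customer gateway ID returned here is what you reference later when you attach a VPN connection to your virtual private gateway.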
technical administration An example is your VPC or data center or a vendor’s MPLS network Each AS has an identification number (ASN) that is assigned by an Internet Registry or a provider If you do not have an assigned ASN from the Internet Registry you can request one from your circuit provider (who may be able to allocate an ASN) or choose to assign a Private ASN from the following range: 65412 to 65535 ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 7 We recommend that you use Border Gateway Protocol (BGP) as the routing protocol of choice when establishing one or more Direct Connect connections with AWS For more information on why you should use BGP see http://docsawsamazoncom/directconnect/latest/UserGuide/Welcomehtml As an example AWS assigns an AS# of 7224 This AS# defines the autonomous system in which your VPC resides To establish a connection with AWS you have to assign an AS# to your CGW After communication is established between the CGW and the VGW they become external BGP peers and are considered BGP neighbors BGP neighbors exchange their predefined routing table (prefixlist) when the connection is first established and exchange incremental updates based on route changes Establishing neighbor relationships between two different ASNs is considered an External Border Gateway Protocol connection (eBGP) Establishing a connection between devices within the same ASN is considered an Internal Border Gateway Protocol connection (iBGP) BGP uses a TCP transport protocol port 179 to exchange routes between BGP neighbors Exchanging Routes between AWS and CGWs BGP uses ASNs to construct a vector graph of the network topology based on the prefixes exchanged between your CGW and VGW The connection between two ASNs forms a path and the collection of all these paths form a route used to reach a specific destination BGP carries a sequence of ASNs which indicate which routes are transversed To establish a BGP connection the CGW and VGW must be connected directly with each other While BGP supports BGP multihopping natively AWS VGW does not support multihopping All BGP neighbor connections have to terminate on the VGW Without a successful neighbor relationship BGP updates are not exchanged AWS does not support iBGP neighbor relationship between CGW and VGW AWSSupported BGP Metrics and Path Selection Algorithm The VGW receives routing information from all CGWs and uses the BGP best path selection algorithm to calculate the set of preferred paths The rules of that algorithm as it applies to VPC are: ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 8 1 The most specific IP prefix is preferred (for example 10000/24 is preferable to 10000/16) For more information see Route Priority in the Amazon VPC User Guide 6 2 When the prefixes are the same statically configured VPN connections (if they exist) are preferred 3 For matching prefixes where each VPN connection uses BGP the algorithm compares the AS PATH prefixes and the prefix with the shortest AS PATH is preferred Alternatively you can prepend AS_PATH so that the path is less preferred 4 When the AS PATHs are the same length the algorithm compares the path origin s Prefixes with an Interior Gateway Protocol (IGP) origin are preferred to Exterior Gateway Protocol (EGP) origins and EGP origins are preferred to unknown origins 5 When the origins are the same the algorithm compares the router IDs of the advertising routes The lowest router ID is preferred 6 When the router IDs are 
the same the algorithm compares the BGP peer IP addresses The lowest peer IP address is preferred Finally AWS limits the number of routes per BGP session to 100 routes AWS will send a reset and tear down the BGP connection if the number of routes exceeds 100 routes per session AWS APN Partners – Direct Connect as a Service Direct Connect partners in the AWS Partner Network (APN) can help you establish sub1G highspeed connectivity as a service between your network and a Direct Connect location To learn more about how APN partners can help you extend your MPLS infrastructure to a Direct Connect location as a service see https://awsamazoncom/directconnect/partners/ ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 9 Colocation with AWS Direct Connect Colocation with Direct Connect means placing the CGW in the same physical facility as Direct Connect location (https://awsamazoncom/directconnect/partners/) to facilitate a local cross connect between the CGW and AWS devices Establishing network connectivity between your MPLS infrastructure and an AWS colocation center offers you an additional level of flexibility and control at the AWS interconnect If you are interested in establishing a Direct Connect connection in the Direct Connect facility you will need to order a circuit between your MPLS Provider and the Direct Connect colocation facility and connect the circuit to your device A second circuit will then need to be ordered through the AWS Direct Connect console from the CE/CGW to AWS Benefits AWS Direct Connect offers the following benefits: Traffic separation and isolation You can satisfy compliance requirements that call for data segregation You also have the ability to define a public and private VRF across the same Direct Connect connection and monitor specific data flows for security and billing requirements Traffic engineering granularity You have greater ability to define and control how data moves in to and out of your AWS environment You can define complex BGP routing rules filter traffic paths move data in to and out of one VPC to another VPC You also have the ability to define which data flows through which VRF This is particularly important if you need to satisfy specific compliance for data intransit Security and monitoring functionality If you choose to monitor onpremises communication you can span ports or install tools that monitor traffic across a particular VRF You can place firewalls in line to meet internal security requirements You can also control communication by enforcing certain IP addresses to communicate across specific VLANs Simplified integration of IT and data platforms in mergers and acquisitions In a merger and acquisition (M&A) scenario where both companies have the same MPLS provider you can ask the MPLS ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 10 provider to attach a network tonetwork interface ( NNI ) between the two companies This will enable both companies to have a direct path to Amazon VPCs Your colocation router can serve as a transit to allow for the exchange of routes between the two companies If the companies do not share the same MPLS provider the acquiring company can order an additional circuit from their CGW to the acquired compan y’s MPLS to the colocation router and carve out a VRF for that connection Considerations There are a few business and technology design requirements to consider if you are interested in setting up your router in a colocation facility: 
Design Requirements: The technical requirements for certain large enterprise customer can be complex A colocation infrastructure can simplify the integration with complex network designs especially if there is a need to manipulate routes or a need to extend a private MPLS network to the CGW PE/CE Management: Some MPLS providers offer managed Customer Equipment support bundled with their MPLS service offering Taking advantage of this service may reduce operational burden while taking advantage of the discounted bundled pricing that comes with the service Architecture Scenarios Colocation Architecture At a very high level a customer’s colocated CGW sits between the AWS VGW and the MPLS PE The CGW connects to AWS VGW over a cross connection and connects to the customers MPLS provider equipment over a last mile circuit (cross connect that may or may not reside in the same colocation facility) It is possible that the MPLS provider edge (PE) resides in the same direct connect facility In that situation two LOA’s will exist The first between your CGW and AWS and the second between your CGW and your MPLS provider The first LOA can be requested via AWS console and either you or the MPLS provider can request the second LOA via the direct connect facility ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 11 Figure 1 shows a physical colocation topology for single data center connectivity to AWS Figure 1: Single data center connection over MPLS with customermanaged CGW in a colo cation scenario Note: If the MPLS provider is also in the same facility as the direct connect facility then the last mile connection shown in the diagram above will be a cross connection Figure 2 outlines the logical colocation topology for single data center connecti on to AWS In this scenario you establish an eBGP connection between the customer ’s colocat ed router/device and AWS We recommend that the customer also establish an eBGP connectivity from their CGW to the customer ’s MPLS PE Figure 2: Highlevel eBGP topology in a colocation scenario ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 12 Note: If the MPLS provider is also in the same facility as the direct connect facility then the last mile connection shown in the diagram ab ove will be a cross connection NonColocation Topology At a high level there are two possible scenarios for a noncolocation architecture The first architectural consideration is a scenario where the customers MPLS or circuit provider has facility access to AWS Direct Connect facility You create an LOA request from AWS console and work with your MPLS provider to request the facility cross connection The secondary architectural consideration is a scenario where are customers MPLS provider does not have facility access and needs to work with one of our Direct Connect partners to extend a circuit from the MPLS PE to the AWS environment For a list of AWS partners please use this link: https://awsamazoncom/directconnect/partners/ The following noncolocation topology diagram shows how the MPLS providers PE is used as the CGW The customer can request their vendor to create the required 8021Q VLAN s on the vendors PE routers Note Some vendor s may c onsider this request a custom configuration so it is worth checking with the provider if this type of setup is supportable ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 13 Figure 3: Single dat a center connection over MPLS with vendor PE as CGW in 
Note: If the MPLS provider is also in the same facility as the Direct Connect facility, then the last mile connection shown in the diagram above will be a cross connection.

Similar to the previous colocation BGP design, the customer has to establish eBGP connections. However, this time, instead of peering with a colocated device, the customer can peer directly with the MPLS provider's PE. Figure 4 shows an example of the logical eBGP non-colocation topology.

Figure 4: High-level eBGP connection in a non-colocation scenario

MPLS Architecture Scenarios

The following three scenarios illustrate how you can integrate AWS into an MPLS architecture.

Scenario 1: MPLS Connectivity over a Single Circuit

Architecture Topology

The diagram below shows a high-level architecture of how existing or new MPLS locations can be connected to AWS. In this architecture, customers can achieve any-to-any connectivity between their geographically dispersed office or data center locations and their VPC.

Figure 5: Single MPLS connectivity into Amazon VPC

Physical Topology

The customer decides how much bandwidth is required to connect to their AWS Cloud. Based on your last mile connectivity requirements, one end of this circuit extends through the MPLS provider's point of presence (POP) to the provider equipment (PE) device. The other end of the circuit terminates in a meet-me room or telecom cage located in one of the Direct Connect facilities. The Direct Connect facility will set up a cross connection that extends the circuit to AWS devices.

Figure 6: High-level physical topology between AWS and MPLS PE

The following are the prerequisites to establish an MPLS connection to AWS:

1. Create an AWS account if you don't already have one.
2. Create an Amazon VPC. To learn how to set up your VPC, see http://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/getting-started-create-vpc.html
3. Request an AWS Direct Connect connection by selecting the Region and your partner of choice: http://docs.aws.amazon.com/directconnect/latest/UserGuide/Colocation.html
4. Once completed, AWS will email you a Letter of Authorization (LOA), which describes the circuit information at the Direct Connect facility.
5. If the MPLS provider has facility access to the AWS Direct Connect facility, they can establish the required cross connection based on the LOA. If the MPLS provider is not already in the Direct Connect facility, a new connection must be built into the facility, or the MPLS provider can utilize a Direct Connect partner (tier 2 extension) to gain facility access.

Once the physical circuit is up, the next step is to establish IP data communication and routing between AWS, the PE device, and the customer's network. Create a virtual interface to begin using your Direct Connect connection. A virtual interface is an 802.1Q Layer 2 VLAN that helps segment and direct the appropriate traffic over the Direct Connect interface. You can create a public virtual interface to connect to public resources or a private virtual interface to connect to resources in your VPC. To learn more about working with virtual interfaces, see http://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html

Work with your MPLS provider to create the corresponding 802.1Q Layer 2 VLAN on the PE.
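For illustration only, the following AWS CLI sketch shows one way the AWS side of this step could be scripted: it creates a private virtual interface (the 802.1Q VLAN and BGP peering described above) on an existing Direct Connect connection and then lists the interface details to share with the MPLS provider. The connection ID, VLAN tag, ASN, BGP auth key, peer addresses, and virtual gateway ID are placeholder assumptions, not values from this paper.

#!/usr/bin/env bash
# Sketch only: provision the AWS side of the 802.1Q VLAN described above by
# creating a private virtual interface on an existing Direct Connect connection.
# dxcon-EXAMPLE11, VLAN 101, ASN 65001, the auth key, the /31 peer addresses,
# and vgw-EXAMPLE1 are illustrative placeholders; substitute your own values
# (use a /30 if your equipment does not support /31 peer addressing).
aws directconnect create-private-virtual-interface \
  --connection-id "dxcon-EXAMPLE11" \
  --new-private-virtual-interface "virtualInterfaceName=mpls-vif-vpc1,vlan=101,asn=65001,authKey=ExampleBgpKey123,amazonAddress=169.254.255.1/31,customerAddress=169.254.255.0/31,virtualGatewayId=vgw-EXAMPLE1"

# Record the VLAN ID and BGP parameters that the MPLS provider (or your
# colocated CGW) must mirror on its side of the connection.
aws directconnect describe-virtual-interfaces --connection-id "dxcon-EXAMPLE11"

In a design like the one described here, the same call would be repeated with a different VLAN ID and virtual gateway for each additional VPC presented over the connection.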
Once the layer 2 VLAN link is up the next step is to assign IP Addresses and establish BGP connectivity You can download the IP/BGP configuration information from your AWS Management Console which can act as a guide for setting up your IP/BGP connection To learn more about downloading the router configuration see http://docsawsamazoncom/directconnect/latest/UserGuide/getstartedhtml# routerconfig When the BGP communication is established from each location and routes are exchanged all locations connected to the MPLS network should be able to communicate with the attached VPC on AWS Make sure to verify any routing policy that may be implemented within the MPLS provider and Customer Network that may be undesirable Figure 7: Logical 8021q VLANs diagram ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 17 In the setup in Figure 7 you can create VLANs that connect your MPLS PE device to AWS VPC Each VLAN (represented by different colors) is tagged with a VLAN ID that identifies the logical circuit and isolates traffic from one VLAN to another Design Decisions and Criteria There are a few design considerations you should be aware of: Contact your MPLS provider to confirm support to create an 8021Q VLAN’s on their MPLS PE and if they have a VLAN ID preference (if they have multiple circuits utilizing the same physical Direct Connect interface they may require control of the VLAN ID) Validate the number of VPCs you will need to support your business and if VPC Peering will support your InterVPC communication For more information about VPC Peering see: http://docsawsamazoncom/AmazonVPC/latest/PeeringGuide/peering scenarioshtml If multiple circuits are using the same physical Direct Connect interface verify that the interface is configured for the appropriate bandwidth Validate if your business requirements or existing technology constraints such as IP overlap dictate the need to design complex VRF architectures NAT or complex interVPC routing Validate if your BGP routing policy requires complex BGP prefix configurations such as community strings ASPath Filtering etc You may have to consider a colocation design if: Your MPLS provider is unable to provide 8021Q VLAN configurations You have a requirement to implement additional complex routing functionalities that will require route path manipulation or stripping off AS# or integrating BGP communities with routes you are learning from AWS before injecting them into your routing domain See the following section for colocation scenarios ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 18 Exchanging Routes AWS supports only BGP v4 as the routing protocol of choice between your AWS VGW and CGW BGP v4 allows you to exchange routes dynamically between the AWS VGW and the customer CGW or MPLS provider edge (PE) There are a few design considerations when setting up your BGP v4 routing with AWS We will consider two basic topology scenarios Scenario 11 : MPLS PE as CGW – MPLS provider supports VLANs In this scenario the customer has plans to use the MPLS PE as their CGW The MPLS provider will be responsible for the following configuration changes on the PE: Set up 8021q VLANs required to support the number of VPCs or VLANs that the customer n eeds across the DX Connection Each VLAN will be assigned a /31 IP address (larger prefixes are supported if equipment does not support /31) Enable a BGP session between AWS and the MPLS provider’s PE across each VLAN Both the customer and the MPLS 
provider will have to agree on the BGP AS# to assign to the PE The peering relationship in this scenario will look similar to this: AWS ASN (7224) eBGP MPLS PE ASN eBGP Customer ASN Figure 8 shows a simple topology outlining the peering relationship ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 19 Figure 8: BGP peering relationship Note The customer will have to work with the MPLS provider to limit the number of routes advertised to AWS to 100 routes per BGP peer session AWS will tear down the BGP sessions if more than 100 routes are received from the MPLS provider Scenario 12: CE is located in an AWS colocation facility In this scenario the customer plans to deploy a customer managed CGW in the Direct Connect colocation facility for the following reasons: 1 The MPLS provider cannot support multiple VLANs directly on their PE 2 The customer requires control of configuration changes and does not want to be restricted to the MPLS provider’s maintenance windows or other constraints The customer has to maintain strict technology configuration standards of all devices in their domain 3 The customer seeks to achieve the following additional technical objectives: a Ability to remove AWS BGP Community Strings or add BGP community strings before injecting routes into the customers MPLS network ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 20 b Ability to strip BGP AS number and/or inject routes into an IGP to support interVPC routing c A merger and acquisition scenario where the customer will terminate multiple MPLS circuits into their device to facilitate data migration into AWS d The customer plans to integrate each VLAN into its own VRF for compliance reasons or to support a complex routing functionality e The customer requires security demarcation such as a firewall between AWS and the customers MPLS network to meet internal security policies f The customer wants to extend their Private Layer 2 MPLS network to their CGW Colocation Physical Topology The end toend connection between AWS and the MPLS PE can be broken down into the following components as shown in Figure 9 Figure 9: End toend physical and logical connection VPC to Virtual Private Gateway VGW o This logical construct extends your VPC to the VGW For more information about VGW see http://docsawsamazoncom/AmazonVPC/latest/UserGuide/VPC _VPNhtml VGW to colocated CGW ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 21 o The connection between the VGW to the colocated CGW is a physical cross connect that connects AWS equipment to the customers colocated CGW The logical connection from your VPC is extended over a Layer 2 VLAN across the cross connect to a port on the CGW CGW to MPLS PE: o This is the connection between the colocate d CGW and the MPLS PE The customer can order this circuit from their provider of choice After the physical topology is confirmed and tested the next step is to establish BGP connectivity between the following: AWS and the customer’s CGW The CGW and the MPLS PE As a best practice AWS recommends the use of VRFs to achieve high agility security and scalability VRFs provide an additional level of isolation across the routing domain to simplify troubleshooting See the article Connecting A Single customers router to Multiple VPC to learn more about how to deploy VRFs Similar to the BGP topology in scenario 11 the customer must assign an ASN # for each VRF Each eBGP peering relationship in this scenario 
will look like the following: VPC eBGP CGW eBGP MPLS PE eBGP Customer AS# Figure 10 shows a simple topology outlining the peering relationship ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 22 Figure 10: BGP connection over 8021Q VLAN This topology offers the customer the highest level of control and flexibility at the cost of supporting colocated devices AWS recommends a best practice of building a highavailability colocation architecture that supports dual routers dual last mile circuits and dual direct connections In the previous scenario each virtual network interface (VIF) is associated with a single VLAN which in turn is associated with a unique eBGP peering session The colocation router acts as your CGW and exchanges routing updates across each VIF Scenario 2: Dual MPLS Connectivity to a Single Region Architecture Topology This architecture builds upon Scenario 1 and incorporates a highly available redundant connection to AWS The difference between Scenario 1 and Scenario 2 is the additional MPLS circuit in Scenario 2 ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 23 Figure 11: Dual MPLS connection to a single AWS Region This whitepaper will consider two dual connectivity architectures in the way we considered single connectivity architecture The first architectural scenario will focus on the customer leveraging their MPLS Provider PE as their CGW and the second architectural scenario will focus on a colocati on strategy Architectural Scenario 21: MPLS PE as CGW In this scenario the customer plans to have dual connectivity from their MPLS network to AWS in the same region AWS APN partners offer geographically dispersed POP s if you want to have dual last mile connectivity to AWS For example if you are planning to connect to the USEast Region you can connect to a New York Point of Presence (POP) and to a Virginia Point of Presence (POP) as well POP diversity offers the highest level of redundancy resilience and availability from the POP and circuit diversity perspective You can be protected within a region from an MPLS circuit outage and MPLS POP outages Figure 12 depicts dual connectivity from geographically dispersed MPLS POP s to AWS ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 24 Figure 12: Dual physical connection to multiple MPLS POPs Highly Available topology considerations In this scenario you can desig n an active/active or active/passive BGP routing topology Active/Passive An active/passive routing design calls for a routing policy that uses one path as primary and leverages a second path in the event that the primary circuit is down Active/Active An active/active routing design calls for a routing policy that load balances data across both MPLS last mile circuits as they send or receive data from AWS You can influence outbound traffic from AWS by advertising the routes using equal ASPath lengths Likewise AWS advertises routes from AWS equally across both circuits to your MPLS network You can also design your network to support perdestination routing where you send half your routes over one link and the other half over the second link Each link will serve as a redundant path for nonprimary destinations With this approach both circuits are used actively and only if any one of the links fail all traffic flow through the other link In either case the ASPath between the MPLS provider and AWS may resemble something like this: ArchivedAmazon Web Services – 
Integrating AWS with Multiprotocol Label Switching Page 25 AWS ASN eBGP CGW ASN eBGP MPLS AS N Path 1 AWS ASN eBGP CGW ASN eBGP MPLS AS N Path 2 Figure 13 depicts a possible BGP topology design Figure 13: In region dual connectivity BGP topology An eBGP neighbor relationship is established between AWS and the two CGWs otherwise known as the provider PEs Similar to Scenario 1 you work with your MPLS provider to support 8021Q VLANs on your PE The routing topology can be more granular and can offer additional levels of traffic differentiation based on the design you select You can choose to direct all traffic that f its a specific profile across one physical link while using the secondary link as a failover path Each VPC can be presented with two logical direct connections (a single VGW per VPC) This allows you to load balance traffic from each VPC across each circuit by creating the required VLANs VIFs and establishing two BGP neighbor relationships across each VLAN ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 26 Figure 14: BGP routing topology scenario Connectivity from Two AWS Locations to a single MPLS POP There are a few situations where it can be better to have both customer devices (CGWs) in the same POP: MPLS providers may not have POPs close to each AWS POP location You may have a requirement for active/active circuit topology and your application is extremely sensitive to latency differenc es between the circuits originating from different POPs Due to MPLS POP diversity limitations one of the circuits may require a longhaul connectivity causing packets to arrive at different times which can impact the ability to load balance Redundant facilities and long haul termination may be cost prohibitive If you are faced with these issues you can still achieve regional diversity by connecting both DX locations to a single MPLS POP Design Decisions and Criteria The difference between an architecture with MPLS POP diversity and one without is geographical diversity However you must still exercise due diligence when setting up both circuits ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 27 1 Ensure you have end toend circuit diversity from your circuit provider Ensure circuits sharing the same conduit and/or fiber path leaving the facility and throughout the path to the final destination 2 Ensure the circuit does not terminate on the same switch or router to mitigate hardware failure 3 Ensure each device leverages different power source s and Layer 1 infrastructure These design principles are the same regardless of geographical diversity Architectural Scenario 2 2: CGW Colocated in AWS Facility The rationale to colocate are the same as those outlined in Scenario 1 If you decide that colocation is a good approach then you can design a highly available fully redundant architecture to a single region In this scenario the customer can colocate their equipment in AWS facility by either working with an AWS partner who has local facility access or by the customer setting up local facility access in one of our AWS Direct Connect facilities To achieve the higher level of redundancy resilience and scalability the customer can incorporate the following best practice designs: Dual connection between both CGW s A dual connection between the routers will allow you to accomplish the following: o Create a highly available path to each routing device o Extend each VLAN to each routing device in a highly available manner Dual 
connection from each CGW to two MPLS PEs. This will provide a high level of resilience and redundancy between your CGW and PE. Traffic can be load balanced, and failover capability is provided in the event of circuit or equipment failure.

Figure 15: Dual circuit to a single MPLS POP BGP topology

Conclusion

AWS offers customers the ability to connect different WAN technologies in a highly reliable, redundant, and scalable way. The goal of AWS is to ensure that customers are not limited by constraints when accessing their resources on AWS.

Contributors

The following individuals and organizations contributed to this document:

Authors
o Jacob Alao, Solutions Architect
o Justin Davies, Solutions Architect

Reviewer
o Aarthi Raju, Partner Solutions Architect

Further Reading

For additional information about Layer 3 MPLS technology, see the following:

http://www.networkworld.com/article/2297171/network-security/mpls-explained.html
http://www.juniper.net/documentation/en_US/junos12.3/topics/concept/mpls-ex-series-vpn-layer2-layer3.html

For additional information about Layer 2 MPLS technology, see the following:

http://www.juniper.net/documentation/en_US/junos12.3/topics/concept/mpls-ex-series-vpn-layer2-layer3.html

Notes

1. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html
2. http://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html
3. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html
4. http://docs.aws.amazon.com/AmazonVPC/latest/NetworkAdminGuide/Introduction.html
5. https://aws.amazon.com/articles/5458758371599914
6. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html#routetablespriority
|
General
|
consultant
|
Best Practices
|
Introduction_to_Auditing_the_Use_of_AWS
|
Archived Introduction to Auditing the Use of AWS Octob er 2015 THIS PAPER HAS BEEN ARCHIVED For the latest information see the Cloud Audit Academy eLearning: https://wwwawstraining/Details/eLearning?id=41556ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 2 of 28 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 3 of 28 Contents Abstract 4 Introduction 5 Approaches for using AWS Audit Guides 6 Examiners 6 AWS Provided Evidence 6 Auditing Use of AWS Concepts 8 Identifying assets in AWS 9 AWS Account Identifiers 9 1 Governance 10 2 Network Configuration and Management 14 3 Asset Configuration and Management 15 4 Logical Access Control 17 5 Data Encryption 19 6 Security Logging and Monitoring 20 7 Security Incident Response 21 8 Disaster Recovery 22 9 Inherited Controls 23 Appendix A: References and Further Reading 25 Appendix B: Glossary of Terms 26 Appendix C: API Calls 27 ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 4 of 28 Abstract Security at AWS is job zero All AWS customers benefit from a data center and network architecture built to satisfy the needs of the most securitysensitive organizations In order to satisfy these needs AWS compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud As systems are built on top of AWS cloud infrastructure compliance responsibilities will be shared By tying together governancefocused audit friendly service features with applicable compliance or audit standards AWS Compliance enablers build on traditional programs helping customers to establish and operate in an AWS security control environment AWS manages the underlying infrastructure and you manage the security of anything you deploy in AWS AWS as a modern platform allows you to formalize the design of security as well as audit controls through reliable automated and verifiable technical and operational processes built into every AWS customer account The cloud simplifies system use for administrators and those running IT and makes your AWS environment much simpler to audit sample testing as AWS can shift audits towards a 100% verification verses traditional sample testing Additionally AWS ’ purposebuilt tools can be tailored to customer requirements and scaling and audit objectives in addition to supporting realtime verification and reporting through the use of internal tools such as AWS CloudTrail Config and CloudWatch These tools are built to help you maximize the protection of your services data and applications This means AWS customers can spend less time 
on routine security and audit tasks and are able to focus more on proactive measures which can continue to enhance security and audit capabilities of the AWS customer environment ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 5 of 28 Introduction As more and more customers deploy workloads into the cloud auditors increasingly need not only to understand how the cloud works but additionally how to leverage the power of cloud computing to their advantage when conducting audits The AWS cloud enables auditors to shift from percentagebased sample testing toward a comprehensive realtime audit view which enables 100% auditability of the customer environment as well as realtime risk management The AWS management console along with the Command Line Interface (CLI) can produce powerful results for auditors across multiple regulatory standards and industry authorities This is due to AWS supporting a multitude of security configurations to establish security compliance by design and realtime audit capabilities through the use of: Automation Scriptable infrastructure (eg Infrastructure as Code) allows you to create repeatable reliable and secure deployment systems by leveraging programmable (APIdriven) deployments of services Scriptable Architectures – “Golden” environments and Amazon Machine Images (AMIs) can be deployed for reliable and auditable services and they can be constrained to ensure realtime risk management Distribution Capabilities provided b y AWS CloudFormation give systems administrators an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion Verifiable Using AWS CloudTrail Amazon CloudWatch AWS OpsWorks and AWS CloudHSM enables evidence gathering capability ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 6 of 28 Approaches for using AWS Audit Guides Examiners When assessing organizations that use AWS services it is critical to understand the “ Shar ed Responsibility” model between AWS and the customer The audit guide organizes the requirements into common security program controls and control areas Each control references the applicable audit requirements In general AWS services should be treated similar ly to onpremise infrastructure services that have been traditionally used by customer s for operating services and applications Policies and processes that apply to devices and servers should also apply when those functions are supplied by AWS Controls pertaining solely to policy or pr ocedure are generally entirely the responsibility of the customer Similarly AWS management either via the AWS Console or Command Line API should be treated like other privileged administrator access See the appendix and referenced points for more information AWS Provided Evidence Amazon Web Services Cloud Compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud As systems are built on top of AWS cloud infrastructure compliance responsibilities will be shared Each certification means that an auditor has verified that specific security controls are in place and operating as intended You can view the applicable compliance reports by contacting your AWS account representative For more information about the security regulations and standards with which AWS complies visit the AWS Compliance webpage To help you meet specific government industry and company security standards and regulations AWS 
provides certification reports that describe how the AWS Cloud infrastructure meets the requirements of an extensive list of global security standards including: ISO 27001 SOC the PCI Data Security Standard FedRAMP the Australian Signals Directorate (ASD) Information Security Manual and the Singapore MultiTier Cloud Security Standard (MTCS SS 584) ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 7 of 28 For more information about the security regulations and standards with which AWS complies see the AWS Compliance webpage ArchivedAuditing Use of AWS Concepts The following concepts should be considered during a security audit of an organization’s systems and da ta on AWS: Security measures that the cloud service provider (AWS) implements and operates – "security of the cloud" Security measures that the customer implements and operates related to the security of customer content and applications that make use of AWS services – "security in the cloud" While AWS manages security of the cloud security in the cloud is the responsibility of the customer Customers retain control of what security they choose to implement to protect their own content platform applications systems and networks no differently than they would for applications in an on site datacenter Additional detail can be found at the AWS Security Center at AWS Compliance and in the publically available AWS whitepapers found at: AWS Whitepapers ArchivedIdentifying assets in AWS A customer’s AWS assets can be instances data stores applications and the data itself Auditing the use of AWS generally starts with asset identification Assets on a public cloud infrastructure are not categorically different than in house environments and in some situations can be less complex to inventory because AWS provides visibility into the assets under management AWS Account Identifiers AWS assigns two unique IDs to each AWS account: an AWS account ID and a canonical user ID The AWS account ID is a 12digit number such as 123456789012 that you use to construct Amazon Resource Names (ARNs) When you refer to resources like an IAM user or an Amazon Glacier vault the account ID distinguishes your resources from resources in other AWS accounts Amazon Resource Names (ARNs) and AWS Service Namespaces Amazon Resource Names (ARNs) uniquely identify AWS resources We require an ARN when you need to specify a resource unambiguously across all of AWS such as in IAM policies Amazon Relational Database Service (Amazon RDS) tags and API calls ARN Format example: In addition to Account Identifiers Amazon Resource Names (ARNs) and AWS Service Namespaces each AWS service creates a unique service identifier (eg Amazon Elastic Compute Cloud (Amazon EC2) instance ID: i3d68c5cb or Amazon Elastic Block Store (Amazon EBS) Volume ID volecd8c122) which can be used to create an environmental asset inventory and used within work papers for scope of audit and inventory Each certification means that an auditor has verified that specific security controls are in place and operating as intended Archived Amazon Web Services – OCIE Cybersecurity Audit Guide September 2015 Page 10 of 28 1 Governance Definition: Governance provides assurance that customer direction and intent are reflected in the se curity posture of the customer This is achieved by utilizing a structured approach to implementing an information security program For the purposes of this audit plan it means understanding which AWS services have been purchased what kin ds of systems and 
information you plan to use with the AWS service and what policies procedures and plans apply to these services Major audit focus: Understand what AWS services and resources are being used and ensur e your security or risk management program has taken into account the use of the public cloud environment Audit approach: As part of this audit determine who within your organization is an AWS account and resource owner as well as the AWS services and resources they are using Verify policies plans and procedures include cloud concepts and that cloud is included in the scope of the customer ’s audit program Governance Checklist Checklist Item Understand use of AWS within your organization Approaches might include: Polling or interviewing your IT and development teams Performing network scans or a more indepth penetration test Review expense reports and/or Purchase Orders (PO’s) payments related to Amazoncom or AWS to understand what services are being used Credit card charges appear as “AMAZON WEB SERVICES AWSAMAZONCO WA” or similar Note: Some individuals within your organization may have signed up for an AWS account under their personal accounts as such consider asking if this is the case when polling or interviewing your IT and development teams Identify assets Each AWS account has a contact ema il address associated with it and can be used to identify account owners It is important to understand that this email address may be from a public email service provider depending on what the user specified when registering A formal meeting can be conducted with each AWS account or asset owner to ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 11 of 28 Checklist Item understand what is being deployed on AWS how it is managed and how it has been integrated with your organization’s security policies procedures and standards Note : The AWS Accou nt owner may be someone in the finance or procurement department but the individual who implements the organization’s use of the AWS resources may reside in the IT department You may need to interview both Define your AWS boundaries for review The review should have a defined scope Understand your organization’s core business processes and their alignment with IT in its noncloud form as well as current or future cloud imple mentations Obtain a description of the AWS services being used and/or being considered for use After identifying the types of AWS services in use or under consideration determine the services and business solutions to be included in the review Obtain and review any previous audit reports with remediation plans Identify open issues in previous audit reports and assess updates to the documents with respect to these issues Assess policies Assess and review your organization’s securit y privacy and data classification policies to determine which policies apply to the AWS service environment Verify if a formal policy and/or process exists around the acquisition of AWS services to determine how purchase of AWS services is authorized Verify if your organization’s change management processes and policies include consideration of AWS services Identify risks Determine whether a risk assessment for the applicable assets has been performed Review risks Obtain a copy of any risk assessment reports and determine if they reflect the current environment and accurately describe the residual risk environment Review risks documentation After each element of your review review risk treatment plans and timelines/ milestones 
against your risk management policies and ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 12 of 28 Checklist Item procedures Documentation and Inventory Verify your AWS network is fully docume nted and all AWS critical systems are included in their inventory documentation with limited access to this documentation Review AWS Config for AWS resource inventory and configuration history of resources (Exampl e API Call 1) Ensure that resources are appropriately tagged and associated with application data Review application architecture to identify data flows planned connectivity between application components and r esources that contain data Review all connectivity between your network and the AWS Platform by reviewing the following: VPN connections where the customers on premise Public IPs are mapped to customer gateways in any VPCs owned by the Customer (Example API Call 2 & 3) Direct Connect Private Connections which may be mapped to 1 or more VPCs owned by the customer (Example API Call 4 ) Evaluate risks Evaluate the significance of the AWS deployed data to the organization’s overall risk profile and risk tolerance Ensure that these AWS assets are integrated into the organization’s formal risk assessmen t program AWS assets should be identified and have protection objectives associated with them depending on their risk profiles Incorporate use of AWS into risk assessment Conduct and/or incorporate AWS service elements into your organizational risk assessment processes Key risks could include: Identify the business risk associated with your use of AWS and identify business owners and key stakeholders Verify that the business risks are aligned rated or classified within your use of AWS services and your organizational security criteria for protecting confidentiality ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 13 of 28 Checklist Item integrity and availability Review previous audits related to AWS services (SOC PCI NIST 800 53 related audits etc) Determine if the risks identified previously have been appropriately addressed Evaluate the overall risk factor for performing your AWS review Based on the risk assessment identify changes to your audit scope Discuss the risks with IT management and adjust the risk assessment IT Security Program and Policy Verify that the customer includes AWS services in its security policies and procedures including AWS account level best practices as highlighted within the AWS s ervice Trusted Advisor which provides best practice and guidance across 4 topics – Security Cost Performance and Fault Tolerance Review your information security policies and ensure that it includes AWS services Confirm you have has assigned an employee(s) as authority for the use and security of AWS services and there are defined roles for those noted key roles including a Chief Information Security Officer Note : any published cybersecurity risk management proces s standards you have used to model information security architecture and processes Ensure you maintain documentation to support the audits conducted for AWS services including its review o f AWS third party certifications Verify internal training records include AWS security such as Amazon IAM usage Amazon EC2 Security Groups and remote access to Amazon EC2 instances Confirm a cybersec urity response policy and training for AWS services is maintained Note : any insurance specifically related to the customers use of AWS services and any claims 
related to losses and expenses attributed to cybersecurity events as a result Service Provider Oversight Verify the contract with AWS includes a requirement to implement and maintain privacy and security safeguards for cybersecurity requirements ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 14 of 28 2 Network Configuration and Management Definition: Network management in AWS is very similar to network management onpremises except that network components such as firewalls and routers are virtual Customers must ensure network architecture follows the security requirements of their organization including the use of DMZs to separate public and private (untrusted and trusted) resources the segregation of resources using subnets and routing tables the secure configuration of DNS whether additional transmission protection is needed in the form of a VPN and whether to limit inbound and outbound traffic Customers who must perform monitoring of their network can do so using hostbased intrusion detection and monitoring systems Major audit focus: Missing or inappropriately configured security controls related to external access/network security that could result in a security exposure Audit approach: Understand the network architecture of the customer’s AWS resources and how the resources are configured to allow external access from the public Internet and the customer ’s private networks Note : AWS Trusted Advisor can be leveraged to validate and verify AWS configurations settings Network Configuration and Management Checklist Checklist Item Network Controls Identify how network seg mentation is applied within the AWS environmen t Review AWS Security Group implementation AWS Direct Connect and Amazon VPN configuration for proper implementation of network segmentation and ACL and firewall setting or AWS services (Example API Call 5 8) Verify you have a procedure for granting remote Internet or VPN access to employees for AWS Console access and remote access to Amazon EC2 networks and systems Review the following to maintain an enviro nment for testing and development of software and applications that is separate from it s business environment: VPC isolation is in place between business environment and environments used for test and development By reviewing VPC peering connectivity betw een VPCs to ensure network ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 15 of 28 Checklist Item isolation is in place between VPCs Subnet isolation is in place between business environment and environments used for test and development By reviewing NACLs associated to Subnets in which Business and Test/Development environm ents are located to ensure network isolation is in place Amazon EC2 instance isolation is in place between business environment and environments used for test and development By reviewing Security Groups associated to 1 or more Instances which are associated with Business Test or Development environments to ensure network isolation is in place between Amazon EC2 instances Review DDoS layered defense solution running which operates directly on AWS reviewing components which are l everaged as part of a DDoS solution such as: Amazon CloudF ront configuration Amazon S3 configuration Amazon Route 53 ELB configuration Note: The above services do not use Customer owned Public IP addresses and offer DoS AWS inherited DoS mitigation feature s Usage of Amazon EC2 for Proxy or WAF Further guidance can be found within the “ AWS 
Best Practices for DDoS Resiliency Whitepaper” Malicious Code Contr ols Assess the implementation and management of anti malware for Amazon EC2 instances in a similar manner as with physical systems 3 Asset Configuration and Management Definition: AWS customers are responsible for maintaining the security of anything installed on AWS resources or connect to AWS resources Secure management of the customer ’s AWS resources means knowing what resources you are using (asset inventory) securely configuring the guest OS and applications on your resources (secure configuration settings patching and anti malware) and controlling changes to the resources (change management) ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 16 of 28 Major audit focus: Manage your operating system and application security vulnerabilities to protect the security stability and integrity of the asset Audit approach: Validate the OS and applications are designed configured patched and hardened in accordance with your policies procedures and standards All OS and application management practices can be common between on premise and AWS systems and services Asset Configuration and Management Checklist Checklist Item Assess configuration management Verify the use of your configuration management practices for all AWS system components and validate that these standards meet baseline configurations • Review t he procedure for conduct ing a specialized wipe proc edure prior to deleting the volume for compliance with established requirements • Review your Identity Access Management system (which may be used to allow authenticated access to the applications hosted on top of AWS servic es) • Confirm penetration testing has been completed Change Management Controls Ensure use of AWS services follows the same change cont rol pro cesses as internal series Verify AWS services are included within an internal patch management process Review d ocumented process for c onfiguration and patching of Amazon EC2 instances: Amazon Machine Images (AMIs) (Example API Call 9 10) Operating systems Applications Review API calls for in scope services for delete calls to ensure IT assets have been properly disposed of ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 17 of 28 4 Logical Access Control Definition: Logical access controls determine not only who or what can have access to a specific system resource but also the type of actions that can be performed on the resource (read write etc) As part of controlling access to AWS resources users and processes must present credentials to confirm that they are authorized to perform specific functions or have access to specific resources The credentials required by AWS vary depending on the type of service and the access method and include passwords cryptographic keys and certificates Access to AWS resources can be enabled through the AWS account individual AWS Identify and Access Management (IAM) user accounts created under the AWS account or identity federation with the customer ’s corporate directory (single signon) AWS Identity and Access Management (IAM) enables users to securely control access to AWS services and resources Using IAM you can create and manage AWS users and groups and use permissions to allow and deny permissions to AWS resources Major audit focus: This portion of the audit focuses on identifying how users and permissions are set up for the services in AWS It is also important to ensure you are securely 
managing the credentials associated with all AWS accounts Audit approach: Validat e permissions for AWS assets are being managed in accordance with organizational policies procedures and processes Note: AWS Trusted Advisor can be leveraged to validate and verify IAM Users Groups and Role configurations Logical Access Control Checklist Checklist Item Access Management Authentication and Authorization Ensure there are internal policies and procedures for manag ing access to AWS services and Amazon EC2 instances Ensure documentation of use and configuration of AWS access controls examples and options outlined below: Description of how Amazon IAM is used for access management List of controls that Amazon IAM is used to manage – Resource management Security Groups VPN object permissions etc Use of native AWS access controls or if access is managed through ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 18 of 28 Checklist Item federated authentication which leverages the open sta ndard Security Assertion Markup Language (SAML) 20 List of AWS Accounts Roles Groups and Users Policies and policy attachments to users groups and roles (Example API Call 11) A description of Am azon IAM acco unts and roles and monitoring methods A description and configuration of systems within EC2 Remote Access Ensure there is an approval process logging process or controls to prevent unauthorized remote access Note: All access to AWS and Amazon EC2 instances is “remote access” by definition unless Direct Connect has been configured • Review process for preventing unauthorized access which may include: AWS CloudT rail for l ogging of Service level API calls AWS Clou dWatch logs to meet logging objectives IAM Policies S3 Buc ket Policies Security Groups for controls to prevent unauthorized access Review connectivity between firm network and AWS: VPN Connection between VPC and firm’s network Direct Connect (cross connect and private interfaces) between firm and AWS Defined Security Groups Network Access Control Lists and Routing tables in order to cont rol access between AWS and the network Personnel C ontrol Ensure restric tion of users to those AWS services strictly for their business function (Example API Call 12) Review the type of access control in place as it relates to A WS services AWS access control at an AWS level – using IAM with Tagging to control management of Amazon EC2 instances (start/stop/terminate) within networks Customer Access Control – using IAM (LDAP solution) to manage access to resources w hich exist in networks at the Operating System / Application layers ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 19 of 28 Checklist Item Network Access control – using AWS Security Groups ( SGs) Network Access Control Lists (NACLs) Routing Tables VPN Connections VPC Peering to control network access to resources within customer owned VPCs 5 Data Encryption Definition: Data stored in AWS is secure by default; only AWS owners have access to the AWS resources they create However customers who have sensitive data may require additional protection by encrypting the data when it is stored on AWS Only the Amazon S3 service currently provides an automated server side encryption function in addition to allowing customers to encrypt on the customer side before the data is stored For other AWS data storage options the customer must perform encryption of the data Major audit focus: Data at rest should be encrypted in the same way as on 
premises data is protected. Also, many security policies consider the Internet an insecure communications medium and would require the encryption of data in transit. Improper protection of data could create a security exposure.

Audit approach: Understand where the data resides and validate the methods used to protect the data at rest and in transit (also referred to as "data in flight").

Note: AWS Trusted Advisor can be leveraged to validate and verify permissions and access to data assets.

Data Encryption Checklist

Checklist Item

Encryption Controls
Ensure there are appropriate controls in place to protect confidential information in transport while using AWS services. Review methods for connection to the AWS Console, the management API, S3, RDS, and Amazon EC2 VPN for enforcement of encryption.
Review internal policies and procedures for key management, including AWS services and Amazon EC2 instances.
Review encryption methods used, if any, to protect PINs at rest. AWS offers a number of key management services, such as KMS, CloudHSM, and Server Side Encryption for S3, which could be used to assist with data at rest encryption. (Example API Call 13-15)

6 Security Logging and Monitoring

Definition: Audit logs record a variety of events occurring within your information systems and networks. Audit logs are used to identify activity that may impact the security of those systems, whether in real time or after the fact, so the proper configuration and protection of the logs is important.

Major audit focus: Systems must be logged and monitored just as they are for on-premises systems. If AWS systems are not included in the overall company security plan, critical systems may be omitted from scope for monitoring efforts.

Audit approach: Validate that audit logging is being performed on the guest OS and critical applications installed on Amazon EC2 instances, and that implementation is in alignment with your policies and procedures, especially as it relates to the storage, protection, and analysis of the logs.

Security Logging and Monitoring Checklist:

Checklist Item

Logging Assessment, Trails, and Monitoring
Review logging and monitoring policies and procedures for adequacy, retention, defined thresholds, and secure maintenance, specifically for detecting unauthorized activity for AWS services.
Review logging and monitoring policies and procedures and ensure the inclusion of AWS services, including Amazon EC2 instances, for security-related events.
Verify that logging mechanisms are configured to send logs to a centralized server, and ensure that for Amazon EC2 instances the proper type and format of logs are retained in a similar manner as with physical systems.
For customers using AWS CloudWatch, review the process and record of the use of network monitoring. Ensure analytics of events are utilized to improve defensive measures and policies.
Review the AWS IAM credential report for unauthorized users, and AWS Config and resource tagging for unauthorized devices. (Example API Call 16)
Confirm aggregation and correlation of event data from multiple sources using AWS services such as the following (a sample set of read-only CLI checks is shown after this checklist item):
VPC Flow Logs to identify accepted/rejected network packets entering the VPC
AWS CloudTrail to identify authenticated and unauthenticated API calls to AWS services
ELB logging – load balancer logging
AWS CloudFront logging – logging of CDN distributions
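As a hedged illustration of the checks referenced above, the sketch below gathers basic evidence for several of these log sources with read-only AWS CLI calls. The trail name and load balancer name are placeholders, not values from this paper, and the commands assume credentials limited to read-only audit permissions.

#!/usr/bin/env bash
# Sketch only: sample read-only evidence gathering for the logging checklist.
# "EXAMPLE-trail" and "EXAMPLE-elb" are placeholder names.

# Service-level API auditing: confirm a CloudTrail trail exists and is logging.
aws cloudtrail describe-trails
aws cloudtrail get-trail-status --name "EXAMPLE-trail"

# Network-level logging: list VPC Flow Logs and their delivery status.
aws ec2 describe-flow-logs

# Centralized log storage: list CloudWatch Logs log groups and their retention.
aws logs describe-log-groups --query "logGroups[].[logGroupName,retentionInDays]" --output table

# Load balancer access logging (Classic Load Balancer attributes shown here).
aws elb describe-load-balancer-attributes --load-balancer-name "EXAMPLE-elb"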
Intrusion Detection and Response Review host based IDS on Amazon EC2 instances in a similar manner as with physical systems Review AWS provided evidence on where information on intrusion detection processes can be reviewed 7 Security Incident Response Definition: Under a Shared Responsibility Model security events may by monitored by the interaction of both AWS and the AWS customer AWS detects and responds to events impacting the hypervisor and the underlying infrastructure Customers manage events from the guest operating system up through the application You should understand incident response responsibilities and adapt existing security monitoring/alerting/audit tools and processes for their AWS resources Major audit focus: Security events should be monitored regardless of where the assets reside The auditor can assess consistency of deploying incident management controls across all environments and validate full coverage through testing Audit approach: Assess existence and operational effectiveness of the incident management controls for systems in the AWS environment Security Incident Response Checklist: Checklist Item Incident Reporting Ensure the incident response plan and policy for cybersecurity incidents includes AWS services and addresses controls that mitigate cybersecurity ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 22 of 28 incidents and aid recovery Ensure leveraging of existing incident monitoring tools as well as AWS available tools to monitor the use of AW S services Verify that the Incident Response Plan undergoes a periodic review and changes related to AWS are made as needed Note if the Incident Response Plan has notification procedures and how the customer addresses responsibility for losses associated with attacks or impacting instructions 8 Disaster Recovery Definition: AWS provides a highly available infrastructure that allows customers to architect resilient applications and quickly respond to major incidents or disaster scenarios However customers must ensure that they configure systems that require high availability or quick recovery times to take advantage of the multiple Regions and Availability Zones that AWS offers Major audit focus: An unidentified single point of failure and/or inadequate planning to address disaster recovery scenarios could result in a significant impact While AWS provides service level agreements (SLAs) at the individual instance/service level these should not be confused with a customer’s business continuity (BC) and disaster recovery (DR) objectives such as Recovery Time Objective (RTO) Recovery Point Objective (RPO) The BC/DR parameters are associated with solution design A more resilient design often utilizes multiple components in different AWS availability zones and involve data replication Audit approach: Understand the DR and determine the faulttolerant architecture employed for critical assets Note: AWS Trusted Advisor can be leveraged to validate and verify some aspects of the customer’s resiliency capabilities ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 23 of 28 Disaster Recovery Checklist : Checklist Item Business Continuity Plan (BCP) Ensure there is a comprehensive BCP for AWS services utilized that addresses mitigation of the effects of a cybersecurity incident and/or recover from such an incident Within the Plan ensure that AWS is included in the emergency preparedness and crisis management elements senior manager oversight 
responsibilities and the testing plan Backup and Storage Controls Review the customer’s periodic test of their backup system for AWS services (Example API Call 17 18) 1 Review inventory of data backed up to AWS services as off site backup 9 Inherited Controls Definition: Amazon has m any years of experience in designing constructing and operating largescale datacenters This experience has been applied to the AWS platform and infrastructure AWS datacenters are housed in nondescript facilities Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance intrusion detection systems and other electronic means Authorized staff must pass twofactor authentication a minimum of two times to access datacenter floors All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff AWS only provides datacenter access and information to employees and contractors who have a legitimate business need for such privileges When an employee no longer has a business need for these privileges his or her access is immediately revoked even if he or she continues to be an employee of Amazon or Amazon Web Services All physical access to datacenters by AWS employees is logged and audited routinely Major audit focus: The purpose of this audit section is to demonstrate appropriate due diligence in selecting service providers ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 24 of 28 Audit approach: Understand how you can request and evaluate thirdparty attestations and certifications in order to gain reasonable assurance of the design and operating effectiveness of control objectives and controls Inherited Controls Checklist Checklist Item Physical Security & Environmental Controls Review the AWS provided evidence for details on where information on intrusion detection processes can be reviewed that are managed by AWS for physical sec urity controls Conclusion There are many thirdparty tools that can assist you with your assessment Since AWS customers have full control of their operating systems network settings and traffic routing a majority of tools used inhouse can be used to assess and audit the assets in AWS A useful tool provided by AWS is the AWS Trusted Advisor tool AWS Trusted Advisor draws upon best practices learned from AWS’ aggreg ated operational history of serving hundreds of thousands of AWS customers The AWS Trusted Advisor performs several fundamental checks of your AWS environment and makes recommendations when opportunities exist to save money improve system performance or close security gaps This tool may be leveraged to perform some of the audit checklist items to enhance and support your organizations auditing and assessment processes ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 25 of 28 Appendix A: References and Further Reading 1 Amazon Web Services: Overview of Security Processes https://d0awsstaticcom/whitepapers/Security/AWS%20Security%20Whitepape rpdf 2 Amazon Web Services Risk and Compliance Whitepaper – https://d0awsstaticcom/whitepapers/compliance/AWS_Risk_and_Compliance_ Whitepaperpdf 3 AWS OCIE Cybersecurity Workbook https://d0awsstaticcom/whitepapers/compliance/AWS_SEC_Workbookpdf 4 Using Amazon Web Services for Disaster Recovery http://mediaamazonwebservicescom/AWS_Disaster_Recoverypdf 5 Identity federation sample application for an Active Directory 
use case http://awsamazoncom/code/1288653099190193 6 Single Signon with Windows ADFS to Amazon EC2 NET Applications http://awsamazoncom/articles/3698?_encoding=UTF8&queryArg=searchQuery &x=20&y=25&fromSearch=1&searchPath=all&searchQuery=identity%20federati on 7 Authenticating Users of AWS Mobile Applications with a Token Vending Machine http://awsamazoncom/articles/4611615499399490?_encoding=UTF8&queryAr g=searchQuery&fromSearch=1&searchQuery=Token%20Vending%20machine 8 ClientSide Data Encryption with the AWS SDK for Java and Amazon S3 http://awsamazoncom/articles/2850096021478074 9 AWS Command Line Interface – http://docsawsamazoncom/cli/latest/userguide/clichapwelcomehtml 10 Amazon Web Services Acceptable Use Policy http://awsamazoncom/aup/ ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 26 of 28 Appendix B: Glossary of Terms Authentication: Authentication is the process of determining whether someone or something is in fact who or what it is declared to be Availability Zone: Amazon EC2 locations are composed of regions and Availability Zones Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive low latency network connectivity to other Availability Zones in the same region EC2: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud It is designed to make web scale cloud computing easier for developers Hypervisor: A hypervisor also called Virtual Machine Monitor (VMM) is software/hardware platform virtualization software that allows multiple operating systems to run on a host computer concurrently IAM: AWS Identity and Access Management (IAM) enables a customer to create multiple Users and manage the permissions for each of these Users within their AWS Account Object: The fundamental entities stored in Amazon S3 Objects consist of object data and metadata The data portion is opaque to Amazon S3 The metadata is a set of name value pairs that describe the object These include some default metadata such as the date last modified and standard HTTP metadata such as ContentType The developer can also specify custom metadata at the time the Object is stored Service: Software or computing ability provided across a network (eg EC2 S3 VPC etc) ArchivedAmazon Web Services – Introduction to Auditing the Use of AWS October 2015 Page 27 of 28 Appendix C: API Calls The AWS Command Line Interface is a unified tool to manage your AWS services http://docsawsamazoncom/cli/latest/reference/indexhtml#cliaws 1 List all resources with tags aws ec2 describetags http://docsawsamazoncom/cli/latest/reference/ec2/describetagshtml 2 List all Customer Gateways on the customers AWS account: aws ec2 describecustomergateways –output table 3 List all VPN connections on the customers AWS account aws ec2 describevpnconnections 4 List all Customer Direct Connect connections aws directconnect describeconnections aws directconnect describeinterconnects aws directconnect describeconnections oninterconnect aws directconnect describevirtualinterfaces 5 List all Customer Gateways on the customers AWS account: aws ec2 describecustomergateways –output table 6 List all VPN connections on the customers AWS account aws ec2 describevpnconnections 7 List all Customer Direct Connect connections aws directconnect describeconnections aws directconnect describeinterconnects aws directconnect describeconnections oninterconnect aws directconnect 
describe-virtual-interfaces
8 Alternatively, use the security-group-focused CLI: aws ec2 describe-security-groups
9 List AMIs currently owned/registered by the customer: aws ec2 describe-images --owners self
10 List all instances launched with a specific AMI: aws ec2 describe-instances --filters "Name=image-id,Values=XXXXX" (where XXXXX = image-id value, e.g. ami-12345a12)
11 List IAM Roles/Groups/Users: aws iam list-roles; aws iam list-groups; aws iam list-users
12 List policies assigned to Groups/Roles/Users: aws iam list-attached-role-policies --role-name XXXX; aws iam list-attached-group-policies --group-name XXXX; aws iam list-attached-user-policies --user-name XXXX (where XXXX is a resource name within the customer's AWS account)
13 List KMS keys: aws kms list-aliases
14 List key rotation policy: aws kms get-key-rotation-status --key-id XXX (where XXX = key-id in the AWS account targeted, e.g. us-east-1)
15 List EBS volumes encrypted with KMS keys: aws ec2 describe-volumes --filters "Name=encrypted,Values=true"
16 Credential report: aws iam generate-credential-report followed by aws iam get-credential-report
17 Create snapshot/backup of an EBS volume: aws ec2 create-snapshot --volume-id XXXXXXX (where XXXXXXX = ID of a volume within the AWS account)
18 Confirm the snapshot/backup completed: aws ec2 describe-snapshots --filters "Name=volume-id,Values=XXXXXXX"
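The evidence-gathering calls above (in particular API calls 16 through 18, which the Backup and Storage Controls checklist references) can also be scripted rather than run one at a time. The sketch below is one illustrative way to do that with the AWS SDK for Python (boto3); boto3, the region, and the volume ID are assumptions of this example, not something the whitepaper prescribes.

```python
"""Minimal sketch: collect backup and credential-report audit evidence
(Appendix C, API calls 16-18). Assumes boto3 is installed, credentials
with the required read/snapshot permissions are configured, and the
region and volume ID below are placeholders."""
import time
import boto3

REGION = "us-east-1"                     # placeholder region
VOLUME_ID = "vol-0123456789abcdef0"      # placeholder EBS volume ID

ec2 = boto3.client("ec2", region_name=REGION)
iam = boto3.client("iam")

# API call 17: create a snapshot (backup) of the EBS volume.
snapshot = ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description="Periodic backup test - audit evidence",
)

# API call 18: confirm the snapshot completed.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
print("Snapshot", snapshot["SnapshotId"], "completed")

# API call 16: generate the IAM credential report, wait for it, retrieve it.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)
report = iam.get_credential_report()
print(report["Content"].decode("utf-8").splitlines()[0])  # CSV header row
```

Run something like this against a sandbox account first; its output simply mirrors what the raw CLI calls listed above would return.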
|
General
|
consultant
|
Best Practices
|
Introduction_to_AWS_Security_by_Design
|
1 of 14 Introduction to AWS Security by Design A Solution to Automate Security Compliance and Auditing in AWS November 2015 Amazon Web Services – Introduction Secure by Design November 2015 2 of 14 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Amazon Web Services – Introduction Secure by Design November 2015 3 of 14 Contents Abstract 4 Introduction 5 Security in the AWS Environment 5 Security by Design: Overview 6 Security by Design Approach 6 Impact of Security by Design 8 SbD Approach Details 9 SbD: How to Get Started 12 4 of 14 Abstract Security by Design (SbD) is a security assurance approach that enables customers to formalize AWS account design automate security controls and streamline auditing This whitepaper discusses the concepts of Security by Design provides a fourphase approach for security and compliance at scale across multiple industries points to the resources available to AWS customers to implement security into the AWS environment and describes how to validate controls are operating 5 of 14 Introduction Security by Design (SbD) is a security assurance approach that enables customers to formalize AWS account design automate security controls and streamline auditing It is a systematic approach to ensure security; instead of relying on auditing security retroactively SbD provides you with the ability to build security control in throughout the AWS IT management process SbD encompasses a fourphase approach for security and compliance at scale across multiple industries standards and security criteria AWS SbD is about designing security and compliance capabilities for all phases of security by designing everything within the AWS customer environment: the permissions the logging the use of approved machine images the trust relationships the changes made enforcing encryption and more SbD enables customers to automate the frontend structure of an AWS account to make security and compliance reliably coded into the account Security in the AWS Environment The AWS infrastructure has been designed to provide the highest availability while putting strong safeguards in place regarding customer privacy and segregation When deploying systems in the AWS Cloud AWS and its customers share the security responsibilities AWS manages the underlying infrastructure while your responsibility is to secure the IT resources deployed in AWS AWS allows you to formalize the application of security controls in the customer platform simplifying system use for administrators and allowing for a simpler and more secure audit of your AWS environment There are two aspects of AWS security: Security of the AWS environment The AWS account itself has configurations and features you can use to build in security 
Identities logging functions encryption functions and rules around how the systems are used and networked are all part of the AWS environment you manage Security of hosts and applications The operating systems databases stored on disks and the applications customers manage need security protections as well This is up to the AWS customer to manage Security process tools Amazon Web Services – Introduction Secure by Design November 2015 6 of 14 and techniques which customers use today within their onpremise environments also exist within AWS The Security by Design approach here applies primarily to the AWS environment The centralized access visibility and transparency of operating with the AWS cloud provides for increased capability for designing endtoend security for all services data and applications in AWS Security by Design: Overview SbD allows customers to automate the fundamental structure to reliably code security and compliance of the AWS environment making it easier to render noncompliance for IT controls a thing of the past By creating a secure and repeatable approach to the cloud infrastructure approach to security; customers can capture secure and control specific infrastructure control elements These elements enable deployment of security compliant processes for IT elements such as predefining and constraining the design of AWS Identify and Access Management (IAM) AWS Key Management Services (KMS) and AWS CloudTrail SbD follows the same general concept as Quality by Design or QbD Quality by Design is a concept first outlined by quality expert Joseph M Juran in Juran on Quality by Design Designing for quality and innovation is one of the three universal processes of the Juran Trilogy in which Juran describes what is required to achieve breakthroughs in new products services and processes The general shift in manufacturing companies moving to a QbD approach is to ensure quality is built into the manufacturing process moving away from using postproduction quality checks as the primary way in which quality is controlled As with QbD concepts Security by Design can also be planned executed and maintained through system design as a reliable way to ensure realtime scalable and reliable security throughout the lifespan of a technology deployment in AWS Relying on the audit function to fix present issues around security is not reliable or scalable Security by Design Approach SbD outlines the inheritances the automation of baseline controls the operationalization and audit of implemented security controls for AWS infrastructure operating systems services and applications running in AWS This Amazon Web Services – Introduction Secure by Design November 2015 7 of 14 standardized automated and repeatable architectures can be deployed for common use cases security standards and audit requirements across multiple industries and workloads We recommend building in security and compliance into your AWS account by following a basic fourphase approach: • Phase 1 – Understand your requirements Outline your policies and then document the controls you inherit from AWS document the controls you own and operate in your AWS environment and decide on what security rules you want to enforce in your AWS IT environment • Phase 2 – Build a “secure environment” that fits your requirements and implementation Define the configuration you require in the form of AWS configuration values such as encryption requirements (forcing server side encryption for S3 objects) permissions to resources (which roles apply to 
certain environments) which compute images are authorized (based on hardened images of servers you have authorized) and what kind of logging needs to be enabled (such as enforcing the use of CloudTrail on all resources for which it is available) Since AWS provides a mature set of configuration options (with new services being regularly released) we provide some templates for you to leverage for your own environment These security templates (in the form of AWS CloudFormation Templates) provide a more comprehensive rule set that can be systematically enforced We have developed templates that provide security rules that conform to multiple security frameworks and leading practices These prepackaged industry template solutions are provided to customers as a suite of templates or as stand alone templates based on specific security domains (eg access control security services network security etc) More help to create this “secure environment” is available from AWS experienced architects AWS Professional Services and partner IT transformation leaders These teams can work alongside your staff and audit teams to focus on high quality secure customer environments in support of thirdparty audits • Phase 3 – Enforce the use of the templates Enable Service Catalog and enforce the use of your template in the catalog This is the step which enforces the use of your “secure environment” in new Amazon Web Services – Introduction Secure by Design November 2015 8 of 14 environments that are being created and prevents anyone from creating an environment that doesn’t adhere to your “secure environment” standard rules or constraints This effectively operationalizes the remaining customer account security configurations of controls in preparation for audit readiness • Phase 4 – Perform validation activities Deploying AWS through Service Catalog and the “secure environment” templates creates an auditready environment The rules you defined in your template can be used as an audit guide AWS Config allows you to capture the current state of any environment which can then be compared with your “secure environment” standard rules This provides audit evidence gathering capabilities through secure “read access” permissions along with unique scripts which enable audit automation for evidence collection Customers will be able to convert traditional manual administrative controls to technically enforced controls with the assurance that if designed and scoped properly the controls are operating 100% at any point in time versus traditional audit sampling methods or pointintime reviews This technical audit can be augmented by preaudit guidance; support and training for customer auditors to ensure audit personnel understand the unique audit automation capabilities which the AWS cloud provides Impact of Security by Design SbD Architecture is meant to achieve the following: • Creating forcing functions that cannot be overridden by the users without modification rights • Establishing reliable operation of controls • Enabling continuous and realtime auditing • The technical scripting your governance policy The result is an automated environment enabling the customer’s security assurance governance security and compliance capabilities Customers can now get reliable implementation of what was previously written in policies standards and regulations Customers can create enforceable security and compliance which in turn creates a functional reliable governance model for AWS customer environments Amazon Web Services – Introduction 
Secure by Design November 2015 9 of 14 SbD Approach Details Phase 1 – Understand Your Requirements Start by performing a security control rationalization effort You can create a security Controls Implementation Matrix (CIM) that will identify inherency from existing AWS certifications accreditations and reports as well as identify the shared customer architecture optimized controls which should be implemented in any AWS environment regardless of security requirements The result of this phase will provide a customer specific map (eg AWS Control Framework) which will provide customers with a security recipe for building security and compliance at scale across AWS services CIM works to map features and resources to specific security controls requirements Security compliance and audit personnel can leverage these documents as a reference to make certifying and accrediting of systems in AWS more efficient The matrix outlines control implementation reference architecture and evidence examples which meet the security control “risk mitigation” for the AWS customer environment Figure 1: NIST SP 80053 rev 4 control security control matrix • Security Services Provided (Inherency) Customers can reference and inherit security control elements from AWS based on their industry and the AWS associated certification attestation and/or report (eg PCI FedRAMP ISO etc) The inheritance of controls can vary based on certifications and reports provided by AWS • Cross Service Security (Shared) Cross service security controls are those which both AWS and the customer implement within the host operating system and the guest operating systems These controls include technical operational and administrative (eg IAM Security Groups Configuration Management etc) controls which in some case can be partially inherited (eg Fault Amazon Web Services – Introduction Secure by Design November 2015 10 of 14 Tolerance) Example: AWS builds its data centers in multiple geographic regions as well as across multiple Availability Zones within each region offering maximum resiliency against system outages Customers should leverage this capability by architecting across separate Availability Zones in order to meet their own fault tolerance requirements • Service Specific Security (Customer) Customer controls may be based on the system and services they deploy in AWS These customer controls may also be able to leverage several cross service controls such as IAM Security Groups and defined configuration management processes • Optimized IAM Network and Operating Systems (OS) Controls These controls are security control implementations or security enhancements an organization should deploy based on leading security practices industry requirements and/or security standards These controls typically cross multiple standards and service and can be scripted as part of a defined “secure environment” through the use of AWS CloudFormation templates and Service Catalog Phase 2 – Build a “Secure Environment” This enables you to connect the dots on the wide range of security and audit services and features we offer and provide security compliance and auditing personnel a straightforward way to configure an environment for security and compliance based on “least privileges” across the AWS customer environment This helps align the services in a way that will make your environment secure and auditable real time verses within point in time or period in time • Access Management Create groups and roles like developers testers or administrators and provide 
them with their own unique credentials for accessing AWS cloud resources through the use of groups and roles • Network Segmentation Set up subnets in the cloud to separate environments (that should remain isolated from one another) For example to separate your development environment from your production environment and then configure network ACLs to control how traffic is routed between them Customers can also set up separate management environments to ensure security integrity through the use of a Bastion host for limiting direct access to Amazon Web Services – Introduction Secure by Design November 2015 11 of 14 production resources • Resource Constraints & Monitoring Establish hardened guest OS and services related to use of Amazon Elastic Compute Cloud (Amazon EC2) instances along with the latest security patches; perform backups of your data; and install antivirus and intrusion detection tools Deploy monitoring logging and notification alarms • Data Encryption Encrypt your data or objects when they’re stored in the cloud either by encrypting automatically on the cloud side or on the client side before you upload it Phase 3 – Enforce the Use of Templates After creating a “secure environment” you need to enforce its use in AWS You do this by enforcing Service Catalog Once you enforce the Service Catalog everyone with access to the account must create their environment using the CloudFormation templates you created Every time anyone uses the environment all those “secure environment” standard rules and/or constraints will be applied This effectively operationalizes the remaining customer account security configurations of controls and prepares you for audit readiness Phase 4 – Perform Validation Activities The goal of this phase is to ensure AWS customers can support an independent audit based on public generallyaccepted auditing standards Auditing standards provide a measure of audit quality and the objectives to be achieved when auditing a system built within an AWS customer environment AWS provides tooling to detect whether there are actual instances of noncompliance AWS Config gives you the pointintime current settings of your architecture You can also leverage AWS Config Rules a service that allows you to use your secure environment as the authoritative criteria to execute a sweeping check of controls across the environment You’ll be able to detect who isn’t encrypting who is opening up ports to the Internet and who has databases outside a production VPC Any measurable characteristic of any AWS resource in the AWS environment can be checked The ability to do a sweeping audit is especially valuable if you are working on an AWS account for which you did not first establish and enforce a secure environment This allows you to check the entire account no matter how it was Amazon Web Services – Introduction Secure by Design November 2015 12 of 14 created and audit it against your secure environment standard With AWS Config Rules you can also continually monitor it and the console will show you at any time which IT resources are and aren’t in compliance In addition you will know if a user was out of compliance even if for a brief period of time This makes pointintime and periodintime audits extremely effective Since auditing procedures differ across industry verticals AWS customers should review the audit guidance provided based on their industry vertical If possible engage audit organizations that are “cloudaware” and understand the unique audit automation capabilities that AWS provides 
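As a concrete illustration of the audit automation capabilities just mentioned, the sweeping compliance check described under Phase 4 can be pulled programmatically from AWS Config. The sketch below is an assumption-laden example using the AWS SDK for Python (boto3), which this paper itself does not prescribe; the region is a placeholder, and a NextToken loop would be needed for large rule sets.

```python
"""Minimal sketch: list resources that AWS Config Rules report as
noncompliant. Assumes boto3 is installed and AWS Config (with at least
one rule) is already enabled in the account; the region is a placeholder."""
import boto3

config = boto3.client("config", region_name="us-east-1")  # placeholder region

# Sweep the rules in this region and keep only the noncompliant ones.
summary = config.describe_compliance_by_config_rule(
    ComplianceTypes=["NON_COMPLIANT"]
)
for item in summary["ComplianceByConfigRules"]:
    rule = item["ConfigRuleName"]
    # Drill down to the individual resources that violate this rule.
    details = config.get_compliance_details_by_config_rule(
        ConfigRuleName=rule, ComplianceTypes=["NON_COMPLIANT"]
    )
    for result in details["EvaluationResults"]:
        qualifier = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
        print(rule, qualifier["ResourceType"], qualifier["ResourceId"])
```

Resource-level output of this kind can back the point-in-time and period-in-time audits described above with specific evidence.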
Work with your auditor to determine if they have experience with auditing AWS resources; if they do not, AWS provides several training options that address how to audit AWS services through an instructor-led, eight-hour class including hands-on labs. For more information please contact: awsaudittraining@amazon.com. Additionally, AWS provides several audit evidence gathering capabilities through secure read access, along with unique API (Application Programming Interface) scripts which enable audit automation for evidence collection. This provides auditors the ability to perform 100% audit testing (versus testing with a sampling methodology).
SbD: How to Get Started
Here are some starter resources to get you and your teams ramped up:
• Take the self-paced training on "Auditing your AWS Architecture". This allows for hands-on exposure to the features and interfaces of AWS, in particular the configuration options that are available to auditors and security control owners.
• Request more information about how SbD can help by email: awssecuritybydesign@amazon.com
• Be familiar with additional relevant resources available to you:
o Amazon Web Services: Overview of Security Processes
o Introduction to Auditing the Use of AWS Whitepaper
o Federal Financial Institutions Examination Council (FFIEC) Audit Guide
o SEC Cybersecurity Initiative Audit Guide
Further Reading
• AWS Compliance Center: http://aws.amazon.com/compliance
• AWS Security by Design: http://aws.amazon.com/compliance/securitybydesign
• AWS Security Center: http://aws.amazon.com/security
• FedRAMP FAQ: http://aws.amazon.com/compliance/fedramp
• Risk and Compliance Whitepaper: https://d0.awsstatic.com/whitepapers/compliance/AWS_Risk_and_Compliance_Whitepaper.pdf
• Security Best Practices Whitepaper: https://d0.awsstatic.com/whitepapers/awssecuritybestpractices.pdf
• AWS Products Overview: http://aws.amazon.com/products/
• AWS Sales and Business Development: https://aws.amazon.com/compliance/contact/
• Government and Education on AWS: https://aws.amazon.com/governmenteducation/
• AWS Professional Services: https://aws.amazon.com/professionalservices
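Looping back to Phase 3, the enforcement step means end users launch environments only from the vetted Service Catalog products backed by your "secure environment" CloudFormation templates. The sketch below shows what such a launch can look like programmatically; it assumes the AWS SDK for Python (boto3), and the product ID, provisioning artifact ID, and parameter key are placeholders rather than values from this paper (in practice users would typically launch products from the Service Catalog console).

```python
"""Minimal sketch: provision an approved "secure environment" product
from AWS Service Catalog. Assumes boto3 is installed and that the
product below has already been shared with this account; all IDs and
parameter names are placeholders."""
import boto3

sc = boto3.client("servicecatalog", region_name="us-east-1")  # placeholder region

response = sc.provision_product(
    ProductId="prod-examplexxxxxxx",             # placeholder product ID
    ProvisioningArtifactId="pa-examplexxxxxxx",  # placeholder product version ID
    ProvisionedProductName="secure-dev-environment",
    ProvisioningParameters=[
        {"Key": "Environment", "Value": "dev"},  # placeholder template parameter
    ],
)

# Check the provisioning record; in practice, poll until SUCCEEDED or FAILED.
record_id = response["RecordDetail"]["RecordId"]
status = sc.describe_record(Id=record_id)["RecordDetail"]["Status"]
print("Provisioning status:", status)
```

Because every launch goes through the catalog, the standard rules and constraints baked into the template apply automatically, which is what makes the resulting environment audit-ready.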
|
General
|
consultant
|
Best Practices
|
Introduction_to_AWS_Security_Processes
|
ArchivedIntroduction to AWS Security Processes June 2016 THIS PAPER HAS BEEN ARCHIVED For the latest technical content see https://awsamazoncom/architecture/securityidentitycomplianceArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 2 of 45 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’ current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’ products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates su ppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 3 of 45 Table of Contents Introduction 5 Shared Security Responsibility Model 5 AWS Security Responsibilities 6 Customer Security Responsibilities 7 AWS Global Security Infrastructure 7 AWS Compliance Programs 8 Physical and Environmental Security 9 Fire Detection and Suppression 9 Power 9 Climate and Temperature 9 Management 10 Storage Device Dec ommissioning 10 Business Continuity Management 10 Availability 10 Incident Response 10 Company Wide Executive Review 11 Communication 11 AWS Access 11 Account Review and Audit 11 Back ground Checks 12 Credentials Policy 12 Secure Design Principles 12 Change Management 12 Software 12 Infrastructure 13 AWS Account Security Features 13 AWS Credentials 14 Passwords 15 AWS Multi Factor Authentication (AWS MFA) 15 Access Keys 16 Key Pairs 17 X509 Certificates 18 Individual User Accounts 18 ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 4 of 45 Secu re HTTPS Access Points 19 Security Logs 19 AWS Trusted Advisor Security Checks 20 Networking Services 20 Amazon Elastic Load Balancing Security 20 Amazon Virtual Private Cloud (Amazon VPC) Security 22 Amazon Route 53 Security 28 Amazon CloudFront Security 29 AWS Direct Connect Security 32 Appendix – Glos sary of T erms 33 Document Revisions 44 Jun 2016 44 Nov 2014 44 Nov 2013 44 May 2013 45 ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 5 of 45 Introduction Amazon Web Services (AWS) delivers a scalable cloud computing platform with high availability and dependability providing the tools that enable customers to run a wide range of applications Helping to protect the confidentiality integrity and availability of our customers’ systems and data is of the utmost importance to AWS as is maintaining customer trust and confidence This document is intended to answer questions such as “How does AWS help me protect my data?” Specifically AWS physical and operational security processes are described for the network and server infrastructure under AWS’ management as well as service specific security implementations Shared Security Responsibility Model When using AWS services customers maintain complete control over their content and are responsible for managing critical content security requirements including: • What content they choose to store on AWS • Which AWS services are 
used with the content • In what country that content is stored • The format and structure of that content and whether it is masked anonymised or encrypted • Who has access to that content and how those access rights are granted managed and revoked Because AWS customers retain control over their data they also retain responsibilities relating to that content as part of the AWS “shared responsibility” model This shared responsibility model is fundamental to understanding the respective roles of the customer and AWS in the context of the Cloud Security Principles Under the shared responsibility model AWS operates manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate In turn customers assume responsibility for and management of their operating system (including updates and security patches) other associated application software as well as the configuration of the AWS provided security group firewall Customers should carefully consid er the services they choose as their responsibilities vary depending on the services they use the integration of those services into their IT environments and applicable laws and regulations It is possible to enhance security and/or meet more stringent compliance requirements by leveraging technology such as host based firewalls host based intrusion detection/ prevention and encryption AWS provides tools and information to assist customers in their efforts to account for and validate that controls ar e operating effectively in their extended IT environment More information can be found on the AWS Compliance center at http://awsamazoncom/compliance ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 6 of 45 Figure 1: AWS Shar ed Security Responsib ility Model The amount of security configuration work you have to do varies depending on which services you select and how sensitive your data is However there are certain security features such as individual user accounts and credentials SSL/TLS for data transmissions and user a ctivity logging that you should configure no matter which AWS service you use For more information about these security features see the “AWS Account Security Features” section below AWS Security Responsi bilities AWS is responsible for protecting the global infrastructure that runs all of the services offered in the AWS cloud This infrastructure is comprised of the hardware software networking and facilities that run AWS services Protecting this infrastructure is AWS ’ number one priority and while you can’t visit our data centers or offices to see this protection firsthand we provide several reports from third party auditors who have verified our compliance with a variety of computer security standards and regulatio ns (for more information visit ( awsamazoncom/compliance ) Note that in addition to protecting this global infrastructure AWS is responsible for the security configuration of its products that are considered managed services Examples of these types of services include Amazon DynamoDB Amazon RDS Amazon Redshift Amazon Elastic MapReduce Amazon WorkSpaces and several other services These services provide the scalability and flexibility of cloud based resources with the additional benefit of being managed For these services AWS will handle basic security tasks like guest operating system (OS) and database patching firewall configuration ArchivedAmazon Web Services – Overview of Security Processes 
June 2016 Page 7 of 45 and disaster recovery For most of these managed services all you have to do is configure logical access controls for the resources and protect your account credentials A few of them may require additional tasks such as setting up database user accounts but overall the security configuration work is performed by the service Customer Security Responsibilities With the AWS cloud you can provision virtual servers storage databases and desktops in minutes instead of weeks You can also use cloudbased analytic s and workflow tools to process y our data as you need it and then store it in the cloud or in your own data centers Whi ch AWS services you use will determ ine how much configuration wor k you have to perform as part of your security responsib ilities AWS products that fall into the well understood category of Infrastructure as a Serv ice (IaaS) such as Amazon EC2 and Amazon VPC are completely under your control and require you to perform all of the necessary security configuration and management tasks For example for EC2 instances you’re responsible for management of the guest OS (including updates and security patches) any application software or utilities you install on the instances and the configuration of the AWS provided firewall (called a security group) on each instance These are basically the same security tasks that you’re used to performing no matter where your servers are located AWS managed services like Amaz on RDS or Amaz on Redshift provide all of the resources you need in order to perform a specific task but without the configuration work that can c ome with them With managed services you don’t have to worr y about laun ching and maintaining instan ces patching the guest OS or database or replicating databases AWS handles that for you However as with all services you shou ld prote ct your AWS Account credentia ls and set up individu al user accounts with Amazon Identity and Access Management (IAM) so that each of your users has their own credentials and you can implement segregation of duties We also recommend usin g mult ifactor authent ication (MFA) with each account requ iring the use of SSL/TLS to commun icate with your AWS resources and setting up API/user activity logging with AWS CloudTrail For more information about additional measures you can take refer to the AWS Sec urity Resources webpage AWS Global Security Infrastructure AWS operates the global cloud infrastructure that you use to provision a variety of basic computing resources such as processing and storage The AWS global infrastructure includes the facilities network hardware and operational software (eg host OS virtualization software etc) that support the provisioning and use of these resources The AWS global infra structure is designed and managed according to security best practices as well as a variety of security compliance standards As an AWS customer you can be assured that you’re building web architectures on top of some of the most secure computing infrastr ucture in the world ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 8 of 45 AWS Compliance Program s Amazon Web Services Comp liance enables customers to understand the robust contro ls in place at AWS to maintain security and data protect ion in the cloud As systems are built on top of the AWS cloud infrastructure comp liance responsib ilities will be shared By tying together governance focused audit friend ly service features with applicable comp liance or audit standards AWS Comp liance 
enab lers build on traditional programs; help ing customers to establish and operate in an AWS security contro l environment The IT infrastructure that AWS provides to its customers is designed and managed in alignment with security best practices and a variety of IT securit y standards including: • SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70) • SOC 2 • SOC 3 • FISMA • FedRAMP • DOD SRG Levels 2 and 4 • PCI DSS Level 1 • EU Model Clauses • ISO 9001 / ISO 27001 / ISO 27017 / ISO 27018 • ITAR • IRAP • FIPS 1402 • MLPS Level 3 • MTCS In addition the flexibility and control that the AWS platform provides allows customers to deploy solutions that meet several industry specific standards including: • Criminal Justice Information Services ( CJIS ) • Cloud Security Alliance ( CSA ) • Family Educational Rights and Privacy Act ( FERPA ) • Health Insurance Portability and Accountability Act ( HIPAA ) • Motion Picture Association of America ( MPAA ) ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 9 of 45 AWS provides a wide range of information regarding its IT control environment to customers through white papers reports certifications accreditations and other thirdparty attestations More information is available in the Risk and Compliance whitepaper available at http://awsamazoncom/compliance/ Physical and Environmental Security AWS’ data centers are state of the art utilizing innovative architectural and engineering approaches AWS has many years of experience in designing constructing and operating large scale data centers This experience has been applied to the AWS platform and infrastructure AWS dat a centers are housed in nondescript facilities Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance intrusion detection systems and other electronic means Authorized staff must pass twofactor authentication a minimum of two times to access data center floors All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges When an employee no longer has a business need for these privileges his or her access is immediately revoked even if they continue to be an employee of Amazon or Amazon Web Services All physical access to data centers by AWS employees is logged and audited routinely Fire Detection and Suppression Automatic fire detection and suppression equipment has been installed to reduce risk The fire detection system utilizes smoke detection sensors in all data center environments mechanical and electrical infrastructure spaces chiller rooms and generator equipment rooms These areas are protected by either wet pipe double interlocked pre action or gaseous sprinkler systems Power The data center electrical power systems are designed to be fully redundant and maintainable without impact to operations 24 hours a day and seven days a week Uninterruptible Power Supply (UPS) units provide back up power in the event of an electrical failure for critical and essential loads in the facility Data centers use generators to provide back up power for the entire facility Climate and Temperature Climate control is required to maintain a constant operating temperature for servers and other hardware which prevents overheating and reduces the possibility of service outages Data centers are 
conditioned to maintain atmospheric conditions at optimal levels Personnel and systems monitor and control temperature and humidity at appropriate levels ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 10 of 45 Management AWS monitors electrical mechanical and life support systems and equipment so that any issues are immediately identified Preventative maintenance is performed to maintain the continued operability of equipment Storage Device Decommissioning When a storage device has reached the end of its useful life AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals AWS uses techniques detailed NIST 800 88 (“Guidelines for Media Sanitization as part of the decommissioning process“) Business Continuity Management AWS’ infrastructure has a high level of availability and provides customers the features to deploy a resilient IT architecture AWS has designed its systems to tolerate system or hardware failures with minimal customer impact Data center Business Continuity Management at AWS is under the direction of the Amazon Infrastructure Group Availability Data centers are built in clusters in various global regions All data centers are online and serving customers; no data center is “cold” In case of failure automated processes move customer data traffic away from the affected area Core applications are deployed in an N+1 configuration so that in the event of a data center failure there is sufficient capacity to enable traffic to be load balanced to the rem aining sites AWS provides you with the flexibility to place instances and store data within multiple geographic regions as well as across multiple availability zones within each region Each availability zone is designed as an independent failure zone T his means that availability zones are physically separated within a typical metropolitan region and are located in lower risk flood plains (specific flood zone categorization varies by Region) In addition to discrete uninterruptable power supply (UPS) and onsite backup generation facilities they are each fed via different grids from independent utilities to further reduce single points of failure Availability zones are all redundantly connected to multiple tier 1 transit providers You should architect your AWS usage to take advantage of multiple regions and availability zones Distributing applications across multiple availability zones provides the ability to remain resilient in the face of most failure modes including natural disasters or system failures Incident Response The Amazon Incident Management team employs industry standard diagnostic procedures to drive resolution during business impacting events Staff operators ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 11 of 45 provide 24x7x365 coverage to detect incidents and to manage the impact and resolution Company Wide Executive Review Amazon’s Internal Audit group regularly reviews AWS resiliency plans which are also periodically reviewed by members of the Senior Executive management team and the Audit Committee of the Board of Directors Commu nication AWS has implemented various methods of internal communication at a global level to help employees understand their individual roles and responsibilities and to communicate significant events in a timely manner These methods include orientation and training programs for newly hired employees; regular management meetings for updates on business 
performance and other matters; and electronic means such as video conferencing electronic mail messages and the posting of information via the Amazon int ranet AWS has also implemented various methods of external communication to support its customer base and the communit y M echan isms are in place to allow the customer support team to be notified of operational issues that impact the customer experience A "Service Health Dashboard " is available and maintained by the customer support team to alert customers to any issues that may be of broad impact The “AWS Security Center ” is available to provide you with securit y and comp liance details about AWS You can also subscribe to AWS Support offerin gs that include direct commun ication with the customer support team and proacti ve alerts to any c ustomer impacting issues AWS Access The AWS Production network is segregated from the Amazon Corporate network and requires a separate set of credentials for logical access The Amazon Corporate network relies on user IDs passwords and Kerberos while the AWS Production network requires SSH public key authentication through a bastion host AWS developers and administrators on the Amazon Corporate network who need to access AWS cloud components must explicitly request access through the AWS access management sy stem All requests are reviewed and approved by the appropriate owner or manager Account Review and Audit Accounts are reviewed every 90 days; explicit re approval is required or access to the resource is automatically revoked Access is also automatical ly revoked when an employee’s record is terminated in Amazon’s Human Resources system Windows and UNIX accounts are disabled and Amazon’s permission management system removes the user from all systems ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 12 of 45 Requests for changes in access are captured in the Amazon permissions management tool audit log When changes in an employee’s job function occur continued access must be explicitly approved to the resource or it will be automatically revoked Background Checks AWS has established formal policies and procedures to delineate the minimum standards for logical access to AWS platform and infrastructure hosts AWS conducts criminal background checks as permitted by law as part of pre employment screening practices for employees and commensurate with the empl oyee’s position and level of access The policies also identify functional responsibilities for the administration of logical access and security Credentials Policy AWS Security has established a credentials policy with required configurations and expiration intervals Passwords must be complex and are forced to be changed every 90 days Secure Design Principles AWS’ development process follows secure software development best practices which include formal design reviews by the AWS Security Team threat modeling and completion of a risk assessment Static code analysis tools are run as a part of the standard build process and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts Our security risk assessment reviews begin during the design phase and the engagement lasts through launch to ongoing operations Change Management Routine emergency and configuration changes to existing AWS infrastructure are authorized logged tested approved and documented in accordance with industry norms for similar systems Updates to AWS’ infrastructure are done to minimize any impact on 
the customer and their use of the services AWS will communicate with customers either via email or through the AWS Service Health Dashboard (when service use is likely to be adversely affected ) Software AWS applies a systematic approach to managing change so that changes to customerimpacting services are thoroughly revie wed tested approved and well communicated The AWS change management process is designed to avoid unintended service disruptions and to maintain the integrity of service to the customer Changes deployed into production environments are: • Reviewed: Peer reviews of the technical aspects of a change are required • Tested: Changes being applied are tested to help ensure they will behave as expected and not adversely impact performance ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 13 of 45 • Approved: All changes must be authorized in order to provide appropriate oversight and understanding of business impact Changes are typically pushed into production in a phased deployment starting with lowest impact areas Deployments are tested on a single system and closely monitored so impacts can be evaluated Service owners have a number of configurable metrics that measure the health of the service’s upstream dependencies These metrics are closely monitored with thresholds and alarming in place Rollback procedures are documented in the Change Management (CM) ticket When possible changes are scheduled during regular change windows Emergency changes to production systems that require deviations from standard change management procedures are associated with an incident and are logged and approved as appropriate Perio dically AWS performs self audits of changes to key services to monitor quality maintain high standards and facilitate continuous improvement of the change management process Any exceptions are analyzed to determine the root cause and appropriate actio ns are taken to bring the change into compliance or roll back the change if necessary Actions are then taken to address and remediate the process or people issue Infrastructure Amazon’s Corporate Applications team develops and manages software to automa te IT processes for UNIX/Linux hosts in the areas of third party software delivery internally developed software and configuration management The Infrastructure team maintains and operates a UNIX/Linux configuration management framework to address hardw are scalability availability auditing and security management By centrally managing hosts through the use of automated processes that manage change AWS is able to achieve its goals of high availability repeatability scalability security and disaster recovery Systems and network engineers monitor the status of these automated tools on a continuous basis reviewing reports to respond to hosts that fail to obtain or update their configuration and software Internally developed configuration management software is installed when new hardware is provisioned These tools are run on all UNIX hosts to validate that they are configured and that software is installed in compliance with standards determined by the role assigned to the host This configurati on management software also helps to regularly update packages that are already installed on the host Only approved personnel enabled through the permissions service may log in to the central configuration management servers AWS Account Security Features AWS provides a variety of tools and features that you can use to keep your AWS Account and resources safe from 
unauthorized use This includes credentials for access control HTTPS endpoints for encrypted data transmission the creation of separate IAM u ser accounts user activity logging for security monitoring and Trusted Advisor security checks You can take advantage of all of these security tools no matter which AWS ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 14 of 45 services you select AWS Credentials To help ensure that only authorized users and processes access your AWS Account and resources AWS uses several types of credentials for authentication These include passwords cryptographic keys digital signatures and certificates We also provide the option of requiring multi factor authentication (MFA) to log into your AWS Account or IAM user accounts The following table highlights the various AWS credentials and their uses: Credentia l Type Use Descrip tion Passwords AWS root account or IAM user account login to the AWS Management Console A string of characters used to log into your AWS account or IAM account AWS passwords must be a minimum of 6 characters and may be up to 128 characters MultiFactor Authentication (MFA) AWS root account or IAM user account login to the AWS Management Console A sixdigit single use code that is required in addition to your password to log in to your AWS Account or IAM user account Access Keys Digitally signed requests to AWS APIs (using the AWS SDK CLI or REST /Query APIs) Includes an access key ID and a secret access key You use access keys to digitally sign programmat ic requests that you make to AWS Key Pairs • SSH login to EC2 instances • CloudFront signed URLs • Windows instances To log in to your instance you must create a key pair specify the name of the key pair when you launch the instance and provide the private key when you connect to the instance Linux instances have no password and you use a key pair to log in using SSH With Windows instances you use a key pair to obtain the administrator password and then log in using RDP ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 15 of 45 X509 Certificates • Digita lly signed SOAP requests to AWS APIs • SSL server certificates for HTTPS X509 certificates are only used to sign SOAP based requests (curren tly used only with Amazon S3) You can have AWS create an X509 certificate and private key that you can download or you can upload your own certificate by using the Credential Report You can download a Credential Report for your account at any time from the Security Credentials page This report lists all of your account’s users and the status of their credentials whether they use a password whether their password expires and must b e changed regularly the last time they changed their password the last time they rotated their access keys and whether they have MFA enabled For security reasons if your credentials have been lost or forgotten you cannot recover them or re download them However you can create new credentials and then disable or delete the old set of credentials In fact AWS recommends that you change (rotate) your access keys and certificates on a regular basis To help you do this without potential impact to your application’s availability AWS supports multiple concurrent access keys and certificates With this feature you can rotate keys and certificates into and out of operation on a regular basis without any downtime to your application This can help to mit igate risk from lost or compromised access keys or certificates The AWS IAM API enables 
you to rotate the access keys of your AWS Account as well as for IAM user accounts Passwords Passwords are required to access your AWS Account individual IAM user accounts AWS Discussion Forums and the AWS Support Center You specify the password when you first create the account and you can change it at any time by going to the Security Credentials page AWS passwords can be up to 128 characters long and contain special characters so we encourage you to create a strong password that cannot be easily guessed You can set a password policy for your IAM user accounts to ensure that strong passwords are used and that they are changed often A password policy is a set of rules that define the type of password an IAM user can set For more information about password policies go to Managing Passwords in Using IAM AWS Multi Factor Authentication (AWS MFA) ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 16 of 45 AWS Multi Factor Authentication (AWS MFA) is an additional la yer of security for accessing AWS services When you enable this optional feature you will need to provide a six digit single use code in addition to your standard user name and password credentials before access is granted to your AWS Account settings or AWS services and resources You get this single use code from an authentication device that you keep in your physical possession This is called multi factor authentication because more than one authentication factor is checked before access is granted: a password (something you know) and the precise code from your authentication device (something you have) You can enable MFA devices for your AWS Account as well as for the users you have created under your AWS Account with AWS IAM In addition you add MF A protection for access across AWS Accounts for when you want to allow a user you’ve created under one AWS Account to use an IAM role to access resources under another AWS Account You can require the user to use MFA before assuming the role as an additio nal layer of security AWS MFA supports the use of both hardware tokens and virtual MFA devices Virtual MFA devices use the same protocols as the physical MFA devices but can run on any mobile hardware device including a smartphone A virtual MFA devic e uses a software application that generates six digit authentication codes that are compatible with the Time Based One Time Password (TOTP) standard as described in RFC 6238 Most virtual MFA applications allow you to host more than one virtual MFA device which makes them more convenient than hardware MFA devices However you should be aware that because a virtual MFA might be run on a less secure device such as a smartphone a virtual MFA might not provide the same level of security as a hardware MFA device You can also enforce MFA authentication for AWS service APIs in order to provide an extra layer of protection over powerful or privileged actions such as terminating Amazon EC2 instances or reading sensitive data stored in Amazon S3 You do this b y adding an MFA authentication requirement to an IAM access policy You can attach these access policies to IAM users IAM groups or resources that support Access Control Lists (ACLs) like Amazon S3 buckets SQS queues and SNS topics It is easy to obta in hardware tokens from a participating third party provider or virtual MFA applications from an AppStore and to set it up for use via the AWS website More information about AWS MFA is available on the AWS websit e Access Keys AWS requires that all API requests be signed 
—that is they must include a digital signature that AWS can use to verify the identity of the requestor You calculate the digital signature using a cryptographic hash function The input to the hash function in this case includes the text of your request and your secret access key If you use any of the AWS SDKs to generate requests the digital signature ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 17 of 45 calculation is done for you; otherwise you can have your application calculate it and include it in your REST or Query requests by following the directions in our documentation Not only does the signing process help protect message integrity by preventing tampering with the request while it is in transit it also helps protect against potential replay attacks A request must reach AWS within 15 minutes of the time stamp in the request Otherwise AWS denies the request The most recent version of the digital signature calculation process is Signature Version 4 which calculates the signature using the HMAC SHA256 protocol Version 4 provides an additional measure of protection over previous versions by requiring that you sign the message using a key that is derived from your secret access key rather than using the secret access key itself In addition you der ive the signing key based on credential scope which facilitates cryptographic isolation of the signing key Because access keys can be misused if they fall into the wrong hands we encourage you to save them in a safe place and not embed them in your cod e For customers with large fleets of elastically scaling EC2 instances the use of IAM roles can be a more secure and convenient way to manage the distribution of access keys IAM roles provide temporary credentials which not only get automatically loaded to the target instance but are also automatically rotated multiple times a day Key Pairs Amazon EC2 uses public –key cryptography to encrypt and decrypt login information Public –key cryptography uses a public key to encrypt a piece of data such as a password then the recipient uses the private key to decrypt the data The public and private keys are known as a key pair To log in to your instance you must create a key pair specify the name of the key pair when you launch the instance and provide the private key when you connect to the instance Linux instances have no password and you use a key pair to log in using SS H With Windows instances you use a key pair to obtain the administrator password and then log in using RDP Creating a Key Pair You can use Amazon EC2 to create your key pair For more information see Creating Your Key Pair Using Amazon EC2 Alternatively you could use a third party tool and then import the public key to Amazon EC2 For more information see Importing Your Own Key Pair to Amazon EC2 Each key pair requires a name Be sure to choose a name that is easy to ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 18 of 45 remember Amazon EC2 associates the public key with the name that you specify as the key name Amazon EC2 stores the public key only and you store the private key Anyone who possesses your private key can decrypt your login information so it's important that you store your private keys in a secure place The keys that Amazon EC2 uses are 2048 bit SSH 2 RSA keys You can have up to five thousand key pairs per region X509 Certificates X509 certificates are used to sign SOAP based requests X509 certificates contain a public key and additional metadata (like an 
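The derived signing key described above can be illustrated in a few lines of Python. This is only a sketch of the Signature Version 4 key-derivation step; building the canonical request and string to sign is omitted, and in practice the AWS SDKs perform the entire signing process for you. The secret key and string to sign shown are placeholders.

import hashlib
import hmac


def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the Signature Version 4 signing key from a secret access key.

    The key is scoped to a date (YYYYMMDD), region, and service, so the
    long-term secret itself is never used to sign a request directly.
    """
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")


# Placeholder credentials and string to sign, for illustration only.
signing_key = sigv4_signing_key("wJalrXUtnFEMI/EXAMPLEKEY", "20160601", "us-east-1", "s3")
string_to_sign = "AWS4-HMAC-SHA256\n..."  # built from the canonical request (omitted)
signature = hmac.new(signing_key, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()
print(signature)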
expiration date that AWS verifies when you upload the certificate) and is associated with a private key When you create a request you create a digital signature with your private key and then inc lude that signature in the request along with your certificate AWS verifies that you're the sender by decrypting the signature with the public key that is in your certificate AWS also verifies that the certificate you sent matches the certificate that y ou uploaded to AWS For your AWS Account you can have AWS create an X509 certificate and private key that you can download or you can upload your own certificate by using the Security Credentials page For IAM users you must create the X509 certifica te (signing certificate) by using third party software In contrast with root account credentials AWS cannot create an X509 certificate for IAM users After you create the certificate you attach it to an IAM user by using IAM In addition to SOAP reque sts X509 certificates are used as SSL/TLS server certificates for customers who want to use HTTPS to encrypt their transmissions To use them for HTTPS you can use an open source tool like OpenSSL to create a unique private key You’ll need the private key to create the Certificate Signing Request (CSR) that you submit to a certificate authority (CA) to obtain the server certificate You’ll then use the AWS CLI to upload the certificate private key and certificate chain to IAM You’ll also need an X509 certificate to create a customized Linux AMI for EC2 instances The certificate is only required to create an instance backed AMI (as opposed to an EBS backed AMI) You can have AWS create an X509 certificate and private key that you can download or y ou can upload your own certificate by using the Security Credentials page Individual User Accounts AWS provides a centralized mechanism called AWS Identity and Access Management ( IAM ) for creating and managing individual users within your AWS Account A user can be any individual system or application that interacts with AWS resources either programmatically or through the AWS Management ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 19 of 45 Console or AWS Command Line Interface (CLI) Each user has a unique name within the AWS Account and a unique set of security credentials not shared with other users AWS IAM eliminates the need to share passwords or keys and enables you to minimize the use of your AWS Account credentials With IAM you define policies that control which AWS services your users can access and what they can do with them You can grant users only the minimum permissions they need to perform their jobs See the AWS Identity and Access Management (AWS IAM) section below for more information Secure HTTPS Access Points For greater communication security when accessing AWS resources you should use HTTPS instead of HTTP for data transmissions HTTPS uses the SSL/TLS protocol which uses public key cryptography to prevent eavesdropping tampering a nd forgery All AWS services provide secure customer access points (also called API endpoints) that allow you to establish secure HTTPS communication sessions Several services also now offer more advanced cipher suites that use the Elliptic Curve Diffie Hellman Ephemeral (ECDHE) protocol ECDHE allows SSL/TLS clients to provide Perfect Forward Secrecy which uses session keys that are ephemeral and not stored anywhere This helps prevent the decoding of captured data by unauthorized third parties even if the secret long term key itself is 
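As a minimal sketch of the least-privilege model described above, the following boto3 code creates an IAM user and attaches an inline policy that grants read-only access to a single Amazon S3 bucket and permits instance termination only when the caller has authenticated with MFA. The user name, bucket name, and policy name are illustrative placeholders, not prescribed values.

import json

import boto3

iam = boto3.client("iam")

user_name = "build-operator"        # placeholder user name
bucket = "example-build-artifacts"  # placeholder bucket name

iam.create_user(UserName=user_name)

# Least-privilege inline policy: read-only access to one bucket, and
# EC2 instance termination only when the caller authenticated with MFA.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
        },
        {
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        },
    ],
}

iam.put_user_policy(
    UserName=user_name,
    PolicyName="least-privilege-example",
    PolicyDocument=json.dumps(policy),
)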
compromised Security Logs As important as credentials and encrypted endpoints are for preventing security problems logs are just as crucial for understanding events after a problem has occurred And to be effective as a security tool a log must include not just a list of what hap pened and when but also identify the source To help you with your after thefact investigations and near realtime intrusion detection AWS CloudTrail provides a log of requests for AWS resources within your account for supported services For each event you can see what service was accessed what action was performed and who made the request CloudTrail captures information about every API call to every supported AWS resource including sign in events Once you have enabled CloudTrail event logs are delivered every 5 minutes You can configure CloudTrail so that it aggregates log files from multiple regions into a single Amazon S3 bucket From there you can then upload them to your favorite log management and analysis solutions to perform security analysis and detect user behavior patterns By default log files are stored securely in Amazon S3 but you can also archive them to Amazon Glacier t o help meet audit and compliance requirements ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 20 of 45 In addition to CloudTrail’s user activity logs you can use the Amazon CloudWatch Logs feature to collect and monitor system application and custom log files from your EC2 instances and other sources in nea rreal time For example you can monitor your web server's log files for invalid user messages to detect unauthorized login attempts to your guest OS AWS Trusted Advisor Security Checks The AWS Trusted Advisor customer support service not only monitors for cloud performance and resiliency but also cloud security Trusted Advisor inspects your AWS environment and makes recommendations when opportunities may exist to save money improve system performance or close security gaps It provides alerts on sev eral of the most common security misconfigurations that can occur including leaving certain ports open that make you vulnerable to hacking and unauthorized access neglecting to create IAM accounts for your internal users allowing public access to Amazon S3 buckets not turning on user activity logging (AWS CloudTrail) or not using MFA on your root AWS Account You also have the option for a Security contact at your organization to automatically receive a weekly email with an updated status of your Trust ed Advisor security checks The AWS Trusted Advisor service provides four checks at no additional charge to all users including three important security checks: specific ports unrestricted IAM use and MFA on root account And when you sign up for Busine ss or Enterprise level AWS Support you receive full access to all Trusted Advisor checks Networking Services Amazon Web Services provides a range of networking services that enable you to create a logically isolated network that you define establish a private network connection to the AWS cloud use a highly available and scalable DNS service and deliver content to your end users with low latency at high data transfer speeds with a content delivery web service Amazon Elastic Load Balancing Security Amazon Elastic Load Balancing is used to manage traffic on a fleet of Amazon EC2 instances distributing traffic to instances across all availability zones within a region Elastic Load Balancing has all the advantages of an on premises load balancer plus several security 
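As a brief illustration of the CloudTrail setup described above, the following boto3 sketch creates a multi-region trail and starts logging. The S3 bucket name is a placeholder; the bucket must already exist and carry a bucket policy that allows CloudTrail to deliver log files to it.

import boto3

cloudtrail = boto3.client("cloudtrail")

trail = cloudtrail.create_trail(
    Name="account-activity",
    S3BucketName="example-audit-logs",   # placeholder bucket name
    IsMultiRegionTrail=True,             # aggregate events from all regions
    IncludeGlobalServiceEvents=True,     # include IAM and sign-in events
)
cloudtrail.start_logging(Name=trail["Name"])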
benefits: • Takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer • Offers clients a single point of contact and can also serve as the first line of defense against attacks on your network • When used in an Amazon VPC supports creation and management of security groups associated with your Elastic Load Balancing to provide additional networking and security options ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 21 of 45 • Supports end toend traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connections When TLS is used the TLS server certificate used to terminate client connections can be managed centrally on the load balancer rather than on every individual instance HTTPS/TLS uses a long term secret key to generate a short term session key to be used between the server and the browser to create the ciphered (encrypted) message Amazon Elastic Load Balancing configures your load balancer with a predefined cipher set that is used for T LS negotiation when a connection is established between a client and your load balancer The pre defined cipher set provides compatibility with a broad range of clients and uses strong cryptographic algorithms However some customers may have requirements for allowing only specific ciphers and protocols (such as PCI SOX etc) from clients to ensure that standards are met In these cases Amazon Elastic Load Balancing provides options for selecting different configurations for TLS protocols and ciphers Y ou can choose to enable or disable the ciphers depending on your specific requirements To help ensure the use of newer and stronger cipher suites when establishing a secure connection you can configure the load balancer to have the final say in the ciph er suite selection during the client server negotiation When the Server Order Preference option is selected the load balancer will select a cipher suite based on the server’s prioritization of cipher suites rather than the client’s This gives you more c ontrol over the level of security that clients use to connect to your load balancer For even greater communication privacy Amazon Elastic Load Balancer allows the use of Perfect Forward Secrecy which uses session keys that are ephemeral and not stored anywhere This prevents the decoding of captured data even if the secret long term key itself is compromised Amazon Elastic Load Balancing allows you to identify the originating IP address of a client connecting to your servers whether you’re using HTT PS or TCP load balancing Typically client connection information such as IP address and port is lost when requests are proxied through a load balancer This is because the load balancer sends requests to the server on behalf of the client making your load balancer appear as though it is the requesting client Having the originating client IP address is useful if you need more information about visitors to your applications in order to gather connection statistics analyze traffic logs or manage whitel ists of IP addresses Amazon Elastic Load Balancing access logs contain information about each HTTP and TCP request processed by your load balancer This includes the IP address and port of the requesting client the backend IP address of the instance tha t processed ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 22 of 45 the request the size of the request and response and the actual request line from the client (for 
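As one way to apply the TLS guidance above to a Classic Load Balancer, the following boto3 sketch attaches one of the AWS predefined reference security policies to the HTTPS listener and turns on access logs. The load balancer name, bucket name, and choice of reference policy (ELBSecurityPolicy-2016-08) are assumptions made for illustration.

import boto3

elb = boto3.client("elb")    # Classic Load Balancer API
lb_name = "web-frontend"     # placeholder load balancer name

# Apply a predefined SSL negotiation (protocol/cipher) policy to port 443.
elb.create_load_balancer_policy(
    LoadBalancerName=lb_name,
    PolicyName="modern-tls",
    PolicyTypeName="SSLNegotiationPolicyType",
    PolicyAttributes=[
        {"AttributeName": "Reference-Security-Policy",
         "AttributeValue": "ELBSecurityPolicy-2016-08"},
    ],
)
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName=lb_name,
    LoadBalancerPort=443,
    PolicyNames=["modern-tls"],
)

# Enable access logs, delivered to S3 every 5 minutes.
elb.modify_load_balancer_attributes(
    LoadBalancerName=lb_name,
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "example-elb-logs",  # placeholder bucket name
            "EmitInterval": 5,
        }
    },
)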
example GET http://wwwexamplecom: 80/HTTP/11) All requests sent to the load balancer are logged including requests that never made it to b ack end instances Amazon Virtual Private Cloud (Amazon VPC) Security Normally each Amazon EC2 instance you launch is randomly assigned a public IP address in the Amazon EC2 address space Amazon VPC enables you to create an isolated portion of the AWS c loud and launch Amazon EC2 instances that have private (RFC 1918 ) addresses in the range of your choice (eg 10000/16 ) You can define subnets within your VPC group ing simil ar kinds of instances based on IP address rang e and then set up routin g and secur ity to contro l the flow of traffi c in and out of the instan ces and subnets AWS offers a variet y of VPC archite cture templates with configurations that provi de varying levels of public access : • VPC with a single public subnet only Your instances run in a private isolated section of the AWS cloud with direct access to the Internet Network ACLs and security groups can be used to provide strict control over inbound and outbound network traffic to your instances • VPC with public and private subnets In addition to containing a public subnet this configuration adds a private subnet whose instances are not addressable from the Internet Instances in the private subnet can establish outbound connections to the Internet via the public su bnet using Network Address Translation (NAT) • VPC with public and private subnets and hardware VPN access This configuration adds an IPsec VPN connection between your Amazon VPC and your data center effectively extending your data center to the cloud while also providing direct access to the Internet for public subnet instances in your Amazon VPC In this configuration customers add a VPN appliance on their corporate data center side • VPC with private subnet only and hardware VPN access Your instance s run in a private isolated section of the AWS cloud with a private subnet whose instances are not addressable from the Internet You can connect this private subnet to your corporate data center via an IPsec VPN tunnel You can also connect two VPCs usin g a p rivate IP address which allows instances in the two VPCs to communicate with each other as if they are within the same networ k You can c reate a VPC peerin g connect ion between your own VPCs or with a VPC in another AWS account within a single region Security feature s within Amazon VPC inclu de security group s network ACLs routin g tables and externa l gateways Each of these items is complementary to providing a ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 23 of 45 secure isolated network that can be extended throu gh selective enab ling of direct Internet access or private c onnect ivity to another network Am azon EC2 instance s runn ing within an Amazon VPC inherit all of the benefits describ ed belo w related to the guest OS and prote ction again st packet sniffing Note howe ver that you must create VPC securit y groups specifically for your Amazon VPC; any Amaz on EC2 secur ity groups you have created will not work inside your Amazon VPC Also Amaz on VPC securit y groups have additional capab ilities that Amazon EC2 secur ity groups do not have such as bein g able to change the security group after the instance is launched and bein g able to specify any proto col with a standard protoco l number (as opposed to just TCP UDP or ICM P) Each Amaz on VPC is a distinct isolated netwo rk with in the cloud; netwo rk traffi c within each 
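A minimal boto3 sketch of the VPC building blocks discussed above: it creates a VPC and a subnet, then a VPC security group that admits only HTTPS from a single address range. All CIDR blocks and names are placeholders chosen for illustration.

import boto3

ec2 = boto3.client("ec2")

# Placeholder CIDR ranges; choose ranges that fit your addressing plan.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

# A VPC security group that allows inbound HTTPS from one corporate range only.
sg = ec2.create_security_group(
    GroupName="web-tier",
    Description="Allow HTTPS from the corporate network only",
    VpcId=vpc["VpcId"],
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # placeholder corporate CIDR
    }],
)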
Amazon VPC is isolated from all other Amaz on VPC s At creation time you select an IP address range for each Amazon VPC You may c reate and attach an Internet gateway virtual private gatewa y or both to estab lish externa l connec tivity subject to the contro ls below API Access : Calls to create and delete Amaz on VPCs change routin g securit y group and netwo rk ACL parameters and perform other functions are all signed by y our Amazon Secret Acce ss Key which could be either the AWS Account ’s Secret Access Key or the Secret Access key of a user created with AWS IAM Without access to your Secret Access Key Amazon VPC API calls cannot be made on your behal f In addition API calls can be encr ypted with SSL to maintain confidentialit y Amazon recommends alwa ys usin g SSLprote cted API endpo ints AWS IAM also enables a customer to further contro l what APIs a newly created user has perm issions to call Subn ets a nd Rou te Tables: You create one or more subnets within each Amazon VPC; each instance launched in the Amazon VPC is connected to one subnet Traditional Layer 2 securit y attacks including MAC spoo fing and ARP spoo fing are blocked Each subnet in an Amaz on VPC is associated with a routing table and all network traffic leaving the subnet is processed by the routin g table to determ ine the dest ination Firewa ll (Securi ty Groups): Like Amazon EC2 Amazon VPC supports a complete firew all solution enab ling filterin g on both ingress and egress traffic from an instance The default group enables inbound commun ication from other members of the same group and outbound communication to any destination Traffic can be restricted by any IP protoco l by service port as well as source/destination IP address (individu al IP or Classless InterDomain Routin g (CIDR) block) The firewall isn’t contro lled throu gh the guest OS; rather it can be mod ified only throu gh the invocation of Amazon VPC APIs AWS supports the ability to grant granular access to different adm inistrati ve functions on the instances and the firewall therefore enab ling you to implement additional securit y throu gh separation ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 24 of 45 of duties The level of secur ity afforded by the firew all is a function of which ports you open and for what durat ion and purpose Wellinformed traffi c management and secur ity design are still required on a perinstance basis AWS further encourages you to apply additional perinstance filters with host based firewal ls such as IPtables or the Win dows Firewall Figure 5: A mazon VPC Netwo rk Architectu re Netwo rk Access Control Lists: To add a further layer of secur ity within Amazon VPC you can c onfigure netwo rk ACLs These are stateless traffi c filters that apply to all traffi c inbound or outbound from a subnet within Amazon VPC These ACL s can contain ordered rules to allow or deny traffic based upon IP protoco l by se rvice port as well as source/destination IP address Like securit y groups networ k ACL s are managed throu gh Amazon VPC APIs adding an additional layer of protection and enab ling additional securit y throu gh separation of duties The diagram below depicts how the secur ity contro ls above interrelate to enab le flexible networ k topo logies while providing c omplete contro l over networ k traffic flows ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 25 of 45 Figure 6: Flexible N etwo rk Topologies Virtual Priv ate Gateway: A virtual private gateway enables private 
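To illustrate the stateless, ordered network ACL rules described above, here is a boto3 sketch that creates a network ACL, adds deny and allow entries, and associates it with a subnet. The VPC ID, subnet ID, and CIDR ranges are placeholders, and a real deployment would add the full set of inbound and outbound rules it needs.

import boto3

ec2 = boto3.client("ec2")

vpc_id = "vpc-1a2b3c4d"        # placeholder VPC ID
subnet_id = "subnet-1a2b3c4d"  # placeholder subnet ID

acl = ec2.create_network_acl(VpcId=vpc_id)["NetworkAcl"]

# Ordered, stateless rules: rule 90 (lower number, evaluated first) denies a
# known-bad range; rule 100 allows HTTPS in from anywhere.
ec2.create_network_acl_entry(
    NetworkAclId=acl["NetworkAclId"], RuleNumber=90, Protocol="6",
    RuleAction="deny", Egress=False, CidrBlock="198.51.100.0/24",
    PortRange={"From": 0, "To": 65535},
)
ec2.create_network_acl_entry(
    NetworkAclId=acl["NetworkAclId"], RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=False, CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)

# Network ACLs are stateless, so return traffic must be allowed explicitly
# on the outbound side as well.
ec2.create_network_acl_entry(
    NetworkAclId=acl["NetworkAclId"], RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=True, CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)

# Associate the new ACL with the subnet by replacing its current association.
current = ec2.describe_network_acls(
    Filters=[{"Name": "association.subnet-id", "Values": [subnet_id]}]
)["NetworkAcls"][0]
assoc_id = next(a["NetworkAclAssociationId"]
                for a in current["Associations"] if a["SubnetId"] == subnet_id)
ec2.replace_network_acl_association(AssociationId=assoc_id,
                                     NetworkAclId=acl["NetworkAclId"])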
connec tivity between the A mazon VPC and another netwo rk Netwo rk traffic with in each virtual private gatewa y is isolated from netwo rk traff ic within all other virtual private gateways You can estab lish VPN connect ions to the virtual private gateway from gateway devices at your prem ises Each connection is secured by a preshared key in conjun ction with the IP address of the customer gatewa y device Internet Gateway: An Internet gateway may be attached to an A mazon VPC to enable direct connect ivity to Amazon S3 other AWS services and the Internet Each instance desirin g this access must either have an Elas tic IP asso ciated with it or route traffi c throu gh a NAT instance Additionally netwo rk routes are configured (see above) to direct traffic to the Internet gateway AWS provides reference NAT AMIs that you can extend to perform networ k logging deep packet inspection application layer filterin g or other securit y contro ls This access can only be mod ified throu gh the invocation of Amazon VPC APIs AWS supports the ability to grant granular access to different adm inistrative functions on the instances and the Internet gateway therefore enab ling y ou to implement additional secur ity throu gh separation of duties ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 26 of 45 Dedic ated Instances: Within a VPC you can launch Amazon EC2 instances that are physically isolated at the host hardware level (ie they will run on singletenant hardware ) An A mazon VPC can be created with ‘dedicated ’ tenan cy so that all instances launched into the Amazon VPC will utiliz e this feature Alternativel y an Amazon VPC may be created with ‘default ’ tenan cy but you can specif y dedicated tenan cy for parti cular instances launched into it Elastic Netwo rk Interfa ces: Each Amaz on EC2 instance has a default networ k interface that is assigned a private IP address on your Amazon VPC netwo rk You can c reate and attach an additional netwo rk interface known as an elasti c netwo rk interface (ENI) to any Amazon EC2 instance in your Amazon VPC for a total of two netwo rk interfaces per instance Attach ing more than one networ k interface to an instance is useful when you want to create a management netwo rk use netwo rk and security appliances in your Amazon VPC or create dualhomed instances with workloads/ro les on distin ct subnets An ENI' s attributes including the private IP address elastic IP addresses and MAC address will follow the ENI as it is attached or detached from an instance and reattached to another instance More information about Amazon VPC is availab le on the AWS website: http:/ /awsamaz oncom/ vpc/ Addi tiona l Netwo rk Access Control wi th EC2VPC If you launch instances in a region where you did not have instances before AWS launched the new EC2 VPC feature (also called Default VPC) all instances are automatic ally provisioned in a ready touse default VPC You can c hoose to create additional VPCs or you can create VPCs for instances in regions where you alread y had instances before we launched EC2VPC If you create a VPC later using regular VPC you specif y a CIDR block create subnets enter the routin g and security for those subnets and provision an Internet gateway or NAT instance if you want one of your subnets to be able to reach the Internet When you launch EC2 instances into an EC2 VPC most of this work is automati cally performed for you When you launch an instance into a default VPC usin g EC2VPC we do the following to set it up for you: • 
Create a default subnet in each Availability Zone
• Create an Internet gateway and connect it to your default VPC
• Create a main route table for your default VPC with a rule that sends all traffic destined for the Internet to the Internet gateway
• Create a default security group and associate it with your default VPC
• Create a default network access control list (ACL) and associate it with your default VPC
• Associate the default DHCP options set for your AWS account with your default VPC

In addition to the default VPC having its own private IP range, EC2 instances launched in a default VPC can also receive a public IP address.

The following list summarizes the differences between instances launched into EC2-Classic, instances launched into a default VPC, and instances launched into a nondefault VPC:

• Public IP address – In EC2-Classic, your instance receives a public IP address. In a default VPC, an instance launched in a default subnet receives a public IP address by default, unless you specify otherwise during launch. In a nondefault VPC, your instance does not receive a public IP address by default, unless you specify otherwise during launch.

• Private IP address – In EC2-Classic, your instance receives a private IP address from the EC2-Classic range each time it is started. In a default or nondefault VPC, your instance receives a static private IP address from the address range of the VPC.

• Multiple private IP addresses – In EC2-Classic, we select a single IP address for your instance; multiple IP addresses are not supported. In a default or nondefault VPC, you can assign multiple private IP addresses to your instance.

• Elastic IP address – In EC2-Classic, an EIP is disassociated from your instance when you stop it. In a default or nondefault VPC, an EIP remains associated with your instance when you stop it.

• DNS hostnames – In EC2-Classic and in a default VPC, DNS hostnames are enabled by default. In a nondefault VPC, DNS hostnames are disabled by default.

• Security group – In EC2-Classic, a security group can reference security groups that belong to other AWS accounts. In a default or nondefault VPC, a security group can reference security groups for your VPC only.

• Security group association – In EC2-Classic, you must terminate your instance to change its security group. In a default or nondefault VPC, you can change the security group of your running instance.

• Security group rules – In EC2-Classic, you can add rules for inbound traffic only. In a default or nondefault VPC, you can add rules for both inbound and outbound traffic.

• Tenancy – In EC2-Classic, your instance runs on shared hardware; you cannot run an instance on single-tenant hardware. In a default or nondefault VPC, you can run your instance on shared hardware or single-tenant hardware.

Note that security groups for instances in EC2-Classic are slightly different from security groups for instances in EC2-VPC. For example, you can add rules for inbound traffic only in EC2-Classic, but you can add rules for both inbound and outbound traffic in EC2-VPC. In EC2-Classic, you can't change the security groups assigned to an instance after it's launched, but in EC2-VPC you can change the security groups assigned to an instance after it's launched. In addition, you can't use the security groups that
you've created for use with EC2 Classic with instances in your VPC You must create security groups specifically for use with instances in your VPC The rules you create for use with a security group for a VPC can't reference a security group for EC2 Classic and vice versa Amazon Route 53 Security Amazon Route 53 is a highly available and scalable Domain Name System (DNS) service that answers DNS queries translating domain names into IP addresses so computers can communicate with each other Route 53 can be used to connect user requests to infrastructure running in AWS – such as an Amazon EC2 instance or an Amazon S3 bucket – or to infrastructure outside of AWS Amazon Route 53 lets you manage the IP addresses (records) listed for your domain names and it answers requests (queries) to translate specific domain names into their corresponding IP addresses Queries for your domain are automatically routed to a nearby DNS server using anycast in order to provide the lowest latency possible Route 53 makes it possible for you to manage traffic globally through a variety of routing types including Latency Based Routing (LBR) Geo DNS and Weighted Round Robin (WRR) — all of which can be combined with DNS Failover in order to help create a variety of low latency fault tolerant architectures The failover algorithms implemented by Amazon Route 53 are designed not only to route traffic to endpoints that are healthy but also to help avoid making disaster scenarios worse due to misconfigured health checks and applications endpoint overloads and partition failures Route 53 also offers Domain Name Registration – you can purchase and manage domain names such as examplecom and Route 53 will automatically configure default DNS settings for your domains You can buy manage and transfer (both in and out) domains from a wide selection of generic and country specific top level domains (TLDs) D uring the registration process you have the option to enable privacy protection for your domain This option will hide most of your personal ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 29 of 45 information from the public Whois database in order to help thwart scraping and spamming Amazon Route 53 is buil t using AWS’ highly available and reliable infrastructure The distributed nature of the AWS DNS servers helps ensure a consistent ability to route your end users to your application Route 53 also helps ensure the availability of your website by providing health checks and DNS failover capabilities You can easily configure Route 53 to check the health of your website on a regular basis (even secure web sites that are available only over SSL) and to switch to a backup site if the primary one is unresponsi ve Like all AWS Services Amazon Route 53 requires that every request made to its control API be authenticated so only authenticated users can access and manage Route 53 API requests are signed with an HMAC SHA1 or HMAC SHA256 signature calculated from the request and the user’s AWS Secret Access key Additionally the Amazon Route 53 control API is only accessible via SSL encrypted endpoints It supports both IPv4 and IPv6 routing You can control access to Amazon Route 53 DNS management functions by c reating users under your AWS Account using AWS IAM and controlling which Route 53 operations these users have permission to perform Amazon CloudFront Security Amazon CloudFront gives customers an easy way to distribute content to end users with low latency and high data transfer speeds It delivers dynamic 
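As a sketch of the health-check and DNS-failover capabilities described above, the following boto3 code creates an HTTPS health check and upserts the PRIMARY record of a failover pair. The hosted zone ID, domain name, and IP address are placeholders; the matching SECONDARY record pointing at the backup site is not shown.

import uuid

import boto3

r53 = boto3.client("route53")

# Health-check the primary endpoint over HTTPS (placeholder values).
health = r53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "www.example.com",
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# PRIMARY record of a failover pair, tied to the health check above.
r53.change_resource_record_sets(
    HostedZoneId="Z3EXAMPLE",   # placeholder hosted zone ID
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
            "HealthCheckId": health["HealthCheck"]["Id"],
        },
    }]},
)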
static and streaming content using a global network of edge locations Requests for customers’ objects are automatically routed to the nearest edge location so content is delivered with the best possible performance Amazon CloudFront is optimized to work with other AWS services lik e Amazon S3 Amazon EC2 Elastic Load Balancing and Amazon Route 53 It also works seamlessly with any non AWS origin server that stores the original definitive ver sions of your files Amazon CloudFront requires every request made to its control API be authenticated so only authorized users can create modify or delete their own Amazon CloudFront distributions Requests are signed with an HMAC SHA1 signature calculated from the request and the user’s private key Additionally the Amazon CloudFront control API is only accessible via SSL enabled endpoints There is no guarantee of durability of data held in Amazon CloudFront edge locations The service may from time to time remove objects from edge locations if those objects are not requested frequently Durability is provided by Amazon S3 which works as the origin server for Amazon CloudFront holding the original defin itive copies of objects delivered by Amazon CloudFront If you want contro l over who is able to downlo ad content from Am azon CloudFront you can enab le the service’s private content feature This feature has two ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 30 of 45 components : the first controls how content is delivered from the Amazon CloudFront edge lo cation to view ers on the Internet The second controls how the Amaz on Cloud Front edge locati ons access objects in Amazon S3 CloudFront also supports Geo Restriction which restricts access to your content based on the geographic location of your viewers To control access to the original copies of your objects in Amazon S3 Amazon CloudFront allows you to create one or more “Origin Access Identities” and associate these with your distributions When an Origin Access Identity is associated with an Amazon CloudFront distribution the distribution will use that identity to retrieve objects from Amazon S3 You can then use Amazon S3’s ACL feature which limits access to that Origin Access Identity so the original copy of the object is not publicly readable To control who is able to download objects from Amazon CloudFront edge locations the service uses a signed URL verification system To use this system you first create a public private key pair and upload the public key to your account via the AWS Management Console Second you configure your Amazon CloudFront distribution to indicate which accounts you would authorize to sign requests – you can indicate up to five AWS Accounts you trust to sign requests Third as you receive requests you will create po licy documents indicating the conditions under which you want Amazon CloudFront to serve your content These policy documents can specify the name of the object that is requested the date and time of the request and the source IP (or CIDR range) of the c lient making the request You then calculate the SHA1 hash of your policy document and sign this using your private key Finally you include both the encoded policy document and the signature as query string parameters when you reference your objects Whe n Amazon CloudFront receives a request it will decode the signature using your public key Amazon CloudFront will only serve requests that have a valid policy document and matching signature Note that private content is an optional feature 
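One way to generate the signed URLs described above is with the CloudFrontSigner helper in botocore together with the cryptography library, as sketched below. The key pair ID, private-key file name, object URL, and one-hour expiry are illustrative assumptions; a custom policy with date and IP-range conditions could be supplied instead of the canned policy shown.

import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message, private_key_path="cloudfront-private-key.pem"):
    """Sign the CloudFront policy with the key pair's private key (SHA-1/RSA)."""
    with open(private_key_path, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None,
                                                 backend=default_backend())
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


# "APKAEXAMPLE" is a placeholder CloudFront key pair ID.
signer = CloudFrontSigner("APKAEXAMPLE", rsa_signer)

# Canned policy: the URL is valid for one hour.
url = signer.generate_presigned_url(
    "https://dxxxxx.cloudfront.net/image.jpg",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(url)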
that must be enabled when you set up your CloudFront distribution Content delivered without this feature enabled will be publicly readable Amazon CloudFront provides the option to transfer content over an encrypted connection (HTTPS) By default CloudFront will acc ept requests over both HTTP and HTTPS protocols However you can also configure CloudFront to require HTTPS for all requests or have CloudFront redirect HTTP requests to HTTPS You can even configure CloudFront distributions to allow HTTP for some objects but require HTTPS for other objects ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 31 of 45 Figure 7: A mazon CloudFront Encrypted Transmission You can configure one or more CloudFront origins to require CloudFront fetch objects from your origin using the protocol that the viewer used to request the objects For example when you use this CloudFront setting and the viewer uses HTTPS to request an object from CloudFront CloudFront also uses HTTPS to forward the request to your origin Amazon CloudFront supports the TLSv11 and TLSv12 protocols for HTTPS connections between CloudFront and your custom origin webserver (along with SSLv3 and TLSv10) and a selection of cipher suites that includes the Elliptic Curve Diffie Hellman Ephemeral (ECDHE) protocol on connections to both viewers and the origi n ECDHE allows SSL/TLS clients to provide Perfect Forward Secrecy which uses session keys that are ephemeral and not stored anywhere This helps prevent the decoding of captured data by unauthorized third parties even if the secret long term key itself is compromised Note that if you're using your own server as your origin and you want to use HTTPS both between viewers and CloudFront and between CloudFront and your origin you must install a valid SSL certificate on the HTTP server that is signed by a th ird party certificate authority for example VeriSign or DigiCert By default you can deliver content to viewers over HTTPS by using your CloudFront distribution domain name in your URLs; for example https://dxxxxxcloudfrontnet/imagejpg If you want to deliver your content over HTTPS using your own domain name and your own SSL certificate you can use SNI Custom SSL or Dedicated IP Custom SSL With Server Name Identification (SNI) Custom SSL CloudFront relies on the SNI extension of the TLS protocol which is supported by most modern web browsers However some users may not be able to access your content because some older browsers do not support SNI With Dedicated IP Custom SSL CloudFront dedicates IP addresses to your SSL certificate at each CloudFront edge location so that CloudFront can associate the incoming requests with the proper SSL certificate Amazon CloudFront access logs contain a comprehensive set of information about requests fo r content including the object requested the date and time of the request ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 32 of 45 the edge location serving the request the client IP address the referrer and the user agent To enable access logs just specify the name of the Amazon S3 bucket to store the logs in when you configure your Amazon CloudFront distribution AWS Direct Connect Security With AWS Direct Connect you can provision a direct link between your internal network and an AWS region using a high throughput dedicated connection Doing this may help reduce your network costs improve throughput or provide a more consistent network experience With this dedicated connection in place you can then create 
virtual interfaces directly to the AWS cloud (for example to Amazon EC2 and Amazon S3) With AWS Direct Connect you bypass Internet service providers in your network path You can procure rack space within the facility housing the AWS Direct Connect location and deploy your equipment nearby Once deployed you can connect this equipment to AWS Direct Connect using a cross connect Each AWS Direct Connect location enables connectivity to the geographically nearest A WS region You can access all AWS services available in that region AWS Direct Connect locations in the US can also access the public endpoints of the other AWS regions using a public virtual interface Using industry standard 8021q VLANs the dedicated connection can be partitioned into multiple virtual interfaces This allows you to use the same connection to access public resou rces such as objects stored in Amazon S3 using public IP address space and private resources such as Amazon EC2 instances running within an Amazon VPC using private IP space while maintaining network separation between the public and private environments AWS Direct Connect requires the use of the Border Gateway Protocol (BGP) with an Autonomous System Number (ASN) To create a virtual interface you use an MD5 cryptographic key for message authorization MD5 creates a keyed hash using your secret key Y ou can have AWS automatically generate a BGP MD5 key or you can provide your own Further Reading https://awsamazoncom/security/security resources/ Introduction to AWS Security Processes Overview of AWS Security Storage Services Overview of AWS Security Database Services Overview of AWS Security Compute Services Overview of AWS Security Application Services Overview of AWS Security Analytics Mobile and Application Services Overview of AWS Security – Network Services ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 33 of 45 Appen dix – Glos sary of Terms Access Key ID : A string that AWS distributes in order to uniquely identify each AWS user; it is an alphanumeric token associated with your Secret Access Key Access control list (ACL) : A list of permissions or rules for accessing an object or network resource In Amazon EC2 security groups act as ACLs at the instance level controlling which users have permission to access specific instances In Amazon S3 you can use ACLs to give read or write access on buckets or objects to groups of users In Amazon VPC ACLs act like network firewalls and control access at the subnet level AMI : An Amazon Machine Image (AMI) is an encrypted machine image stored in Amazon S3 It contains all the information necessary to boot instances of a customer’s software API : Application Programming Interface (API) is an interface in computer science that defines the ways by which an application program may request services from libraries and/or operating systems Archive : An archive in Amazon Glacier is a file that you want to store and is a base unit of storage in Amazon Glacier It can be any data such as a photo video or document Each archive has a unique ID and an optional description Authentication : Authentication is the process of determining whether someone or something is in fact who or what it is declared to be Not only do users need to be authenticat ed but every program that wants to call the functionality exposed by an AWS API must be authenticated AWS requires that you authenticate every request by digitally signing it using a cryptographic hash function Auto Scaling : An AWS service that allows customers to 
automatically scale their Amazon EC2 capacity up or down according to conditions they define Availability Zone : Amazon EC2 locations are composed of regions and availability zones Availability zones are distinct locations that are engineere d to be insulated from ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 34 of 45 failures in other availability zones and provide inexpensive low latency network connectivity to other availability zones in the same region Bastion host : A computer specifically configured to withstand attack usually placed on the external/public side of a demilitarized zone (DMZ) or outside the firewall You can set up an Amazon EC2 instance as an SSH bastion by setting up a public subnet as part of an Amazon VPC Bucket : A container for objects stored in Amazon S3 Every object is contained within a bucket For example if the object named photos/puppyjpg is stored in the johnsmith bucket then it is addressable using the URL: http ://johnsmiths3amazonawscom/photos/pupp yjpg Certific ate: A credential that some AWS products use to authenticate AWS Accounts and users Also known as an X509 certificate The certificate is paired with a private key CIDR Block : Classless Inter Domain Routing Block of IP addresses Client side encryption : Encrypting data on the client side before uploading it to Amazon S3 CloudFormation: An AWS provisioning tool that lets customers record the baseline configuration of the AWS resources needed to run their applications so that they can provision and update them in an orderly and predictable fashion Cognito : An AWS service that simplifies the task of authenticating users and storing managing and syncing their data across multiple devices platforms and applications It works with multiple existing identity providers and also supports unauthenticated guest users Credentials : Items that a user or process must have in order to confirm to AWS services during the authentication process that they are au thorized to access the service AWS credentials include passwords secret access keys as well as X509 certificates and multi factor tokens Dedicated instance : Amazon EC2 instances that are physically isolated at the host hardware level (ie they will run on single tenant hardware) Digital signature : A digital signature is a cryptographic method for demonstrating the authenticity of a digital message or document A valid digital signature gives a recipient reason to believe that the message was create d by an authorized sender and that it was not altered in transit Digital signatures are used ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 35 of 45 by customers for signing requests to AWS APIs as part of the authentication process Direct Connect Service : Amazon service that allows you to provision a direct link between your internal network and an AWS region using a high throughput dedicated connection With this dedicated connection in place you can then create logical connections directly to the AWS cloud (for example to Amazon EC2 and Amazon S3) and Amazon VPC bypassing Internet service providers in the network path DynamoDB Service : A managed NoSQL database service from AWS that provides fast and predictable performance with seamless scalability EBS : Amazon Elastic Block Store (EBS) provides block level storage volumes for use with Amazon EC2 instances Amazon EBS volumes are off instance storage that persists independently from the life of an instance ElastiCache: An AWS web service that allows you 
to set up manage and scale distributed in memory cache environments in the cloud The service improves the performance of web applications by allowing you to retrieve information from a fast managed in memory caching system instead of relying entirely on slower disk based databases Elastic Beanstalk : An AWS deployment and management tool that automates the functions of capacity provisioning load balancing and auto scaling for customers’ applications Elastic IP Address : A static public IP address that you can assign to any instance in an Amazon VPC thereby making the instance public Elastic IP addresses also enable you to mask instance failures by rapidly remapping your public IP addresses to any instance in the VPC Elastic Load Balancing : An AWS service that is used to manage traffic on a fleet of Amazon EC2 instances distributing traffic to instances across all availability zones within a region Elastic Load Balancing has all the advantages of an on premises load balancer plus several security benefits such as taking over the encr yption/decryption work from EC2 instances and managing it centrally on the load balancer Elastic MapReduce (EMR) Service: An AWS service that utilizes a hosted Hadoop framework running on the web scale infrastructure of Amazon EC2 and Amazon S3 Elastic MapReduce enables customers to easily and cost effectively process extremely large quantities of data (“big data”) ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 36 of 45 Elastic Network Interface : Within an Amazon VPC an Elastic Network Interface is an optional second network interface that you can attach to an EC2 instance An Elastic Network Interface can be useful for creating a management network or using network or security appliances in the Amazon VPC It can be easily detached from an instance and reattached to another instance Endpoint : A URL that is the entry point for an AWS service To reduce data latency in your applications most AWS services allow you to select a regional endpoint to make your requests Some web services allow you to use a general endpoint that doesn't specify a region; these generic endpoints resolve to the service's us east1 endpoint You can connect to an AWS endpoint via HTTP or secure HTTP (HTTPS) using SSL Federated users : User systems or applications that are not currently authorized to access your AWS services but that you want to give temporary access to This access is provided using the AWS Security Token Service (STS) APIs Firewall : A hardware or software component that controls incoming and/or outgoing network traffic according to a specific set of rules Us ing firewall rules in Amazon EC2 you specify the protocols ports and source IP address ranges that are allowed to reach your instances These rules specify which incoming network traffic should be delivered to your instance (eg accept web traffic on port 80) Amazon VPC supports a complete firewall solution enabling filtering on both ingress and egress traffic from an instance The default group enables inbound communication from other members of the same group and outbound communication to any destin ation Traffic can be restricted by any IP protocol by service port as well as source/destination IP address (individual IP or Classless Inter Domain Routing (CIDR) block) Guest OS : In a virtual machine environment multiple operating systems can run on a single piece of hardware Each one of these instances is considered a guest on the host hardware and utilizes its own OS Hash : A cryptographic 
hash function is used to calculate a digital signature for signing requests to AWS APIs A cryptographic h ash is a one way function that returns a unique hash value based on the input The input to the hash function includes the text of your request and your secret access key The hash function returns a hash value that you include in the request as your signa ture HMAC SHA1/HMAC SHA256 : In cryptography a keyed Hash Message Authentication Code (HMAC or KHMAC) is a type of message authentication code (MAC) calculated using a specific algorithm involving a cryptographic hash function ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 37 of 45 in combination with a secr et key As with any MAC it may be used to simultaneously verify both the data integrity and the authenticity of a message Any iterative cryptographic hash function such as SHA 1 or SHA 256 may be used in the calculation of an HMAC; the resulting MAC algorithm is termed HMAC SHA1 or HMAC SHA256 accordingly The cryptographic strength of the HMAC depends upon the cryptographic strength of the underlying hash function on the size and quality of the key and the size of the hash output length in bits Hard ware security module (HSM) : An HSM is an appliance that provides secure cryptographic key storage and operations within a tamper resistant hardware device HSMs are designed to securely store cryptographic key material and use the key material without expo sing it outside the cryptographic boundary of the appliance The AWS CloudHSM service provides customers with dedicated single tenant access to an HSM appliance Hypervisor : A hypervisor also called Virtual Machine Monitor (VMM) is computer software/hardware platform virtualization software that allows multiple operating systems to run on a host computer concurrently Identity and Access Management (IAM) : AWS IAM enables you to create multiple users and manage the permissions for each of these users wi thin your AWS Account Identity pool : A store of user identity information in Amazon Cognito that is specific to your AWS Account Identity pools use IAM roles which are permissions that are not tied to a specific IAM user or group and that use temporary security credentials for authenticating to the AWS resources defined in the role Identity Provider : An online service responsible for issuing identification information for users who would like to interact with the service or with other cooperating serv ices Examples of identity providers include Facebook Google and Amazon Import/Export Service : An AWS service for transferring large amounts of data to Amazon S3 or EBS storage by physically shipping a portable storage device to a secure AWS facility Instance : An instance is a virtualized server also known as a virtual machine (VM) with its own hardware resources and guest OS In EC2 an instance represents one running copy of an Amazon Machine Image (AMI) IP address : An Internet Protocol (IP) address is a numerical label that is assigned ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 38 of 45 to devices participating in a computer network utilizing the Internet Protocol for communication between its nodes IP spoofing : Creation of IP packets with a forged source IP address called spoofing with the p urpose of concealing the identity of the sender or impersonating another computing system Key : In cryptography a key is a parameter that determines the output of a cryptographic algorithm (called a hashing algorithm) A key pair is a 
set of security credentials you use to prove your identity electronically and consists of a public key and a private key Key rotation : The process of periodically changing the cryptographic keys used for encrypting data or digitally signing requests Just like changing pas swords rotating keys minimizes the risk of unauthorized access if an attacker somehow obtains your key or determines the value of it AWS supports multiple concurrent access keys and certificates which allows customers to rotate keys and certificates into and out of operation on a regular basis without any downtime to their application Mobile Analytics : An AWS service for collecting visualizing and understanding mobile application usage data It enables you to track customer behaviors aggregate metrics and identify meaningful patterns in your mobile applications Multi factor authentication (MFA) : The use of two or more authentication factors Authentication factors include something you know (like a password) or something you have (like a token that generates a random number) AWS IAM allows the use of a six digit single use code in addition to the user name and password credentials Customers get this single use code from an authentication device that they keep in their physical possession (ei ther a physical token device or a virtual token from their smart phone) Network ACLs : Stateless traffic filters that apply to all traffic inbound or outbound from a subnet within an Amazon VPC Network ACLs can contain ordered rules to allow or deny traf fic based upon IP protocol by service port as well as source/destination IP address Object : The fundamental entities stored in Amazon S3 Objects consist of object data and metadata The data portion is opaque to Amazon S3 The metadata is a set of nam evalue pairs that describe the object These include some default metadata such as the date last modified and standard HTTP metadata such as Content Type The developer can also specify custom metadata at the time the Object is stored ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 39 of 45 Paravirtualization : In computing paravirtualization is a virtualization technique that presents a software interface to virtual machines that is similar but not identical to that of the underlying hardware Peering : A VPC peering connection is a networking connection betw een two VPCs that enables you to route traffic between them using private IP addresses Instances in either VPC can communicate with each other as if they are within the same network Port scanning : A port scan is a series of messages sent by someone atte mpting to break into a computer to learn which computer network services each associated with a "well known" port number the computer provides Region: A named set of AWS resources in the same geographical area Each region contains at least two availab ility zones Replication : The continuous copying of data from a database in order to maintain a second version of the database usually for disaster recovery purposes Customers can use multiple AZs for their Amazon RDS database replication needs or use Read Replicas if using MySQL Relational Database Service (RDS) : An AWS service that allows you to create a relational database (DB) instance and flexibly scale the associated compute resources and storage capacity to meet application demand Amazon RDS i s available for Amazon Aurora MySQL PostgreSQL Oracle Microsoft SQL Server and MariaDB database engines Role : An entity in AWS IAM that has a set of permissions 
that can be assumed by another entity Use roles to enable applications running on your Amazon EC2 instances to securely access your AWS resources You grant a specific set of permissions to a role use the role to launch an Amazon EC2 instance and let EC2 automatically handle AWS credential management for your applications that run on Amazo n EC2 Route 53: An authoritative DNS system that provides an update mechanism that developers can use to manage their public DNS names answering DNS queries and translating domain names into IP address so computers can communicate with each other Secr et Access Key : A key that AWS assigns to you when you sign up for an AWS Account To make API calls or to work with the command line interface each AWS user needs the Secret Access Key and Access Key ID The user signs each request ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 40 of 45 with the Secret Access Key and includes the Access Key ID in the request To help ensure the security of your AWS Account the Secret Access Key is accessible only during key and user creation You must save the key (for example in a text file that you store securely) if you wa nt to be able to access it again Security group : A security group gives you control over the protocols ports and source IP address ranges that are allowed to reach your Amazon EC2 instances; in other words it defines the firewall rules for your instan ce These rules specify which incoming network traffic should be delivered to your instance (eg accept web traffic on port 80) Security Token Service (STS) : The AWS STS APIs return temporary security credentials consisting of a security token an Access Key ID and a Secret Access Key You can use STS to issue security credentials to users who need temporary access to your resources These users can be existing IAM users non AWS users (federated identities) systems or applications that need to a ccess your AWS resources Server side encryption (SSE) : An option for Amazon S3 storage for automatically encrypting data at rest With Amazon S3 SSE customers can encrypt data on upload simply by adding an additional request header when writing the object Decryption happens automatically when data is retrieved Service: Software or computing ability provided across a network (eg Amazon EC2 Amazon S3) Shard : In Amazon Kinesis a shard is a uniquely identified group of data records in an Amazon Kinesis stream A Kinesis stream is composed of multiple shards each of which provides a fixed unit of capacity Signature : Refers to a digital signature which is a mathematical way to confirm the authenticity of a digital message AWS uses signatures c alculated with a cryptographic algorithm and your private key to authenticate the requests you send to our web services Simple Data Base (Simple DB) : A non relational data store that allows AWS customers to store and query data items via web services req uests Amazon SimpleDB creates and manages multiple geographically distributed replicas of the customer’s data automatically to enable high availability and data durability Simple Email Service (SES) : An AWS service that provides a scalable bulk and tran sactional email sending service for businesses and developers In order to maximize deliverability and dependability for senders Amazon SES takes proactive ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 41 of 45 steps to prevent questionable content from being sent so that ISPs view the service as a trusted e mail origin Simple 
Mail Transfer Protocol (SMTP) : An Internet standard for transmitting email across IP networks SMTP is used by the Amazon Simple Email Service Customers who used Amazon SES can use an SMTP interface to send email but must connect to an SMTP endpoint via TLS Simple Notification Service (SNS) : An AWS service that makes it easy to set up operate and send notifications from the cloud Amazon SNS provides developers with the ability to publish messages from an application and immediate ly deliver them to subscribers or other applications Simple Queue Service (SQS) : A scalable message queuing service from AWS that enables asynchronous message based communication between distributed components of an application The components can be com puters or Amazon EC2 instances or a combination of both Simple Storage Service (Amazon S3) : An AWS service that provides secure storage for object files Access to objects can be controlled at the file or bucket level and can further restricted based on other conditions such as request IP source request time etc Files can also be encrypted automatically using AES 256 encryption Simple Workflow Service (SWF) : An AWS service that allows customers to build applications that coordinate work across distri buted components Using Amazon SWF developers can structure the various processing steps in an application as “tasks” that drive work in distributed applications Amazon SWF coordinates these tasks managing task execution dependencies scheduling and concurrency based on a developer’s application logic Single sign on: The capability to log in once but access multiple applications and systems A secure single sign on capability can be provided to your federated users (AWS and non AWS users) by creating a URL that passes the temporary security credentials to the AWS Management Console Snapshot : A customer initiated backup of an EBS volume that is stored in Amazon S3 or a customer initiated backup of an RDS database that is stored in Amazon RDS A snaps hot can be used as the starting point for a new EBS volume or Amazon RDS database or to protect the data for long term durability and recovery Secure Sockets Layer (SSL) : A cryptographic protocol that provides security ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 42 of 45 over the Internet at the Application Layer Both the TLS 10 and SSL 30 protocol specifications use cryptographic mechanisms to implement the security services that establish and maintain a secure TCP/IP connection The secure connection prevents eavesdropping tampering or message forgery You can connect to an AWS endpoint via HTTP or secure HTTP (HTTPS) using SSL Stateful firewall : In computing a stateful firewall (any firewall that performs stateful packet inspection (SPI) or stateful inspection) is a firewall that keeps track of the state of network connections (such as TCP streams UDP communication) traveling across it Storage Gateway : An AWS service that securely connects a customer’s on premises software appliance with Amazon S3 storage by using a VM that the custome r deploys on a host in their data center running VMware ESXi Hypervisor Data is asynchronously transferred from the customer’s on premises storage hardware to AWS over SSL and then stored encrypted in Amazon S3 using AES 256 Temporary security credenti als: AWS credentials that provide temporary access to AWS services Temporary security credentials can be used to provide identity federation between AWS services and non AWS users in your own identity and 
authorization system Temporary security credentials consist of security token an Access Key ID and a Secret Access Key Transcoder : A system that transcodes (converts) a media file (audio or video) from one format size or quality to another Amazon Elastic Transcoder makes it easy for customers to transcode video files in a scalable and cost effective fashion Transport Layer Security (TLS) : A cryptographic protocol that provides security over the Internet at the Application Layer Customers who used Amazon’s Simple Email Service must connect to an SMTP endpoint via TLS Tree hash : A tree hash is generated by computing a hash for each megabyte sized segment of the data and then combining the hashes in tree fashion to represent ever growing adjacent segments of the data Amazon Glacier checks the ha sh against the data to help ensure that it has not been altered en route Vault : In Amazon Glacier a vault is a container for storing archives When you create a vault you specify a name and select an AWS region where you want to create the vault Each vault resource has a unique address Versioning : Every object in Amazon S3 has a key and a version ID Objects with ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 43 of 45 the same key but different version IDs can be stored in the same bucket Versioning is enabled at the bucket layer using PUT Bucket versio ning Virtual Instance : Once an AMI has been launched the resulting running system is referred to as an instance All instances based on the same AMI start out identical and any information on them is lost when the instances are terminated or fail Virt ual MFA : The capability for a user to get the six digit single use MFA code from their smart phone rather than from a token/fob MFA is the use of an additional factor (the single use code) in conjunction with a user name and password for authentication Virtual Private Cloud (VPC) : An AWS service that enables customers to provision an isolated section of the AWS cloud including selecting their own IP address range defining subnets and configuring routing tables and network gateways Virtual Private N etwork (VPN): The capability to create a private secure network between two locations over a public network such as the Internet AWS customers can add an IPsec VPN connection between their Amazon VPC and their data center effectively extending their dat a center to the cloud while also providing direct access to the Internet for public subnet instances in their Amazon VPC In this configuration customers add a VPN appliance on their corporate data center side WorkSpaces : An AWS managed desktop service that enables you to provision cloud based desktops for your users and allows them to sign in using a set of unique credentials or their regular Active Directory credentials X50 9: In cryptography X509 is a standard for a Public Key Infrastructure (PKI) for single sign on and Privilege Management Infrastructure (PMI) X509 specifies standard formats for public key certificates certificate revocation lists attribute certificates and a certification path validation algorithm Some AWS products use X509 certificates instead of a Secret Access Key for access to certain interfaces For example Amazon EC2 uses a Secret Access Key for access to its Query interface but it uses a signing certificate for access to its SOAP interface and command line tool interface WorkDocs : An AWS managed enterprise storage and sharing service with feedback capabilities for user collaboration ArchivedAmazon 
Document Revisions

Jun 2016
• Updated compliance programs
• Updated regions

Nov 2014
• Updated compliance programs
• Updated shared security responsibility model
• Updated AWS Account security features
• Reorganized services into categories
• Updated several services with new features: CloudWatch, CloudTrail, CloudFront, EBS, ElastiCache, Redshift, Route 53, S3, Trusted Advisor, and WorkSpaces
• Added Cognito Security
• Added Mobile Analytics Security
• Added WorkDocs Security

Nov 2013
• Updated regions
• Updated several services with new features: CloudFront, Direct Connect, DynamoDB, EBS, ELB, EMR, Amazon Glacier, IAM, OpsWorks, RDS, Redshift, Route 53, Storage Gateway, and VPC
• Added AppStream Security
• Added CloudTrail Security
• Added Kinesis Security
• Added WorkSpaces Security

May 2013
• Updated IAM to incorporate roles and API access
• Updated MFA for API access for customer-specified privileged actions
• Updated RDS to add event notification, Multi-AZ, and SSL to SQL Server 2012
• Updated VPC to add multiple IP addresses, static routing VPN, and VPC By Default
• Updated several other services with new features: CloudFront, CloudWatch, EBS, ElastiCache, Elastic Beanstalk, Route 53, S3, Storage Gateway
• Added Glacier Security
• Added Redshift Security
• Added Data Pipeline Security
• Added Transcoder Security
• Added Trusted Advisor Security
• Added OpsWorks Security
• Added CloudHSM Security
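Several of the terms defined in the glossary above, temporary security credentials, the Security Token Service (STS), and server-side encryption, typically appear together in day-to-day API usage. The following sketch is an illustration added alongside the whitepaper text, not part of it: it uses the AWS SDK for Python (boto3) to obtain temporary credentials by assuming an IAM role and then uploads an object to Amazon S3 with SSE enabled. The role ARN, bucket name, and object key are hypothetical placeholders.

```python
# Illustrative sketch (not from the whitepaper): obtain temporary security
# credentials via AWS STS AssumeRole, then write an S3 object with
# server-side encryption (SSE). Role ARN, bucket, and key are placeholders.
import boto3

ROLE_ARN = "arn:aws:iam::123456789012:role/ExampleUploadRole"  # hypothetical
BUCKET = "example-secure-bucket"                                # hypothetical
KEY = "reports/example.txt"                                     # hypothetical


def upload_with_temporary_credentials():
    # 1. Request temporary security credentials (access key, secret key, session token).
    sts = boto3.client("sts")
    credentials = sts.assume_role(
        RoleArn=ROLE_ARN,
        RoleSessionName="glossary-demo",
        DurationSeconds=900,  # shortest allowed credential lifetime
    )["Credentials"]

    # 2. Sign S3 requests with the temporary credentials instead of long-lived keys.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=credentials["AccessKeyId"],
        aws_secret_access_key=credentials["SecretAccessKey"],
        aws_session_token=credentials["SessionToken"],
    )

    # 3. Store the object encrypted at rest using Amazon S3 server-side encryption.
    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=b"example payload",
        ServerSideEncryption="AES256",
    )


if __name__ == "__main__":
    upload_with_temporary_credentials()
```

The credentials returned by AssumeRole expire automatically, which is the point of the pattern: no long-lived secret key ever needs to be embedded in the application.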
|
General
|
consultant
|
Best Practices
|
Introduction_to_AWS_Security
|
Introduction to AWS Security January 2020 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Security of the AWS Infrastructure 1 Security Products and Features 2 Infrastructure Security 2 Inventory and Configuration Management 2 Data Encryption 3 Identity and Acces s Control 3 Monitoring and Logging 4 Security Products in AWS Marketplace 4 Security Guidance 4 Compliance 6 Further Reading 7 Document Revisions 8 Abstract Amazon Web Services (AWS) delivers a scalable cloud computing platform designed for high availability and dependability providing the tools that enable you to run a wide range of applications Helping to protect the confidentiality integrity and availability of your systems and data is of the utmost importance to AWS as is maintaining your trust and confidence This document is intended to provide an introduction to AWS’s approach to security including the controls in the AWS environment and some of the products and features that AWS makes available to customers to meet your security objectives Amazon Web Services Introduction to AWS Security Page 1 Security of the AWS Infrastructure The AWS infrastructu re has been architected to be one of the most flexible and secure cloud computing environments available today It is designed to provide an extremely scalable highly reliable platform that enables customers to deploy applications and data quickly and sec urely This infrastructure is built and managed not only according to security best practices and standards but also with the unique needs of the cloud in mind AWS uses redundant and layered controls continuous validation and testing and a substantial amount of automation to ensure that the underlying infrastructure is monitored and protected 24x7 AWS ensures that these controls are replicated in every new data center or service All AWS customers benefit from a data center and network architecture bui lt to satisfy the requirements of our most security sensitive customers This means that you get a resilient infrastructure designed for high security without the capital outlay and operational overhead of a traditional data center AWS operates under a shared security responsibility model where AWS is responsible for the security of the underlying cloud infrastructure and you are responsible for securing workloads you deploy in AWS ( Figure 1) This gives you the flexibility and agility you need to implement the most applicable security controls for your business functions in the AWS environment You can tightly restrict access to environments that process sensitive data or deploy less stringent controls for information you want to make public Figure 1: AWS Shared Security Responsibility Model Amazon Web Services Introduction to AWS Security Page 2 Security Products and Features AWS and its partners offer a w ide range 
of tools and features to help you to meet your security objectives These tools mirror the familiar controls you deploy within your on premises environments AWS provides security specific tools and features across network security configuration management access control and data security In addition AWS provides monitoring and logging tools to can provide full visibility into what is happening in your environment Infrastructure Security AWS provides several security capabilities and services to increase privacy and control network access These include: • Network firewalls built into Amazon VPC let you create private networks and control access to your instances or applications Customers can control encryption in transit with TLS acros s AWS services • Connectivity options that enable private or dedicated connections from your office or on premises environment • DDoS mitigation technologies that apply at layer 3 or 4 as well as layer 7 These can be applied as part of application and con tent delivery strategies • Automatic encryption of all traffic on the AWS global and regional networks between AWS secured facilities Inventory and Configuration Management AWS offers a range of tools to allow you to move fast while still enabling you to ensure that your cloud resources comply with organizational standards and best practices These include: • Deployment tools to manage the creation and decommissioning of AWS resources according to organization standards • Inventory and configuration managemen t tools to identify AWS resources and then track and manage changes to those resources over time • Template definition and management tools to create standard preconfigured hardened virtual machines for EC2 instances Amazon Web Services Introduction to AWS Security Page 3 Data Encryption AWS offers you the ab ility to add a layer of security to your data at rest in the cloud providing scalable and efficient encryption features These include: • Data at rest encryption capabilities available in most AWS services such as Amazon EBS Amazon S3 Amazon RDS Amazon Redshift Amazon ElastiCache AWS Lambda and Amazon SageMaker • Flexible key management options including AWS Key Management Service that allow you to choose whether to have AWS manage the encryption keys or enable you to keep complete control over your own keys • Dedicated hardware based cryptographic key storage using AWS CloudHSM allowing you to help satisfy your compliance requirements • Encrypted message queues for the transmission of sensitive data using server side encryption (SSE) for Amazon SQS In addition AWS provides APIs for you to integrate encryption and data protection with any of the services you develop or deploy in an AWS environment Identity and Access Control AWS offers you capabilities to define enforce and manage user access policie s across AWS services These include: • AWS Identity and Access Management (IAM) lets you define individual user accounts with permissions across AWS resources AWS Multi Factor Authentication for privileged accounts including options for software and hardware based authenticators IAM can be used to grant your employees and applicat ions federated access to the AWS Management Console and AWS service APIs using your existing identity systems such as Microsoft Active Directory or other partner offering • AWS Directory Service allows you to integrate and federate with corporate directories to reduce administrative overhead and improve end user experience • AWS Single Si gnOn (AWS SSO ) allows you to manage SSO 
access and user permissions to all of your accounts in AWS Organizations centrally AWS provides native identity and access management integration across many of its services plus API integration with any of your own applications or services Amazon Web Services Introduction to AWS Security Page 4 Monitoring and Logging AWS provides tools and features that enable you to see what’s happening in your AWS environment These include: • With AWS CloudTrail you can monitor your AWS deployments in the cloud by getting a history of AWS API calls for your account including API calls made via the AWS Management Console the AWS SDKs the command lin e tools and higher level AWS services You can also identify which users and accounts called AWS APIs for services that support CloudTrail the source IP address the calls were made from and when the calls occurred • Amazon CloudWatch provides a reliable scalable and flexible monitoring solution that you can start using within minutes You no longer need to set up manage and scale your own monitoring systems and infrastructure • Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads Amazon GuardDuty exposes notifications via Amazon CloudWatch so you can trigger an automated response or notify a human These tools and features give you the visibility you need to spot issues before they impact the business and allow you to improve security posture and reduce the risk profile of your environment Security Products in AWS Marketplace Moving production workloads to AWS can enable organizations to improve agility scalability innovation and cost savings — while maintaining a secure environment AWS Marketplace offers security industry leading products that are equivalent identical to or integrate with existing controls in your on premises environments These products complemen t the existing AWS services to enable you to deploy a comprehensive security architecture and a more seamless experience across your cloud and on premises environments Security Guidance AWS provides customers with guidance and expertise through online too ls resources support and professional services provided by AWS and its partners Amazon Web Services Introduction to A WS Security Page 5 AWS Trusted Advisor is an online tool that acts like a customized cloud expert helping you to configure your resources to follow best practices Trusted Advisor inspects y our AWS environment to help close security gaps and finds opportunities to save money improve system performance and increase reliability AWS Account Teams provide a first point of contact guiding you through your deployment and implementation and po inting you toward the right resources to resolve security issues you may encounter AWS Enterprise Support provides 15 minute response time and is available 24×7 by phone chat or email; along with a dedicated Technical Account Manager This concierge ser vice ensures that customers’ issues are addressed as swiftly as possible AWS Partner Network offers hundreds of industry leading products that are equivalent identical to or integrated w ith existing controls in your on premises environments These products complement the existing AWS services to enable you to deploy a comprehensive security architecture and a more seamless experience across your cloud and on premises environments as well as hundreds of certified AWS Consulting Partners worldwide to help with your security and compliance needs 
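The monitoring and logging services described earlier in this section are all accessible programmatically, which is how most customers wire them into dashboards and automated responses. The short sketch below is illustrative and not part of the original whitepaper; it uses the AWS SDK for Python (boto3) to pull recent AWS CloudTrail console sign-in events and list current Amazon GuardDuty findings, and it assumes configured credentials, a hypothetical region, and that GuardDuty is already enabled in the account.

```python
# Illustrative sketch (not from the whitepaper): query CloudTrail events and
# GuardDuty findings with boto3. Assumes configured AWS credentials and that
# GuardDuty is already enabled in the target region.
from datetime import datetime, timedelta, timezone

import boto3

REGION = "us-east-1"  # hypothetical region for the example


def recent_console_logins(hours=24):
    """Return CloudTrail ConsoleLogin events from the last `hours` hours."""
    cloudtrail = boto3.client("cloudtrail", region_name=REGION)
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    response = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
        StartTime=start,
        MaxResults=50,
    )
    return response.get("Events", [])


def current_guardduty_findings():
    """Return details of current GuardDuty findings for every detector in the region."""
    guardduty = boto3.client("guardduty", region_name=REGION)
    findings = []
    for detector_id in guardduty.list_detectors().get("DetectorIds", []):
        finding_ids = guardduty.list_findings(DetectorId=detector_id, MaxResults=50).get("FindingIds", [])
        if finding_ids:
            details = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
            findings.extend(details.get("Findings", []))
    return findings


if __name__ == "__main__":
    for event in recent_console_logins():
        print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
    for finding in current_guardduty_findings():
        print(finding["Severity"], finding["Type"], finding["Title"])
```

In practice you would more often let GuardDuty publish findings through Amazon CloudWatch Events and trigger an automated response, as the section notes, rather than polling like this; the sketch simply shows that the same data is available on demand through the APIs.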
AWS Professional Services houses a Security Risk and Compliance specialty practice to help you d evelop confid ence and technical capability when m igrating your most sensitive workloads to the AWS Cloud AWS Professional Services helps customers develop securi ty policies and practice s based on well proven designs and helps ensure that cus tomers’ security design meets internal and external compliance requirements AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find test buy and deploy software that runs on AWS AWS Marketplace Security products complement the existing AWS services to enable you to deploy a comprehensive security architecture and a more seamless experience across your cloud and on premises environments AWS Security Bulletins provides security bulletins around current vulnerabilities and threats and enables customers to work with AWS security experts to address concerns like report ing abuse vulnerabilities and penetration testing We also have online resources for vulnerability reporting AWS Security Documentation shows how to configure AWS services to meet your security and compliance objectives AWS customers benefit from a data center and Amazon Web Services Introduction to AWS Security Page 6 network architecture that are built to meet the requirements of the most security sensitive organizations AWS Well Architected Framework helps cloud architects build secure high performing resilient and efficient infrastructure for their applications The AWS Well Architected Framework includes a security pillar that focuses on protecting information and systems Key topics include confidentiality and integrity of data identifying and managing who can do what with privilege management protecting systems and establ ishing controls to detect security events Customers can use the Well Architected service from the console or engage the services of one of the APN partners to assist them AWS Well Architected Tool helps you review the state of your workloads and compares them to the latest AWS architectural best practices This free tool is available in the AWS Management Console and after answering a set of questions regarding operational excellence security reliability performance efficiency and cost optimization The AWS Well Architected Tool then provides a plan on how to architect for the cloud using established best practices Compliance AWS Compliance empowers customers to understand the robust controls in place at AWS to maintain security a nd data protection in the AWS Cloud When systems are built in the AWS Cloud AWS and customers share compliance responsibilities AWS computing environments are continuously audited with certifications from accreditation bodies across geographies and ver ticals including SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70) SOC 2 SOC 3 ISO 9001 / ISO 27001 FedRAMP DoD SRG and PCI DSS Level 1i Additionally AWS also has assurance programs that provide templates and control mappings to help customers establish t he compliance of their environments running on AWS for a full list of programs see AWS Compliance Programs We can confirm that all AWS services can be used in compliance with the GDPR This means that in addition to benefiting from all of the measures that AWS already takes to maintain services security customers can deploy AWS services as a part of their GDPR compliance plans AWS offers a GDPR compliant Data Processing Addendum (GDPR DPA) e nabling you to comply with GDPR 
contractual obligations The AWS GDPR DPA is incorporated into the AWS Service Terms and applies automatically to all customers globally who require it to comply with the GDPR Amazoncom Inc is certified under the EUUS P rivacy Shield and AWS is covered under this certification This helps Amazon Web Services Introduction to AWS Security Page 7 customers who choose to transfer personal data to the US to meet their data protection obligations Amazoncom Inc’s certification can be found on the EU US Privacy Shield website: https://wwwprivacyshieldgov/list By operating in an accredited environment customers reduce the scope and cost of audits they need to perform AWS continuously undergoes assessments of its underlying infra structure —including the physical and environmental security of its hardware and data centers —so customers can take advantage of those certifications and simply inherent those controls In a traditional data center common compliance activities are often ma nual periodic activities These activities include verifying asset configurations and reporting on administrative activities Moreover the resulting reports are out of date before they are even published Operating in an AWS environment allows customers to take advantage of embedded automated tools like AWS Security Hub AWS Config and AWS CloudTrail for validating compliance These tools reduce the effort needed to perform audits since these tasks become routine ongoing and automated By spending les s time on manual activities you can help evolve the role of compliance in your company from one of a necessary administrative burden to one that manages your risk and improves your security posture Further Reading For additional information see the fol lowing resources: For information on … See Key topics research areas and training opportunities for cloud security on AWS AWS Cloud Security Learning The AWS Cloud Adoption Framework which organizes guidance into six areas of focus: Business People Governance Platform Security and Operations AWS Cloud Adoption Framework Specific controls in place at AWS; how to integrate AWS into your existing framework Amazon Web Services: Risk and Compliance Best practices guidance on how to deploy security controls within an AWS environment AWS Security Best Practices Amazon Web Services Introduction to AWS Security Page 8 For information on … See AWS Well Architected Framework security pillar AWS Well Architected Framework Security Pillar Document Revisions Date Description January 2020 Updated for latest services resources and technologies July 2015 First publication
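The compliance discussion above points to AWS Security Hub, AWS Config, and AWS CloudTrail as automated alternatives to manual, periodic audit activities. As one illustration of what that automation can look like (this example is an addition, not part of the whitepaper), the boto3 sketch below lists noncompliant AWS Config rules and failed Security Hub compliance findings; it assumes both services are already enabled in the account and region, and the region name is a placeholder.

```python
# Illustrative sketch (not from the whitepaper): use boto3 to surface
# compliance signals from AWS Config and AWS Security Hub. Assumes both
# services are already enabled in the account/region being queried.
import boto3

REGION = "us-east-1"  # hypothetical region for the example


def noncompliant_config_rules():
    """Return names of AWS Config rules currently evaluated as NON_COMPLIANT (first page)."""
    config = boto3.client("config", region_name=REGION)
    response = config.describe_compliance_by_config_rule(ComplianceTypes=["NON_COMPLIANT"])
    return [item["ConfigRuleName"] for item in response.get("ComplianceByConfigRules", [])]


def failed_securityhub_findings(limit=20):
    """Return Security Hub findings whose compliance status is FAILED."""
    securityhub = boto3.client("securityhub", region_name=REGION)
    response = securityhub.get_findings(
        Filters={"ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}]},
        MaxResults=limit,
    )
    return response.get("Findings", [])


if __name__ == "__main__":
    for rule_name in noncompliant_config_rules():
        print("Noncompliant Config rule:", rule_name)
    for finding in failed_securityhub_findings():
        print("Failed control:", finding.get("Title"))
```

Running checks like these on a schedule, instead of assembling evidence by hand before an audit, is what turns compliance into the routine, ongoing, automated activity the section describes.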
|
General
|
consultant
|
Best Practices
|
Introduction_to_DevOps_on_AWS
|
Introduction to DevOps on AWS October 2020 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Continuous Integration 2 AWS CodeCommit 2 AWS CodeBuild 3 AWS CodeArtifact 3 Continuous Delivery 4 AWS CodeDeploy 4 AWS CodePipeline 5 Deployment Strateg ies 6 BlueGreen Deployments 7 Canary Deployments 7 Linear Deployments 7 Allatonce Deployments 7 Deployment Strategies Matrix 7 AWS Elastic Beanstalk Deployment Strategies 8 Infrastructure as Code 9 AWS CloudFormation 10 AWS Cloud Development Kit 12 AWS Cloud Development Kit for Kubernetes 12 Automation 12 AWS OpsWorks 13 AWS Elastic Beanstalk 14 Monitoring and Logging 15 Amazon CloudWatch Metrics 15 Amazon CloudWatch Alarms 15 Amazon CloudWatch Logs 15 Amazon CloudWatch Logs Insights 16 Amazon CloudWatch Events 16 Amazon EventBridge 16 AWS CloudTrail 17 Communication and Collaboration 18 TwoPizza Teams 18 Security 19 AWS Shared Responsibilit y Model 19 Identity Access Management 20 Conclusion 21 Contributors 21 Document Revisions 22 Abstract Today more than ever enterpr ises are embarking on their digital transformation journey to build deeper connections with their customers to achieve sustainable and enduring business value Organizations of all shapes and sizes are disrupting their competitors and entering new markets by innovating more quickly than ever before For these organization s it is important to focus on innovation and software disruption making it critical to streamline their software delivery Organizations that shorten their time from idea to production making speed and agility a priority could be tomorrow's disruptors While there are several factors to consider in becoming the next digital disruptor this white paper focuses on DevOps and the services and features in the AWS platform that will help increase an organization's ability to deliver applications and services at a high velocity Amazon Web Services Introduction to DevOps on AWS 1 Introduction DevOps is the combination of cultural engineering practices and pat terns and tools that increase an organization's ability to deliver applications and services at high velocity and better quality Over time several essential practices have emerged when adopting DevOps: Continuous Integration Continuous Delivery Infrast ructure as Code and Monitoring and Logging This paper highlights AWS capabilities that help you accelerate your DevOps journey and how AWS services can help remove the undifferentiated heavy lifting associated with DevOps adaptation We also highlight h ow to build a continuous integration and delivery capability without managing servers or build nodes and h ow to leverage Infrastructure as Code to provision and manage your cloud resources in a consistent and repeatable manner • Continuous Integration : is a software development practice where 
developers regularly merge their code changes into a central repository after which automated builds and tests are run • Continuous Delivery : is a software development practice where code changes are automatically bui lt tested and prepared for a release to production • Infrastructure as Code : is a practice in which infrastructure is provisioned and managed using code and software development techniques such as version control and continuous integration • Monitoring a nd Logging : enables organizations to see how application and infrastructure performance impacts the experience of their product’s end user • Communication and Collaboration : practices are established to bring the teams closer and by building workflows and d istributing the responsibilities for DevOps • Security : should be a cross cutting concern Your continuous integration and continuous delivery ( CI/CD ) pipelines and related services should be safeguarded and proper access control permissions should be setup An examination of each of these principles reveals a close connection to the offerings available from Amazon Web Services (AWS) Amazon Web Services Introduction to DevOps on AWS 2 Continuous Integration Continuous Integration (CI) is a software development practice where developers regularly merge their code changes into a central code repository after which automated builds and tests are run CI helps find and address bugs quicker improve software qual ity and reduce the time it takes to validate and release new software updates AWS offers the following three services for continuous integration: AWS CodeCommit AWS CodeCommit is a secure highly scalable managed source control service that hosts private git repositories CodeCommit eliminates the need for you to operate your own source control system and there is no hardware to provision and scale or software to install conf igure and operate You can use CodeCommit to store anything from code to binaries and it supports the standard functionality of Git allowing it to work seamlessly with your existing Git based tools Your team can also use CodeCommit’s online code tools to browse edit and collaborate on projects AWS CodeCommit has several benefits: Collaboration AWS CodeCommit is designed for collaborative software development You can easily commit branch and merge your code enabling you to easily maintain control of your team’s projects CodeCommit also supports pull requests which provide a mechanism to request code reviews and discuss code with collaborators Encryption You can transfer your files to and from AWS CodeCommit using HTTPS or SSH as you prefer Your repositories are also automatically encrypted at rest through AWS Key Management Service (AWS KMS) using customer specific keys Access Control AWS CodeCommit uses AWS Identity and Access Management (IAM) to control and monitor who can access your data as well as how when and where they can access it CodeCommit also helps you monitor your repositories through AWS CloudTrail and Amazon CloudWatch High Availability and Durability AWS CodeCommit stores your repositories in Amazon S imple Storage Service (Amazon S 3) and Amazon DynamoDB Your encrypted data is redundantly stored across multiple facilities This architecture increases the availability and durability of your repository data Amazon Web Services Introduction to DevOps on AWS 3 Notifications and Custom Scripts You can now receive notifications for events impacting your repositories Notifications will come in the form of Amazon S imple Notification 
Service (Amazon S NS) notifications Each notification will include a stat us message as well as a link to the resources whose event generated that notification Additionally using AWS CodeCommit repository triggers you can send notifications and create HTTP webhooks with Amazon SNS or invoke AWS Lambda functions in response to the repository events you choose AWS CodeBuild AWS CodeBuild is a fully managed continuous integration service that compiles source code runs tests and pro duces software packages that are ready to deploy You don’t need to provision manage and scale your own build servers CodeBuild can use either of GitHub GitHub Enterprise BitBucket AWS CodeCommit or Amazon S3 as a source provider CodeBuild scales c ontinuously and can processes multiple builds concurrently CodeBuild offers various pre configured environments for various version of Windows and Linux Customers can also bring their customized build environments as docker containers CodeBuild also int egrates with open source tools such as Jenkins and Spinnaker CodeBuild can also create reports for unit functional or integration tests These reports provide a visual view of how many tests cases were executed and how many passed or failed The build process can also be executed inside a n Amazon Virtual Private Cloud (Amazon VPC) which can be helpful if your integration services or databases are deployed inside a VPC With AWS CodeBuild your build artifacts are encrypted with customer specific keys that are managed by the KMS CodeBuild is int egrated with IAM so you can assign user specific permissions to your build projects AWS Code Artifact AWS CodeArtifact is a fully managed artifact repository service that can be used by organizations se curely store publish and share software packages used in their software development process CodeArtifact can be configured to automatically fetch software packages and dependencies from public artifact repositories so developers have access to the lates t versions Amazon Web Services Introduction to DevOps on AWS 4 Software development teams are increasingly relying on open source packages to perform common tasks in their application package It has now become critical for the software development teams to maintain control on a particular version of the o pen source software is vulnerability free With CodeArt ifact you can set up controls to enforce the above CodeArtifact works with commonly used package managers and build tools like Maven Gradle npm yarn twine and pip making it easy to integrate int o existing development workflows Continuous Delivery Continuous delivery is a software development practice where code changes are automatically prepared for a release to production A pillar of modern application development continuous delivery expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage When properly implemented developers will always have a deployment ready build artifact that has passed through a sta ndardized test process Continuous delivery lets developers automate testing beyond just unit tests so they can verify application updates across multiple dimensions before deploying to customers These tests may include UI testing load testing integrat ion testing API reliability testing etc This helps developers more thoroughly validate updates and pre emptively discover issues With the cloud it is easy and cost effective to automate the creation and replication of multiple environments for 
testing which was previously difficult to do onpremises AWS offers the following services for continuous delivery : • AWS CodeBuild • AWS CodeDeploy • AWS CodePipeline AWS CodeDeploy AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon E lastic Compute Cloud (Amazon E C2) AWS Fargate AWS Lambda and your on premises servers AWS CodeDeploy makes it easier for you to rapidly release new features he lps you avoid Amazon Web Services Introduction to DevOps on AWS 5 downtime during application deployment and handles the complexity of updating your applications You can use CodeDeploy to automate software deployments eliminating the need for error prone manual operations The service scales to match you r deployment needs CodeDeploy has several benefits that align with the DevOps principle of continuous deployment: Automated Deployments: CodeDeploy fully automates software deployments allowing you to deploy reliably and rapidly Centralized control: CodeDeploy enables you to easily launch and track the status of your application deployments through the AWS Management Console or the AWS CLI CodeDeploy gives you a detailed report enabling you to view when and to where each application revision was deployed You can also create push notifications to receive live updates about your deployments Minimize downtime: CodeDeploy helps maximize your application availability during the software dep loyment process It introduces changes incrementally and tracks application health according to configurable rules Software deployments can easily be stopped and rolled back if there are errors Easy to adopt: CodeDeploy works with any application and pr ovides the same experience across different platforms and languages You can easily reuse your existing setup code CodeDeploy can also integrate with your existing software release process or continuous delivery toolchain (eg AWS CodePipeline GitHub Jenkins) AWS CodeDeploy supports multiple deployment options For more information see Deployment Strategies AWS CodePipeline AWS CodePipeline is a continuous delivery service that enables you to model visualize and automate the steps required to release your software With AWS CodePipeline you model the full release process for building your code deploying to preproduction environments testing your appli cation and releasing it to production AWS CodePipeline then builds tests and deploys your application according to the defined workflow every time there is a code change You can integrate partner tools and your own custom tools into any stage of the r elease process to form an end toend continuous delivery solution Amazon Web Services Introduction to DevOps on AWS 6 AWS CodePipeline has several benefits that align with the DevOps principle of continuous deployment: Rapid Delivery: AWS CodePipeline automates your software release process allowing you to rapidly release new features to your users With CodePipeline you can quickly iterate on feedback and get new features to your users faster Improved Quality: By automating your build test and release processes AWS CodePipeline enables you to increa se the speed and quality of your software updates by running all new changes through a consistent set of quality checks Easy to Integrate: AWS CodePipeline can easily be extended to adapt to your specific needs You can use the pre built plugins or your o wn custom plugins in any step of your release process For example you can pull your source code from 
GitHub, use your on-premises Jenkins build server, run load tests using a third-party service, or pass deployment information to your custom operations dashboard. Configurable Workflow: AWS CodePipeline enables you to model the different stages of your software release process using the console interface, the AWS CLI, AWS CloudFormation, or the AWS SDKs. You can easily specify the tests to run and customize the steps to deploy your application and its dependencies.

Deployment Strategies

Deployment strategies define how you want to deliver your software. Organizations follow different deployment strategies based on their business model. Some may choose to deliver only fully tested software, while others may want their users to provide feedback and evaluate features that are still under development (for example, beta releases). The following sections describe the most common deployment strategies.

In-Place Deployments
In this strategy, the application on each instance in the deployment group is stopped, the latest application revision is installed, and the new version of the application is started and validated. You can use a load balancer so that each instance is deregistered during its deployment and then restored to service after the deployment is complete. In-place deployments can be done all at once, assuming a service outage, or as a rolling update. AWS CodeDeploy and AWS Elastic Beanstalk offer deployment configurations for one at a time, half at a time, and all at once. These same deployment configurations for in-place deployments are also available within blue/green deployments.

Blue/Green Deployments
Blue/green, sometimes referred to as red/black deployment, is a technique for releasing applications by shifting traffic between two identical environments running different versions of the application. Blue/green deployments help you minimize downtime during application updates, mitigating risks surrounding downtime and rollback functionality. They enable you to launch a new version (green) of your application alongside the old version (blue), and to monitor and test the new version before you reroute traffic to it, rolling back if issues are detected.

Canary Deployments
In a canary deployment, traffic is shifted in two increments. A canary deployment is a blue/green strategy that is more risk-averse, in which a phased approach is used. This can be two-step or linear: new application code is deployed and exposed for trial, and upon acceptance is rolled out either to the rest of the environment or in a linear fashion.

Linear Deployments
In a linear deployment, traffic is shifted in equal increments, with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the number of minutes between increments.

All-at-Once Deployments
In an all-at-once deployment, all traffic is shifted from the original environment to the replacement environment at once.

Deployment Strategies Matrix
The following matrix lists the supported deployment strategies for Amazon Elastic Container Service (Amazon ECS), AWS Lambda, and Amazon EC2/on-premises.
• Amazon ECS is a fully managed container orchestration service
• AWS Lambda lets you run code without provisioning or managing servers
• Amazon EC2 enables you to run secure, resizable compute capacity in the cloud

Deployment strategy    Amazon ECS    AWS Lambda    Amazon EC2/On-Premises
In-Place               ✓             ✓             ✓
Blue/Green             ✓             ✓             ✓*
Canary                 ✓             ✓             ☓
Linear                 ✓             ✓             ☓
All-at-Once            ✓             ✓             ☓

*Note: Blue/green deployment with EC2/On-Premises works only with EC2 instances.

AWS Elastic Beanstalk Deployment Strategies
AWS Elastic Beanstalk supports the following types of deployment strategies:
• All-at-Once: Performs an in-place deployment on all instances
• Rolling: Splits the instances into batches and deploys to one batch at a time
• Rolling with Additional Batch: Splits the deployment into batches, but for the first batch creates new EC2 instances instead of deploying to the existing EC2 instances
• Immutable: Deploys to new instances instead of updating existing instances
• Traffic Splitting: Performs an immutable deployment and then forwards a percentage of traffic to the new instances for a predetermined duration; if the new instances stay healthy, all traffic is forwarded to them and the old instances are terminated

Infrastructure as Code

A fundamental principle of DevOps is to treat infrastructure the same way developers treat code. Application code has a defined format and syntax; if the code is not written according to the rules of the programming language, applications cannot be created. Code is stored in a version management or source control system that logs a history of code development, changes, and bug fixes. When code is compiled or built into applications, we expect a consistent application to be created, and the build to be repeatable and reliable.

Practicing infrastructure as code means applying the same rigor of application code development to infrastructure provisioning. All configurations should be defined in a declarative way and stored in a source control system such as AWS CodeCommit, the same as application code. Infrastructure provisioning, orchestration, and deployment should also support the use of infrastructure as code.

Infrastructure was traditionally provisioned using a combination of scripts and manual processes. Sometimes these scripts were stored in version control systems or documented step by step in text files or run books. Often the person writing the run books is not the same person executing the scripts or following the run books. If these scripts or run books are not updated frequently, they can become a showstopper in deployments. As a result, the creation of new environments is not always repeatable, reliable, or consistent.

In contrast, AWS provides a DevOps-focused way of creating and maintaining infrastructure. Similar to the way software developers write application code, AWS provides services that enable the creation, deployment, and maintenance of infrastructure in a programmatic, descriptive, and declarative way. These services provide rigor, clarity, and reliability. The AWS services discussed in this paper are core to a DevOps methodology and form the underpinnings of numerous higher-level AWS DevOps principles and practices. AWS offers the following services to define infrastructure as code:
• AWS CloudFormation
• AWS Cloud Development Kit (AWS CDK)
• AWS Cloud Development Kit for Kubernetes

AWS CloudFormation
AWS CloudFormation is a service that enables developers to create AWS resources in an orderly and predictable fashion. Resources are written in text files using JavaScript Object Notation (JSON) or YAML format. The templates require a specific syntax and structure that depends on the
types of resources being created and managed You author your resources in JSON or YAML with any code e ditor such as AWS Cloud9 check it into a version control system and then CloudFormation builds the specified services in safe repeatable manner A CloudFormation t emplate is deployed into the AWS environme nt as a stack You can manage stacks through the AWS Management Console AWS Command Line Interface or AWS CloudFormation APIs If you need to make changes to the running resources in a stack you update the stack Before making changes to your resources you can generate a change set which is a summary of your proposed changes Change sets enable you to see how your changes might impact your running resources especially for critical resources before implementing them Amazon Web Services Introduction to DevOps o n AWS 11 Figure 1 AWS CloudFormation cre ating an entire environment (stack) from one template You can use a single template to create and update an entire environment or separate templates to manage multiple layers within an environment This enables templates to be modularized and also provide s a layer of governance that is important to many organizations When you create or update a stack in the console events are displayed showing the status of the configuration If an error occurs by default the stack is rolled back to its previous state Amazon Simple Notification Service (Amazon SNS) provides notifications on events For example you can use Amazon SNS to track stack creation and deletion progress via email and integrate with other processes programmatically AWS CloudFormation makes it easy to organize and deploy a collection of AWS resources and lets you describe any dependencies or pass in special parameters when the stack is configured With CloudFormation templates you can work with a broad set of AWS service s such as Amazon S3 Auto Scaling Amazon CloudFront Amazon DynamoDB Amazon EC2 Amazon ElastiCache AWS Elastic Beanstalk Elastic Load Balancing IAM AWS OpsWorks and Amazon VPC For the most recent list of supported resources see AWS resource and property types reference Amazon Web Services Introduction to DevOps on AWS 12 AWS Cloud Development Kit The AWS Cloud Development Kit (AWS CDK) is an open source software development framework to model and provision your cloud application resources using familiar programming languages AWS CDK enables you to model application infrastructure using TypeScript Python Java and NET Developers can leve rage their existing Integrated Development Environment (IDE) leveraging tools like autocomplete and in line documentation to accelerate development of infrastructure AWS CDK utilizes AWS CloudFormation in the background to provision resources in a safe repeatable manner Constructs are the basic building blocks of CDK code A construct represents a cloud component and encapsulates everything AWS CloudFormation needs to create the component The AWS CDK includes the AWS Construct Library containing constructs representing many AWS services By combining constructs together you can quickly and easily create complex architectures for deployment in AWS AWS Cloud Development Kit for Kubernetes AWS Cloud Development Kit for Kubernetes (cdk8s) is an open source software development framework for defining Kubernetes applications u sing general purpose programming languages Once you have defined your application in a programming language ( As of date of publication only Python and TypeScript are supported) cdk8s will convert your application description in to 
preKubernetes YML This YML file can then be consumed by any Kubernetes cluster running anywhere Because the structure is defined in a programming language you can use the rich features provided by the programming language You can use the abstraction feature of the programming language to create your own boiler plate code and re use it across all of the deployments Automation Another core philosophy and practice of DevOps is automation Automation focuses on the setup configuration deployment and support of infrastructure a nd the applications that run on it By using automation you can set up environments more rapidly in a standardized and repeatable manner The removal of manual processes is a key to a successful DevOps strategy Historically server configuration and appl ication Amazon Web Services Introduction to DevOps on AWS 13 deployment have been predominantly a manual process Environments become nonstandard and reproducing an environment when issues arise is difficult The use of automation is critical to realizing the full benefits of the cloud Internally AWS relies heavily on automation to provide the core features of elasticity and scalability Manual processes are error prone unreliable and inadequate to support an agile business Frequently an organization may tie up highly skilled resources to provide manual configuration when t ime could be better spent supporting other more critical and higher value activities within the business Modern operating environments commonly rely on full automation to eliminate manual intervention or access to production environ ments This includes all software releasing machine configuration operating system patching troubleshooting or bug fixing Many levels of automation practices can be used together to provide a higher level end toend automated process Automation has the following key benefits: • Rapid changes • Improved productivity • Repeatable configurations • Reproducible environments • Leveraged elasticity • Leveraged auto scaling • Automated testing Automation is a cornerstone with AWS services and is internally supported in al l services features and offerings AWS OpsWorks AWS OpsWorks take the principles of DevOps even further than AWS Elastic Beanstalk It can be considered an application management service rather than simply an application container AWS OpsWorks provides even more levels of automation with additional features like i ntegration with configuration management software (Chef) and application lifecycle management You can use application lifecycle management to define when resources are set up configured deployed un deployed or terminated Amazon Web Services Introduction to DevOps on AWS 14 For added flexibility AWS Ops Works has you define your application in configurable stacks You can also select predefined application stacks Application stacks contain all the provisioning for AWS resources that your application requires including application servers web servers d atabases and load balancers Figure 2 AWS OpsWorks showing DevOps features and architecture Application stacks are organized into architectural layers so that stacks can be maintained independently Example layers could include web tier application t ier and database tier Out of the box AWS OpsWorks also simplifies setting up Auto Scaling groups and Elastic Load Balancing load balancers further illustrating the DevOps principle of automation Just like AWS Elastic Beanstalk AWS OpsWorks supports application versioning continuous deployment and infrastructure 
configuration management AWS OpsWorks also supports the DevOps practices of monitoring and logging (covered in the next section) Monitoring support is provided by Amazon CloudWatch All lif ecycle events are logged and a separate Chef log documents any Chef recipes that are run along with any exceptions AWS Elastic Beanstalk AWS Elastic Beanstalk is a service to rapidly deploy and sc ale web applications developed with Java NET PHP Nodejs Python Ruby Go and Docker on familiar servers such as Apache Nginx Passenger and IIS Amazon Web Services Introduction to DevOps on AWS 15 Elastic Beanstalk is an abstraction on top of A mazon EC2 Auto Scaling and simplifies the deployment by giving additional features such as cloning bluegreen deployments Elastic Beanstalk Command Line Interface (eb cli) and integration with AWS Toolkit for Visual Studio Visual Studio Code Eclipse and IntelliJ for increa se developer productivity Monitoring and Logging Communication and collaboration are fundamental in a DevOps philosophy To facilitate this feedback is critical In AWS feedback is provided by two core services: Amazon CloudWatch and AWS CloudTrail Tog ether they provide a robust monitoring alerting and auditing infrastructure so developers and operations teams can work together closely and transparently AWS provides the following services for monitoring and logging: Amazon CloudWatch Metrics Amazon CloudWatch metrics automatically collect data from AWS services such as Amazon EC2 instances Amazon EBS volumes and Amazon RDS DB instances These metrics can then be organized as dashboards and alarms or events can be created to trigger events or perform Auto Scaling actions Amazon CloudWatch Alarms You can setup alarms based on the metrics collected by Amazon CloudWatch Metrics The alarm can then send a notification to Amazon Simple Notification Service ( Amazon SNS) topic or initiate Auto Scaling actions An alarm requires period (length of the time to evaluate a metric) Evaluation Period (number of the most recent data points) and Datapoints to Alarm (number of data points within the Evaluation Period) Amazon CloudWatch Logs Amazon CloudWatch Logs is a log aggregation and monitoring service AWS CodeBuild CodeCommit CodeDeploy and CodePipeline provide integrations with CloudWatch logs so that all of the logs can be centrally monitored In addition the previously mentioned services various other AWS services provide direct integration with CloudWatch With CloudWatch Logs you can: Amazon Web Services Introduction to DevOps on AWS 16 • Query Your Log Data • Monitor Logs from Amazon EC2 Instan ces • Monitor AWS CloudTrail Logged Events • Define Log Retention Policy Amazon CloudWatch Logs Insights Amazon CloudWatch Logs Insights scans your logs and enables you to perform interactive queries and visualizations It understands various log formats and a uto discovers fields from JSON Logs Amazon CloudWatch Events Amazon CloudWatch Events delivers a near real time stream of system events that describe changes in AWS resources Using simple rules that you can quickly set up; you can match events and rout e them to one or more target functions or streams CloudWatch Events becomes aware of operational changes as they occur CloudWatch Events responds to these operational changes and takes corrective action as necessary by sending messages to respond to the environment activating functions making changes and capturing state information You can configure rules in Amazon CloudWatch Events to alert you to changes in AWS services 
The following are the AWS DevOps-related services that have integration with CloudWatch Events:
• Application Auto Scaling events
• CodeBuild events
• CodeCommit events
• CodeDeploy events
• CodePipeline events

Amazon EventBridge

Amazon CloudWatch Events and EventBridge are the same underlying service and API; however, EventBridge provides more features. Amazon EventBridge is a serverless event bus that enables integrations between AWS services, software as a service (SaaS) applications, and your own applications. In addition to building event-driven applications, EventBridge can be used to send notifications about events from services such as CodeBuild, CodeDeploy, CodePipeline, and CodeCommit.

AWS CloudTrail

In order to embrace the DevOps principles of collaboration, communication, and transparency, it's important to understand who is making modifications to your infrastructure. In AWS, this transparency is provided by the AWS CloudTrail service. All AWS interactions are handled through AWS API calls that are monitored and logged by AWS CloudTrail. All generated log files are stored in an Amazon S3 bucket that you define, and log files are encrypted using Amazon S3 server-side encryption (SSE). All API calls are logged, whether they come directly from a user or are made on behalf of a user by an AWS service. Numerous groups can benefit from CloudTrail logs, including operations teams for support, security teams for governance, and finance teams for billing.

Communication and Collaboration

Whether you are adopting a DevOps culture in your organization or going through a DevOps cultural transformation, communication and collaboration are an important part of your approach. At Amazon, we realized that we needed to bring about a change in the mindset of our teams, and so we adopted the concept of two-pizza teams.

Two-Pizza Teams

"We try to create teams that are no larger than can be fed by two pizzas," said Bezos. "We call that the two-pizza team rule." The smaller the team, the better the collaboration. Collaboration is also very important because software releases are moving faster than ever, and a team's ability to deliver software can be a differentiating factor for your organization against your competition. Imagine a situation in which a new product feature needs to be released or a bug needs to be fixed: you want this to happen as quickly as possible so you can have a shorter go-to-market time. This also matters because you don't want the transformation to be a slow-moving process; you want an agile approach where waves of changes start to make an impact.

Communication between teams is also important as we move toward the shared responsibility model and start moving away from the siloed development approach. This brings the concept of ownership into the team and shifts their perspective to look at the product end to end. Your team should not think of your production environments as black boxes in which they have no visibility. Cultural transformation is also important: you may build a common DevOps team, or you may have one or more DevOps-focused members in your team. Both of these approaches introduce shared responsibility into the team.
Security

Whether you are going through a DevOps transformation or implementing DevOps principles for the first time, you should treat security as an integrated part of your DevOps processes. It should be a cross-cutting concern across your build, test, and deployment stages. Before we talk about security in DevOps on AWS, let's look at the AWS Shared Responsibility Model.

AWS Shared Responsibility Model

Security is a shared responsibility between AWS and the customer. The different parts of the Shared Responsibility Model are explained below:
• AWS responsibility, "Security of the Cloud" – AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
• Customer responsibility, "Security in the Cloud" – Customer responsibility is determined by the AWS Cloud services that a customer selects. This determines the amount of configuration work the customer must perform as part of their security responsibilities.

This shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. This is critical in cases where customers want to understand the security of their build environments.

Figure 3: AWS Shared Responsibility Model

For DevOps, we want to assign permissions based on the least privilege permissions model. This model states that "a user (or service) should be granted the minimal amount of permissions required to get the job done." Permissions are maintained in IAM. IAM is a web service that helps you securely control access to AWS resources. You can use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.

Identity and Access Management

AWS Identity and Access Management (IAM) defines the controls and policies that are used to manage access to AWS resources. Using IAM, you can create users and groups and define permissions to various DevOps services. In addition to users, various services may also need access to AWS resources; for example, your CodeBuild project may need to store Docker images in Amazon Elastic Container Registry (Amazon ECR) and will need permissions to write to ECR. These types of permissions are defined by a special type of role known as a service role.

IAM is one component of the AWS security infrastructure. With IAM, you can centrally manage groups, users, service roles, and security credentials such as passwords, access keys, and permissions policies that control which AWS services and resources users can access. An IAM policy lets you define a set of permissions. The policy can then be attached to a role, user, or service to define its permissions. You can also use IAM to create roles that are used widely within your desired DevOps strategy. In some cases it can make perfect sense to programmatically assume a role instead of directly granting the permissions. When a service or user assumes a role, they are given temporary credentials to access a service that they normally don't have access to.
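To make the least-privilege model concrete, the following Boto3 sketch attaches an inline policy to a hypothetical CodeBuild service role so it can push images to a single Amazon ECR repository and nothing more; the role name, account ID, and repository are placeholders.

import json
import boto3

iam = boto3.client("iam")

# Least-privilege inline policy: push access to one repository only.
# ecr:GetAuthorizationToken does not support resource-level scoping, so it uses "*".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ecr:GetAuthorizationToken",
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload",
                "ecr:PutImage",
            ],
            "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/my-app",
        },
    ],
}

iam.put_role_policy(
    RoleName="codebuild-my-app-service-role",
    PolicyName="push-to-ecr",
    PolicyDocument=json.dumps(policy),
)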
Conclusion

In order to make the journey to the cloud smooth, efficient, and effective, technology companies should embrace DevOps principles and practices. These principles are embedded in the AWS platform and form the cornerstone of numerous AWS services, especially those in the deployment and monitoring offerings. Begin by defining your infrastructure as code using AWS CloudFormation or the AWS Cloud Development Kit (CDK). Next, define the way in which your applications will use continuous deployment with the help of services like AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline, and AWS CodeCommit. At the application level, use containers like AWS Elastic Beanstalk, Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS), and AWS OpsWorks to simplify the configuration of common architectures. Using these services also makes it easy to include other important services like Auto Scaling and Elastic Load Balancing. Finally, use the DevOps strategy of monitoring, with services such as Amazon CloudWatch, and solid security practices, such as AWS IAM. With AWS as your partner, your DevOps principles will bring agility to your business and IT organization and accelerate your journey to the cloud.

Contributors

Contributors to this document include:
• Muhammad Mansoor, Solutions Architect
• Ajit Zadgaonkar, World Wide Tech Leader, Modernization
• Juan Lamadrid, Solutions Architect
• Darren Ball, Solutions Architect
• Rajeswari Malladi, Solutions Architect
• Pallavi Nargund, Solutions Architect
• Bert Zahniser, Solutions Architect
• Abdullahi Olaoye, Cloud Solutions Architect
• Mohamed Kiswani, Software Development Manager
• Tara McCann, Manager, Solutions Architect

Document Revisions

October 2020 – Updated sections to include new services
December 2014 – First publication
|
General
|
consultant
|
Best Practices
|
Introduction_to_Scalable_Gaming_Patterns_on_AWS
|
Introduction to Scal able Game Develo pment Patterns on AWS Second Edition Published December 2019 Updated March 11 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor d oes it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Getting started 1 Game design decisions 1 Game client considerations 3 Launchin g an initial game backend 4 High availability scalability and security 8 Binary game data with Amazon S3 9 Expanding beyond AWS Elastic Beanstalk 10 Reference architecture 11 Games as REST APIs 13 HTTP load balancing 14 HTTP automatic scaling 18 Game servers 20 Matchmaking 21 Routing messages with Amazon SNS 22 Last thoughts on game servers 23 Relational vs NoSQL databases 24 MySQL 24 Amazon Aurora 27 Redis 28 MongoDB 28 Amazon DynamoDB 29 Other NoSQL options 32 Caching 32 Binary game content with Amazon S3 35 Content delivery and Amazon CloudFront 36 Uploading content to Amazon S3 37 Amazon S3 performance considerations 42 Loosely coupled architectures with asynchronous jobs 44 Leaderboards and avatars 44 Amazon SQS 45 Other queue options 47 Cost of the cloud 47 Conclusion and n ext steps 48 Contributors 49 Further reading 49 Document revisions 50 Introduction Whether you’re an up andcoming mobile developer or an established AAA game studio you understand the challenges involved with launching a successful game in the current games landscape Not only must the game be compelling but users also expect a wide range of online features such as friend lists leaderboards weekly challenges various multiplayer modes and ongoing content releases To successfully execute a game lau nch it’s critical to get favorable app store ratings and reviews on popular e retail channels to provide sales and awareness momentum for your game —like the first weekend of a movie release To deliver these features you need a server backend The server backend can consist of both the actual game servers for multiplayer games or servers that power the game services such as chat matchmaking and so on The server backend must be able to scale up at a moment’s notice in the event that the game goes viral and suddenly explodes from 100 to 100000 users At the same time the backend must be cost effective so that you don’t overpay for unused server capacity Amazon Web Services (AWS) is a flexible cost effective easy touse cloud service By running you r game on AWS you can leverage capacity on demand to scale up and down with your users rather than having to guess at your server demands and potentially over purchase or under purchase hardware Many indie mobile and AAA developers have recognized the advantages of AWS and are having success running their games on the AWS Cloud This book is broken into sections covering the different features of modern games such as friend lists leaderboards game servers messaging and user generated content You can start small and just use 
the AWS components and services you need. As your game evolves and grows, you can revisit this book and evaluate additional AWS features.

Getting started

If you are just getting started developing your game, it can be challenging to figure out where to begin with your backend server development. Thankfully, AWS can help you get started quickly because you don't have to make a decision about every service that you're going to use up front. As you iterate on your game, you can add AWS services over time. This approach enables you to develop additional game features or backend functionality without having to plan for everything at the beginning. We encourage you to start based on the game features that you need, and then add more AWS features as your game evolves. In this section we'll look at some common game features that determine which types of services you'll need.

Game design decisions

Modern social, mobile, and AAA games tend to share the following common tenets that affect server architecture:
• Pick up and play anywhere – Players expect their saved games, profiles, and other data to be stored online so that they can easily move from device to device. This typically involves synchronizing and merging local data as the player moves from one device to another, so a simple data storage solution is not always the right solution.
• Leaderboards and rankings – Players continue to look for a competitive experience similar to classic arcade games. Increasingly, though, the focus is on friends' leaderboards rather than just a single global high-score list. This requires a more sophisticated leaderboard that can sort in multiple dimensions while maintaining good performance.
• Free-to-play – One of the biggest shifts over the past few years has been the widespread move to free-to-play. In this model, games are free to download and play, and the game earns money through in-app purchases for items such as weapons, outfits, power-ups, and boost points, as well as advertising. The game is funded by a small minority of users that purchase these items, with the vast majority of users playing for free. This means that your game backend must be as cost effective as possible and must be able to scale up and down as needed. Even for premier AAA games, larger percentages of revenue are now coming from content updates and in-game purchases.
• Analytics – Maximizing long-tail revenue requires that games collect and analyze a large number of metrics regarding gameplay patterns, favorite items, purchase preferences, and so forth. Ensuring that new game features target those areas of the game where users are spending their time and money is a critical factor in the success of in-game purchases.
• Content updates – Games that achieve the highest player retention tend to have a continuous release cycle of new items, levels, challenges, and achievements. The continuing trend of games becoming more of a service than a single product reinforces the need for constant post-launch changes. These features require frequent updates with new data and game assets. By using a content delivery network (CDN) to distribute game content, you can cut costs and increase download speed.
• Asynchronous gameplay – Although larger games generally include a real-time online multiplayer mode, games of all kinds are realizing the importance of asynchronous features to keep players engaged. Examples of asynchronous play include competing against your friends based on points, unlocks, badges, or similar achievements. This type of gameplay gives players the feel of a connected game experience even if they aren't online all the time, or if they are using slower networks like 3G or 4G for mobile games.
• Push notifications – A common method of getting users to come back to the game is to send targeted push notifications to their mobile device. For example, a user might get a notification that their friend beat their score, or that a new challenge or level is available. This draws the user back into the core game experience even when they're not directly playing.
• Unpredictable clients – Modern games run on a wide variety of platforms, including mobile devices, consoles, PCs, and browsers. One user could be roaming on their portable device, playing against a console user on Wi-Fi, and both would expect a consistent experience. For this reason, it's necessary to leverage stateless protocols (for example, HTTP) and asynchronous calls as much as possible.

Each of these game features has an impact on your server features and technology. For example, if you have a simple Top 10 leaderboard, you may be able to store it in a single MySQL or Amazon Aurora database table. However, if you have complex leaderboards with multiple sort dimensions, it may be necessary to use a NoSQL option such as Amazon ElastiCache or Amazon DynamoDB (discussed later in this book).

Game client considerations

Although the focus of this book is on the architecture you can deploy on AWS, the implementation of your game client can also have an impact on your game's scalability. It also affects how much your game backend costs to run, because frequent network requests from the client use more bandwidth and require more server resources. Here are a few important guidelines to follow:
• All network calls should be asynchronous and non-blocking. This means that when a network request is initiated, the game client continues on without waiting for a response from the server. When the server responds, this triggers an event on the client, which is handled by a callback of some kind in the client code. On iOS, AFNetworking is one popular approach. Browser games should use a call such as jQuery.ajax() or the equivalent, and C++ clients should consider libcurl, std::async, or similar libraries. Similarly, popular game engines usually include an asynchronous method for network and web requests; for example, Unity offers UnityWebRequest and Unreal Engine has HttpRequest.
• Use JSON to transport data. It's compact, cross-platform, fast to parse, has lots of library support, and contains data type information. If you have large payloads, simply gzip them, because the majority of web servers and mobile clients have native support for gzip. Don't waste time over-optimizing; any payload in the range of hundreds of kilobytes should be adequate. We have also seen developers use Apache Avro and MessagePack, depending on their use case, comfort level with the formats, and availability of libraries. Note: An exception to this rule is multiplayer gameplay packets, which are typically UDP.
• Use HTTP/1.1 with keep-alives, and reuse HTTP connections between requests. This minimizes the overhead your game incurs when making network requests. Each time you have to open a new HTTP socket, this requires a three-way TCP handshake, which can add upwards of 50 milliseconds (ms). In addition, repeatedly opening and closing TCP connections will accumulate large
numbers of sockets in the TIME_WAIT state on your server which consumes valuable server resources • Always POST any importan t data from the client to the server over SSL This includes login stats save data unlocks and purchases The same applies for any GET PUT and DELETE requests because modern computers are efficient at handling SSL and the overhead is low AWS enables you to have our Elastic Load Balancer handle the SSL workload which completely offloads it from your servers Amazon Web Services Introduction to Scal able Game Development Patterns on AWS 4 • Never store security critical data such as AWS access keys or other tokens on the client device either as part of your game data or user data Access key IDs and secret access keys allow the possessors of those keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI ) AWS Tools for Windows PowerShell the AWS SDKs or direct HTTP calls using the APIs for individual AWS services If somebody roots or jailbreaks their device you risk the possibility that they could gain access to your server code user data and even your AWS billing account In the case of PC games your keys likely exist in memory when the game client is running and pulling them out isn’t that hard for someone with the know how You have to assume anything you store on a game client will be compromi sed If you want your game client to directly access AWS services consider using Amazon Cognito Federated Identities which allows your application to obtain te mporary limited privilege credentials • As a precaution you should never trust what a game client sends you It’s an untrusted source and you should always validate what you receive Sometimes it’s malicious traffic (SQL Injection XSS etc) but sometim es it can be something as trivial as someone having their device clock set to a time that’s in the past Many of these concerns are not specific to AWS and are typical client/server safety issues but keeping them in mind will help you design a game that performs well and is reasonably secure Launching an initial game backend With the previous game features and client considerations in mind let’s look at a strategy for getting an initial game backend up and running on AWS as quickly as possible We’ll ma ke use of a few key AWS services with the ability to add more as the game evolves To ensure we’re able to scale out as our game grows in popularity we’ll leverage stateless protocols as much as possible Creating an HTTP/JSON API for the bulk of our gam e features allows us to add instances dynamically and easily recover from transient network issues Our game backend consists of a server that talks HTTP/JSON stores data in MySQL and uses Amazon Simple Storage Service (Amazon S3) for binary content Thi s type of backend is easy to develop and can scale effectively Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 5 A common pattern for game developers is to run a web server locally on a laptop or desktop for development and then push the server code to the cloud when it’s time to deploy If you follow this pattern AWS Elastic Beanstalk can greatly simplify the process of deploying your code to AWS Figure 1: A high level overview of your first game backend running on AWS Elastic Beanstalk is a deployment management service that sits on top of other AWS services such as Amazon Elastic Compute Cloud (Amazon EC2) Elastic Load Balancing and Amazon Relational Database Services (Amazon RDS) Amazon EC2 is a web service that provides secure 
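As a minimal sketch of what such an HTTP/JSON server could look like, the following example uses Flask (one of the Python packages listed later in Table 3); the routes and response fields are illustrative only, and a real handler would read from your MySQL or Amazon Aurora database.

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Simple target for load balancer health checks (see the HTTP Load Balancing section)
    return jsonify(status="ok")

@app.route("/players/<player_id>/profile")
def player_profile(player_id):
    # Illustrative static response; a real handler would query the database
    return jsonify(playerId=player_id, level=12, trophies=["first_win", "sharpshooter"])

if __name__ == "__main__":
    # For local development only; in production run behind a WSGI server such as gunicorn
    app.run(host="0.0.0.0", port=8080)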
resizable compute capacity in the cloud It is designed to make at scale cloud computing easier for developers The Amazon EC2 simple web service interface allows you to obtain and configure computing capacity with minimal fri ction It reduces the time required to obtain and boot new server instances to minutes which allows you to quickly scale capacity (up or down) as your computing requirements change Elastic Load Balancing automatically distributes incoming application tra ffic across multiple Amazon EC2 instances It enables you to achieve fault tolerance in your applications Elastic Load Balancing offers three types of load balancers that feature high availability automatic scaling and robust security These are the App lication Load Balancer that routes traffic based on advanced application level information that Amazon Web Services Introduction to Scalable Game Development Patter ns on AWS 6 includes the content of the request and is most suited to HTTP and HTTPS traffic the Network Load Balancer that is best suited for TCP UPD and TLS traffic an d the Classic Load Balancer that works with the EC2 classic network The Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances The Application Load Balancer is ideal for applications that need advanced routing capabilities microservices and container based architectures The Network Load Balancer would be ideal for routing messages to persistent game servers chat services and other stateful servers Amazon RDS makes it easy to set up operate and scale a rela tional database in the cloud It provides cost efficient and resizable capacity while automating time consuming administration tasks such as hardware provisioning database setup patching and backups Amazon RDS supports many familiar database engines i ncluding Amazon Aurora PostgreSQL MySQL and more You can push a zip war or git repository of server code to Elastic Beanstalk Elastic Beanstalk takes care of launching EC2 server instances attaching a load balancer setting up Amazon CloudWatch monitoring alerts and deploying your application to the cloud In short Elastic Beanstalk can set up most of the architecture shown in Figure 1 automatically To see Elastic Beanstalk in action log in to the AWS Management Console and follow the Getting Started Using Elastic Beanstalk tutorial to create a new environment with the programming language of your choice This will launch the sample application and boot a default configuration You can use this environment t o get a feel for the Elastic Beanstalk control panel how to update code and how to modify environment settings If you’re new to AWS you can use the AWS Free Tier to set up these sample environments Note: The sample production environment described in this book will incur costs because it includes AWS resources that aren’t covered under the free tier With the sample application up let’s create a new Elastic Beanstalk application for our game and two new environments one for development and one for production We’ll customize these a bit for our game Use the following table to determine which settings to change depending on the environment type For detailed instructions see Managing and Configuring AWS Elastic Beanstalk Applications and then follow the instructions for Creating an AWS Elastic Beanstalk Environment in the AWS Elastic Beanstalk Developer Guide Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 7 Note: In the following table r eplace My Game and mygame values with 
Note: In the following table, replace the My Game and mygame values with the name of your game.

Table 1: Configuration settings for gaming environments
• Application Name – Development: My Game; Production: My Game
• Environment Name – Development: mygame-dev; Production: mygame-prod
• Instance Type – Development: t2.micro; Production: m5.large
• Create RDS DB instance? – Development: Yes; Production: Yes
• DB Engine – Development: MySQL; Production: not recommended (see the Important note below)
• Instance Class – Development: db.t2.micro; Production: N/A
• Allocated Storage – Development: 5 GB; Production: N/A

By using two environments, you can enable a simple and effective workflow. As you integrate new game backend features, you push your updated code to the development environment. This triggers Elastic Beanstalk to restart the environment and create a new version. In your game client code, create two configurations: one that points to development and one that points to production. Use the development configuration to test your game, and then use the production profile when you want to create a new game version to publish to the appropriate app stores. When your new game client is ready for release, choose the correct server code version from the development environment and deploy it to the production environment.

By default, deployments incur a brief period of downtime while your app is being updated and restarted. To avoid downtime for production deployments, you can follow a pattern known as swapping URLs, or blue/green deployment. In this pattern, you deploy to a standby production environment and then update DNS to point to the new environment. For more details on this approach, see Blue/Green Deployments with AWS Elastic Beanstalk in the AWS Elastic Beanstalk Developer Guide.

Important: We don't recommend that you use Elastic Beanstalk to manage your database in a production environment, because this ties the lifecycle of the database instance (DB instance) to the lifecycle of your application's environment. Instead, we recommend that you run a DB instance in Amazon Aurora and configure your application to connect to it on launch. You can also store connection information in Amazon S3 and configure Elastic Beanstalk to retrieve that information during deployment with ebextensions. You can add AWS Elastic Beanstalk configuration files (.ebextensions) to your web application's source code to configure your environment and customize the AWS resources that it contains. Configuration files are YAML-formatted documents with a .config file extension that you place in a folder named .ebextensions and deploy in your application source bundle. For more information, see Advanced Environment Customization with Configuration Files (.ebextensions) in the AWS Elastic Beanstalk Developer Guide.

High availability, scalability, and security

For the production environment, you need to ensure that your game backend is deployed in a fault-tolerant manner. Amazon EC2 is hosted in multiple AWS Regions worldwide. You should choose a Region that is near the bulk of your game's customers. This ensures that your users have a low-latency experience with your game. For more information and a list of the latest AWS Regions, see the AWS Global Infrastructure webpage.

Within each Region are multiple isolated locations known as Availability Zones, which you can think of as logical data centers. Each of the Availability Zones within a given Region is isolated physically, yet connected via high-speed networking so they can be used together. Balancing your servers across two or more Availability Zones within a Region is a simple way to increase your game's high availability. Using two Availability Zones is a good balance of reliability and cost for most games, since you can pair your server instances, database instances, and cache instances together.
Elastic Beanstalk can automatically deploy across multiple Availability Zones for you. To use multiple Availability Zones with Elastic Beanstalk, see Auto Scaling Group for Your Elastic Beanstalk Environment in the AWS Elastic Beanstalk Developer Guide.

For additional scalability, you can use automatic scaling to add and remove instances from these Availability Zones. For best results, consider modifying the automatic scaling trigger to specify a metric (such as CPU usage) and a threshold based on your application's performance profile. If the threshold you specify is hit, Elastic Beanstalk automatically launches additional instances. This is covered in more detail in the HTTP Automatic Scaling section of this book.

For development and test environments, a single Availability Zone is usually adequate so you can keep costs low, assuming you can tolerate a bit of downtime in the event of a failure. However, if your development environment is actually used by QA testers to validate builds late at night, you probably want to treat it more like a production environment. In that case, leverage multiple Availability Zones like you would in production.

Finally, set up the load balancer to handle SSL termination so that SSL encryption and decryption is offloaded from your game backend servers. This is covered in Configuring HTTPS for Your Elastic Beanstalk Environment in the AWS Elastic Beanstalk Developer Guide. For security reasons, we strongly recommend that you use SSL for your game backend. For more Elastic Load Balancing tips, see the HTTP Load Balancing section of this book.

Binary game data with Amazon S3

Next, you'll need to create an S3 bucket for each Elastic Beanstalk server environment that you created previously. This S3 bucket stores your binary game content, such as patches, levels, and assets. Amazon S3 uses an HTTP-based API for uploading and downloading data, which means that your game client can use the same HTTP library for talking to your game servers that's used to download game assets. With Amazon S3, you pay for the amount of data you store and the bandwidth for clients to download it. For more information, see Amazon S3 Pricing.

To get started, create an S3 bucket in the same Region as your servers. For example, if you deployed Elastic Beanstalk to the us-west-2 (Oregon) Region, choose this same Region for Amazon S3. For simplicity, and because S3 requires bucket names to be unique across all of S3, use a naming convention for the bucket similar to the one you used for your Elastic Beanstalk environment (for example, mygame-dev or mygame-prod), along with other unique identification, like com.mycompany.mygame-dev. For step-by-step directions, see Create a Bucket in the Amazon Simple Storage Service Getting Started Guide. Remember to create a separate S3 bucket for each of your Elastic Beanstalk environments (that is, development, production, etc.).

By default, S3 buckets are private and require that users authenticate to download content, for security. For game content, you have two options. You could make the bucket public, which means that anyone with the bucket name can download your game content, but this is not recommended. A better way to manage authentication is to use signed URLs, a feature that enables you to pass Amazon S3 credentials as part of the URL. In this scheme, your game server code redirects users to an Amazon S3 signed URL, which you can set to expire after a period of time.
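For example, with the AWS SDK for Python, the signed URL described above can be generated in a few lines; the bucket and key below are placeholders you would replace with your own.

import boto3

s3 = boto3.client("s3")

# Placeholder bucket/key; in practice these come from your asset catalog
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "com.mycompany.mygame-prod", "Key": "patches/1.2.3/patch.zip"},
    ExpiresIn=900,  # the URL stops working after 15 minutes
)

# Return this URL to the game client (for example, as an HTTP 302 redirect)
print(url)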
For instructions on how to create a signed URL, see Authenticating Requests (AWS Signature Version 4) in the Amazon S3 API Reference. If you are using one of the official AWS SDKs with your game server, there is also a good chance that the SDK has built-in methods for generating a pre-signed URL. A pre-signed URL gives you access to the object identified in the URL, provided that the creator of the pre-signed URL has permissions to access that object. Generating a pre-signed URL is a completely offline operation (no API calls are involved), making it a very fast operation. Finally, as your game grows, you can use Amazon CloudFront, a content delivery network (CDN), to provide better performance and save you money on data transfer costs. For more information, see What is Amazon CloudFront in the Amazon CloudFront Developer Guide.

Expanding beyond AWS Elastic Beanstalk

As your game increases in popularity, your core game backend must scale and respond to demand over a period of time. By using HTTP for the bulk of your calls, you are able to easily scale up and down in response to changing usage patterns. Storing binary data in Amazon S3 saves you money compared to serving files from Amazon EC2, and Amazon S3 also takes care of data availability and durability for you. Amazon RDS provides you with a managed MySQL database that you can grow over time with Amazon RDS features such as read replicas.

If your game needs additional functionality, you can easily expand beyond Elastic Beanstalk to other AWS services without having to start over. Elastic Beanstalk supports configuring other AWS services via the Elastic Beanstalk Environment Resources. For example, you can add a caching tier using Amazon ElastiCache, which is a managed cache service that supports both Memcached and Redis. For details about adding an ElastiCache cluster, see the Example: ElastiCache in the AWS Elastic Beanstalk Developer Guide. Of course, you can always just launch other AWS services yourself and then configure your app to use them. For example, you could choose to augment or even replace your RDS MySQL DB instance with Amazon Aurora Serverless, an on-demand, automatic scaling SQL database, or Amazon DynamoDB, the AWS managed NoSQL offering. Even though we're using Elastic Beanstalk to get started, you still have access to all other AWS services as your game grows.

Reference architecture

With our core game backend up and running, the next step is to examine the other AWS services that could be useful for our game. Before continuing, let's look at the following reference architecture for a horizontally scalable game backend. This diagram depicts a game backend that supports a wide set of game features, including login, leaderboards, challenges, chat, binary game data, user-generated content, analytics, and online multiplayer. Not all games have all these components, but this diagram provides a good visualization of how they would all fit together. In the remaining sections of this book, we'll cover each component in detail.

Figure 2: A fully production-ready game backend running on AWS

Figure 2 may seem overwhelming at first, but it's really just an evolution of the initial game backend we launched using Elastic Beanstalk. The following table explains numbered areas
of the diagram Table 2: Reference architecture callouts Callout Description 1 The diagram shows two Availability Zones set up with identical functionality for redundancy Not all components are shown in both Availability Zones due to space constraints but both Availability Zones function equivalently These Availability Zones can be the same as the two Availability Zones you initially chose using Elastic Beanstalk 2 The HTTP/JSON servers and master/slave DBs can be the same ones you launched using Elastic Beanstalk You continue to build out as much of your game functionality in the HTTP/JSON layer as possible You can use HTTP automatic scaling to add and remove EC2 HTTP instances automatically in response to user demand For more information see the HTTP Automatic Scaling section of this book 3 You can use the same S3 bucket that you initially created for binary data Amazon S3 is built to be highly scalable and needs little tuning over time As your game assets and user traffic continue s to expand you can add Amaz on CloudFront in fro nt of S3 to boost download performance and save costs 4 If your game has features requiring stateful sockets such as chat or multiplayer gameplay these features are typically handled by game servers running code just for those features These servers run on EC2 instances separate from your HTTP instances For more information see the Game Servers section of this book 5 As your game grows and your database load increases the next step is to add caching typically by using Amazon ElastiCache which is the AWS managed caching service Caching frequently accessed items in ElastiCache offloads read queries from your database This is covered in the Caching section of this book 6 The next step is to look at moving some of your server tasks to asynchronous jobs and using Amazon Simple Queue Service (Amazon SQS) to coordinate this work This allows for a loosely coupled architecture where two or more components exist and each has little or no knowledge of other participating components but they interoperate to achieve a specific purpose Amazon SQS eliminates dependencies on the other components in a loosely coupled system For example if your game allows users to upload and share assets such as photos or custom characters you should execute time intensive tasks such as image resizing in a background job This result s in quicker response times for your game while also decreasing the load on your HTTP server instances These strategies are discussed in the Loosely Coupled Architectures with Asynchronous Jobs section of this book Amazon Web Services Introduction to Sc alable Game Development Patterns on AWS 13 Callout Description 7 As your database load continues to grow you can add Amazo n RDS read replicas to help you scale out your database reads even further This also helps reduce the load on your main database because you can read from the replica and you only access the master database to write This is covered in the Relational vs NoSQL Databases section of this book 8 (Not Shown) At some point you may decide to introduce a NoSQL service such as Amazon DynamoDB to supplement your main database for functionality such as leaderboards or to take advantage of NoSQL features such as atomic counters We discuss these options in the Relational vs NoSQL Databases section 9 If your game includes push notifications you can use Amazon Simple Notification Service (Amazon SNS) and its support for Mobile Push to simplify the process of sending push messages across multiple mobile platforms 
Your EC2 instances can also receive Amazon SNS messages wh ich enables you to do things like broadcast messages to all players currently connected to your game servers If you look at a single Availability Zone in Figure 2 and compare it to the core game backend we launched with Elastic Beanstalk you can see how scaling your game builds on the initial backend pieces by adding caching database replicas and background jobs With this in mind let’s look at each component Games as REST APIs As mentioned earlier to make use of horizontal scalability you should i mplement most of your game’s features using an HTTP/JSON API which typically follows the REST architectural pattern Game clients whether on mobile devices tablets PCs or consoles make HTTP requests to your servers for data such as login sessions f riends leaderboards and trophies Clients do not maintain long lived connections to the server which makes it easy to scale horizontally by adding HTTP server instances Clients can recover from network issues by simply retrying the HTTP request When p roperly designed a REST API can scale to hundreds of thousands of concurrent players This is the pattern we followed in the previous Elastic Beanstalk example RESTful servers are straightforward to deploy on AWS and they benefit from the wide variety o f HTTP development debugging and analysis tools that are available on AWS Nevertheless some modes of gameplay benefit from a stateful two way socket that can receive server initiated messages Examples include real time online multiplayer chat or gam e invites If your game doesn’t have these features you can implement all of Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 14 your functionality using a REST API We’ll discuss stateful servers later in this book but first let’s focus on our REST layer Deploying a REST layer to Amazon EC2 typically co nsists of an HTTP server such as Nginx or Apache plus a language specific application server The following table lists some of the popular packages that game developers use to build REST APIs Table 3: Packages to build REST APIs Language Package Nodejs Express Restify Sails Python Eve Flask Bottle Java Spring Jersey Go Gorilla Mux Gin PHP Slim Silex Ruby Rails Sinatra Grape This is just a sampling –you can build a REST API in any web friendly programming language Since Amazon EC2 gives you complete root access to the instance you can deploy any of these packages For Elastic Beanstalk there are some restrictions on supporte d packages For details see the Elastic Beanstalk FAQs RESTful servers benefit from medium sized instances since this enables more to be deployed horizontally at the same price point Medium sized instances from the general purpose instance family (for example M5) or compute optimized instance family (for example C5) are a good match for REST servers HTTP load balancing Load balancing RESTful servers is very straightforward because HTTP con nections are stateless AWS offers Elastic Load Balancing which is the easiest approach to HTTP load balancing for games on Amazon EC2 You may recall from our example game backend that Elastic Beanstalk automatically deploys an Elastic Load Balancing load balancer to load balance your EC2 instances for you I f you use Elastic Beanstalk to get started you will already have an Elastic Load Balancing load balancer running Amazon Web Services Introduction to Scalable Game Development Patt erns on AWS 15 Follow these guidelines to get the most out of Elastic Load Balancing : • 
Always configure Elastic Load Balancing to balance between at least tw o Availability Zones for redundancy and fault tolerance Elastic Load Balancing handles balancing traffic between the EC2 instances in the Availability Zones that you specify If you want an equal distribution of traffic on servers you should also enable cross zone load balancing even if there are an unequal number of servers per Availability Zone This ensures optimal usage of servers in your fleet • Configure Elastic Load Balancing to handle SSL encryption and decryption This offloads SSL from your HTTP servers which means that there is more CPU for your application code For more information see Create an HTTPS Load Balancer in the Classic Load Balancer Guide To test SSL for development purposes see How to Create a Self Signed SSL Certificate in the AWS Certificate Manager User Guide • Elastic Load Bala ncing automatically removes any EC2 instances that fail from its load balancing pool To ensure that the health of your HTTP EC2 instances is accurately monitored configure your load balancer with a custom health check URL Then write server code that re sponds to that URL and performs a check on your application’s health For example you could set up a simple health check that verifies that you have DB connectivity The health check return s 200 Ok if your health checks pass or 500 Server Error if your in stance is unhealthy • Each Elastic Load Balancing load balancer that you deploy must have a unique DNS name To set up a custom DNS name for your game you can use a DNS alias (CNAME) to point your game’s domain name to the load balancer For detailed instr uctions see Configure a Custom Domain Name for Your Classic Load Balancer in the Elastic Load Balancing Guide Note that when your load balanc er scales up or down the IP addresses that the load balancer uses change —make sure you are using a DNS CNAME alias to the load balancer and that you’re not referencing the load balancer’s current IP addresses in your DNS domain Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 16 • Elastic Load Balancing is designed to scale up by roughly a factor of 50 percent every 5 minutes For the vast majority of games this works well even when they suddenly go viral However if you are anticipating a sudden huge spike in traffic —perhaps due to a new downloadable con tent release or marketing promotion —Elastic Load Balancing can be pre warmed to scale up in advance for this event To pre warm Elastic Load Balancing submit an AWS support request with the anticipated load (this requires at least Business Level Support ) For more details on Elastic Load Balancing prewarming and best practices for running load tests against Elastic Load Balancing see the AWS article Best Practices in Evaluating Elastic Load Balancing Application Load Balancer Application Load Balancer is the second generation load balancer that provides more granular control over traffic routing based at the HTTP/HTTPS layer In addition to the features described in the previous section the following features that come with Application Load Balancer can be highly beneficial to a gaming centric workload: • Explicit support for Amazon EC2 Container Service (Amazon ECS) – Application Load Balancer can be configured to load balance containers across multiple ports on a single EC2 instance Dynamic ports can be specified in an ECS task definition which will gi ve the container an unused port when scheduled on EC2 instances • HTTP/2 support – A revised edition of 
the older HTTP/11 protocol HTTP/2 and Application Load Balancer together deliver additional network performance as a binary protocol as opposed to a t extual one Binary protocols are inherently more efficient to process and are much less error prone which can improve stability Additionally HTTP/2 supports multiplexing which enables the reuse of TCP connections for downloading content from multiple o rigins and cuts down on network overhead • Native IPv6 support – With the near exhaustion of IPv4 addresses many application providers are changing to a model where applications without IPv6 support are rejected on their services Application Load Balancer natively supports IPv6 endpoints and routing to VPC IPv6 addresses Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 17 • WebSockets support – Like HTTP/2 Application Load Balancer supports the WebSocket protocol which enables you to set up a longstanding TCP connection between a client and server This is a much more efficient method than standard HTTP connections which were usually held o pen with a sort of heartbeat which contributes to network traffic WebSocket is a great use case for delivering dynamic data like updated leaderboards while minimizing traffic and power use on a mobile device Elastic Load Balancing enables the support of WebSockets by changing the listener from HTTP to TCP However when it’s in TCP Mode Elastic Load Balancing allows the Upgrade header when a connection is established and then the Elastic Load Balancing load balancer terminates any connection that is id le for more than 60 seconds (for example a packet isn’t sent within that timeframe) This means that the client has to reestablish the connection and any WebSocket negotiation fails if the Elastic Load Balancing load balancer sends an upgrade request and establishes a WebSocket connection to other backend instances Custom load balancer Alternatively you can deploy your own load balancer to Amazon EC2 if you need specific features or metrics that Elastic Load Balancing does not provide Popular choices f or games include HAProxy and F5’s BIGIP Virtual Edition both of which can run on Amazon EC2 If you decide to use a custom load b alancer follow these recommendations: • Deploy the load balancer software (such as HAProxy) to a pair of EC2 instances each in a different Availability Zone for redundancy • Assign an Elastic IP address to each instance Create a DNS record containing both of those Elastic IP addresses as your entry point This allows DNS to round robin between your load balancer instances • If you are using Amazon Route 53 our highly available and scalable cloud Domain Name System (DNS) web service use Route 53 health checks to monitor your load balancer EC2 instances to detect failure This ensures that traffic doesn’t get routed to a load balancer that is down • If you want HAProxy to handle SSL traffic use the latest development version of HAProxy 15 or later Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 18 • If you decide to deploy your own load balancer keep in mind that there are several aspects you need to handle on your own First and foremost if your l oad surpasses what your load balancer instances can handle you need to launch additional EC2 instances and follow the previous steps to add them to your application stack In addition new auto scaled application instances aren’t automatically registered with your load balancer instances You need to write a script that updates the load balancer 
configuration files and restarts the load balancers If you are interested in HAProxy as a managed service consider AWS OpsWorks which uses Chef Automate to manage EC2 instances and can deploy HAProxy as an alternative to Elastic Load Balancing HTTP automatic scaling The ability to dynamically grow and shrink server resources in response to user patterns is a prima ry benefit of running on AWS Auto matic scaling enables you to scale the number of EC2 instances in one or more Availability Zones based on system metrics such as CPU utilization or network throughput For an overview of the functionality that Amazon EC2 Auto Scaling provides see What Is Amazon EC2 AutoScaling? and then walk through Getting Started with Amazon EC2 Auto Scaling You can use Amazon EC2 Auto Scaling with any type of EC2 instance including HTTP a game server or a background worker HTTP servers are the easiest to scale because they sit beh ind a load balancer that distributes requests across server instances Auto Scaling handles the registration or deregistration of HTTPbased instances from Elastic Load Balancing dynamically which means that traffic will be routed to a new instance as soo n as it’s available To use automatic scaling effectively choose appropriate metrics to trigger scale up and scale down activities To determine your metrics follow these guidelines: • CPUUtilization is often a good Amazon CloudWatch metric to use Web servers tend to be CPU limited whereas memory remains fairly constant when the server processes are running A higher percentage of CPU tends to show that the server is becoming overloaded with requests For finer granularity pair CPUUtilization with NetworkIn or NetworkOut Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 19 • Benchmark your servers to determine good values to scale on For HTTP servers you can use a tool such as Apache Bench or HTTPerf to measure your server response times Increase the load on your servers while monitoring CPU or other metrics Make note of the point at which your server response times degrade and see how this correlates to your system metrics • When configuri ng your Amazon EC2 Auto Scaling group choose two Availability Zones and a minimum of two servers This ensure s your game server instances are properly distributed across multiple Availability Zones for high availability Elastic Load Balancing takes care of balancing the load between multiple Availability Zones for you For details on configuring scaling policies see Dynamic Scaling for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide Installing application code When you use automatic scaling with Elastic Beanstalk Elastic Beanstalk takes care of installing your application code on new EC2 instances as they’re scaled up This is one of the advantages of the managed container that Elastic Beanstalk provides However if you’re using automatic scaling without Elastic Beanstalk you need to take care of getting your application code onto your EC2 instances to implement automatic scaling If y ou are already using Chef or Puppet consider using them to deploy application code on your instances AWS OpsWorks automatic scaling which uses Chef to configure instances provides both time based and load based automatic scaling With OpsWorks you can also set up custom startup and shutdown steps for your instances as they scale OpsWorks is a great alternative to managing automatic scaling if you’re already using Chef or if you’re interested in using Chef to manage your AWS 
resources For more inform ation see Managing Load with Time based and Load based Instances in the AWS OpsWorks User Guide If you’re not using any of these packages you can use the Ubuntu CloudInit package as a simple way to pass shell commands directly to EC2 instances You can use cloud init to run a simple shell s cript that fetches the latest application code and starts up the appropriate services This is supported by the official Amazon Linux AMI as well as the Canonical Ubuntu AMIs For more details on th ese approaches see the Running Commands on Your Linux Instance at Launch article Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 20 Game servers There are some game play scenarios that work well with an event driven RESTf ul model for example turn based play and appointment games which don't require constant real time updates can be built as stateless game servers with the techniques in the previous section Sometimes however a game server’s approach needs to be the opp osite of a RESTful approach Clients establish a stateful two way connection to the game server via UDP TCP or WebSockets enabling both the client and server to initiate messages If the network connection is interrupted the client must perform reconn ect logic and possibly logic to reset its state as well Stateful game servers introduce challenges for automatic scaling because clients can’t simply be round robin load balanced across a pool of servers Historically many games used stateful connection s and long running server processes for all of their game functionality especially in the case of larger AAA and MMO games If you have a game that is architected in this manner you can run it on AWS We offer a managed service in Amazon GameL ift that aids you in deploying operating and scaling dedicated game servers for session based multiplayer games You can also choose to run your own orchestration for game servers that uses Amazon EC2 Both are good choices depending on your requirements However for new games we encourage you to use HTTP as much as possible and only use stateful sockets for aspects of your game that really need it (such as online multiplayer) The following table lists several packages that allow you to build event driven servers Table 4: Packages to build event driven servers Language Package Nodejs Core socketio Async Python Gevent Twisted Java JBoss Netty Go Socketio Erlang Core Ruby Event Machine Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 21 C++ isn’t listed in the table because it tends to be the language of choice for multiplayer game servers Many commercial game engines such as Amazon Lumberyard and Unreal Engine are written in C++ This enables you to take exi sting game code from the client and reuse it on the server This is particularly valuable when running physics or other frameworks on the server (such as Havok) which frequently only support C++ However though there are packages that allow building even t driven services they tend to be more complex than those in the above list Also you wouldn't typically be running game simulation code in an event based service Regardless of programming language stateful socket servers generally benefit from as large an instance as possible since they are more sensitive to issues such as network latency The largest instances in the Amazon EC2 compute optimized instance family (for example c5*) are often the best options These new generation instances use enhance d networking via single root I/O 
virtualization (SR IOV) which provides high packets per second lower latency and low jitter This makes them ideal for game servers Matchmaking Matchmaking is the feature that gets players into games Typically matchm aking follows a process like the following: 1 Ask the user about the type of game they would like to join (for example deathmatch time challenge etc) 2 Look at what game modes are currently being played online 3 Factor in variables such as the user's geo location (for latency) or ping time language and overall ranking 4 Place the user on a game server that contains a matching game Games servers require long lived processes and they can't simply be round robin load balanced in the way that you can with a n HTTP request After a player is on a given server they remain on that server until the game is over which could be minutes or hours In a modern cloud architecture you should minimize your usage of long running game server processes to only those game play elements that require it For example imagine an MMO or open world shooter game Some of the functionality such as running around the world and interacting with other players requires long running game server processes However the rest of the API operations such as listing friends altering Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 22 inventory updating stats and finding games to play can easily be mapped to a REST web API In this approach game clients would first connect to your REST API and request a stateful game server Your REST API would then perform matchmaking logic and give clients an IP address and port of a server to connect to The game client then connects directly to that game server’s IP address This hybrid approach gives you the best performance for your socket servers because clients can directly connect to the EC2 instances At the same time you still get the benefits of using HTTP based calls for your main entry point For most matchmaking n eeds Amazon GameLift provides a matchmaking system called FlexMatch You would control FlexMatch via your REST API making calls to the Amazon GameLift API to initiate matching and return results You can find more information on FlexMatch in the Amazon GameLift Developer Guide If FlexMatch doesn't suit you needs for matchmaking you can find more information about implementing matchmaking in a custom serverless e nvironment in Fitting the Pattern: Serverless Custom Matchmaking with Amazon GameLift on the AWS Game Tech Blog Routing messages with Amazon SNS There are two main categories of messages in gaming: messages targeted at a specific user like private chat or trade requests and group messages such as chat or gameplay packets A common strategy for sending and receiving messages is t o use a socket server with a stateful connection If your player base is small enough so that everyone can connect to a single server you can route messages between players simply by selecting different sockets In most cases though you need to have mul tiple servers which means those servers also need some way to route messages between themselves Routing messages between EC2 server instances is one use case where Amazon SNS can help Let’s assume you had player 1 on server A who wants to send a messag e to player 2 on server C as shown in the following figure In this scenario server A could look at locally connected players and when it can’t find player 2 server A can forward the message to an SNS topic which then propagates the message to other s ervers Amazon Web 
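To make this routing flow concrete, the following sketch shows how a game server process might hand a message off to an SNS topic when the recipient isn't connected locally. The topic ARN, message fields, and the local_players registry are hypothetical; every server in the fleet would subscribe to the same topic and deliver only the messages addressed to players it hosts.

# Sketch: forward a player-to-player message through SNS when the recipient
# is not connected to this server. Topic ARN and data shapes are hypothetical.
import json
import boto3

sns = boto3.client("sns")
PLAYER_MESSAGES_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:player-messages"

def route_message(sender_id, recipient_id, payload, local_players):
    connection = local_players.get(recipient_id)
    if connection is not None:
        connection.send(payload)  # recipient is on this server: deliver directly
        return
    # Otherwise fan the message out to the rest of the fleet via SNS.
    sns.publish(
        TopicArn=PLAYER_MESSAGES_TOPIC_ARN,
        Message=json.dumps({
            "sender": sender_id,
            "recipient": recipient_id,
            "payload": payload,
        }),
    )

On the receiving side, each server inspects the recipient field and drops messages for players it doesn't host, which keeps the routing logic simple at the cost of some redundant fan-out.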
Services Introduction to Scalable Game Development Patterns on AWS 23 Figure 3: SNSbacked player to player communication between two servers Amazon SNS fills a role here that is similar to a message queue such as RabbitMQ or Apache ActiveMQ Instead of Amazon SNS you could run RabbitMQ A pache ActiveMQ or a similar package on Amazon EC2 The advantage of Amazon SNS is that you don’t have to spend time administering and maintaining queue servers and software on your own For more information about Amazon SNS see What is Amazon Simple Notification Service? and Create a Topic in the Amazon SNS Developer Guide Mobile push notifications Unlike the previous use case which is designed to handle near real time in game messaging mobile push is best choice for sending a user a message when they are out of game to draw them back in An example might be a user specific event such as a friend beating your high score or a broader game event such as a Double XP Weekend Although Amazon SNS supports the ability to send push notifications directly to mobile clients a better choice wo uld be Amazon Pinpoint which provides not just mobile push notifications but also e mail voice messages and SNS messaging allowing a player pleasing multiple channel notification solution Last thoughts o n game servers It’s easy to become obsessed with finding the perfect programming framework or pattern Both RESTful and stateful game servers have their place and any of the languages discussed previously will work well if programmed thoughtfully More Amazon Web Services Introduction to Scalable Game Development Pa tterns on AWS 24 importantly you need to spend time thinking about your overall game data architecture —where data lives how to query it and how to efficiently update it Relational vs NoSQL databases The advent of horizontally scaled applications has changed the applicat ion tier and the traditional approach of a single large relational database A number of new databases have become popular that eschew traditional Atomicity Consistency Isolation and Durability (ACID) concepts in favor of lightweight access distribute d storage and eventual consistency These NoSQL databases can be especially beneficial for games where data structures tend to be lists and sets (for example friends levels items) as opposed to complex relational data As a general rule the biggest b ottleneck for online games tends to be database performance A typical web based app has a high number of reads and few writes Think of reading blogs watching videos and so forth Games are quite the opposite with reads and writes frequently hitting th e database due to constant state changes in the game There are many database options out there for both relational and NoSQL flavors but the ones used most frequently for games on AWS are Amazon Aurora Amazon Elasti Cache for Redis Amazon DynamoDB Amaz on RDS for MySQL and Amazon DocumentDB (with MongoDB compatibility ) First we’ll cover MySQL because it’s applicable to gaming and remains very popular Combinations such as MySQL and Redis or MySQL and DynamoDB are very successful on AWS All of the da tabase alternatives described in this section support atomic operations such as increment and decrement which are crucial for gaming MySQL As an ACID compliant relational database MySQL has the following advantages: • Transactions – MySQL provides support for grouping multiple changes into a single atomic transaction that must be committed or rolled back NoSQL stores typically lack multi step transactional functionality • 
Advanced querying – Since MySQL speaks SQL this provides the flexibility to perform complex queries that evolve over time NoSQL databases typically only support access by key or a single secondary index This means you must make careful data design decisions up front Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 25 • Single source of truth – MySQL guarantees data consistency internally Part of what makes many NoSQL solutions faster is distributed storage and eventual consistency (Eventual consistency means you could write a key on one node fetch that key on another node and have it not be there immediately) • Extensive tools – MySQL h as extensive debugging and data analysis tools available for it In addition SQL is a general purpose language that is widely understood These advantages continue to make MySQL attractive especially for aspects of gaming such as account records in app purchases and similar functionality where transactions and data consistency are paramount Even gaming companies that are leveraging NoSQL offerings such as Redis and DynamoDB frequently continue to put transactional data such as accounts and purchases in MySQL If you’re using MySQL on AWS we recommend that you use Amazon RDS to host MySQL because it can save you valuable deployment and support cycles Amazon RDS for MySQL automates the time consuming aspects of database management such as launching EC2 instances configuring MySQL attaching Amazon Elastic Block Store (Amazon EBS) volumes setting up replication running nightly backups and so on In addition Amazon RDS offers advanced features including synchronous Multi AZ replication f or high availability automated primary/ replica failover and read replicas for increased performance To get started with Amazon RDS see Getting Started with Amazon RDS The following table includes some configuration options that we recommend you implement when you create your RDS MySQL DB instances Table 5: Recommended settings per env ironment Option Development/Test Production DB instance class Micro Medium or larger Multi AZ deployment No Yes (enables synchronous Multi AZ replication and failover) For best performance always launch production on an RDS DB instance that is separate from any of your Amazon RDS development/test DB instances Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 26 Option Development/Test Production Auto Minor Version Upgrade Yes Yes Allocated Storage 5 GB 100 GB minimum (to enable Provisioned IOPS) Use Provisioned IOPS N/A Yes Provisioned IOPS guarantees you a certain level of di sk performance which is important for large write loads For more information about PIOPS see Amazon RDS Provisioned IOPS Storage to Improve Performa nce Consider these additional guidelines when you create your RDS MySQL DB instances: • Schedule Amazon RDS backup snapshots and upgrades during your low player count times such as early morning If possible avoid running background jobs or nightly reports during this window to prevent a query backlog • To find and analyze slow SQL queries in production ensure you have enabled the MySQL slow query log in Amazon RDS as shown in the following list These settings are configured using Amazon RDS DB Parameter Groups Note that there is a minor performance penalty for the slow query log o Set slow_query_log = 1 to enable In Amazon RDS slow quer ies are written to the mysqlslow_log table o The value set in long_query_time determines that only queries that take longer than the specified 
number of seconds are included The default is 10 Consider decreasing this value to 5 3 or even 1 o Make sure t o periodically rotate the slow query log as described in Common DBA Tasks for MySQL DB Instances in the Amazon RDS User Guide As your game grows and your write load increases resize your RDS DB instances to scale up Resizing an RDS DB instance requires some downtime but if you deploy it in Multi AZ mode as you would for production this is limited to the time it takes to initiate a failover (typical ly a few minutes) For more information see Modifying a DB Instance Running the MySQL Database Engine in the Amazon RDS User Guide In addition you Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 27 can add one or more Amazon RDS read replicas to offload reads from your master RDS instance leaving more cycles for database writes For instructions on deploying replicas with Amazon RDS see Working with Read Replicas Amazon Aurora Amazon Aurora is a MySQL compatible relational database engine that combines the speed and availability of high end commercial databases with the simplicity and cost effectiveness of open so urce databases There are several key features that Amazon Aurora brings to a gaming workload: • High performance – Amazon Aurora is designed to provide up to 5x the throughput of standard MySQL running on the same hardware This performance is on par with c ommercial databases for a significantly lower cost On the largest Amazon Aurora instances it’s possible to provide up to 500000 reads and 100000 writes per second with 10 millisecond latency between read replicas • Data durability – In Amazon Aurora each 10 GB chunk of your database volume is replicated six ways across three Availability Zones allowing for the loss of two copies of data without affecting database write availability and three copies without affecting read availability Backups are do ne automatically and continuously to Amazon S3 which is designed for 99999999999% durability with a retention period of up to 35 days You can restore your database to any second during the retention period up to the last five minutes • Scalability – Amazon Aurora is capable of automatically scaling its storage subsystem out to 64 TB of storage This storage is automatically provisioned for you so that you don’t have to provision storage ahead of time As an added benefit this means you pay only fo r what you use reducing the costs of scaling Amazon Aurora also can deploy up to 15 read replicas in any combination of Availability Zones including cross Region where Amazon Aurora is available This allows for seamless failover in case of an instance failure The following are some recommendations for using Amazon Aurora in your gaming workload: • Use the following DB instance classes: t2small instance in you development/test environments and r3large or larger instance in you production environment • Deploy read replicas in at least one additional Availability Zone to provide for failover and read operation offloading Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 28 • Schedule Amazon RDS backup snapshots and upgrades during low player count times If possible avoid running jobs or reports against the d atabase during this window to prevent backlogging If your game grows beyond the bounds of a traditional relational database like MySQL or Amazon Aurora we recommend that you perform a performance evaluation including tuning parameters and sharding In a ddition you should look at 
using a NoSQL offering such as Redis or DynamoDB to offload some workloads from MySQL. In the following sections we'll cover a few popular NoSQL offerings.
Redis
Best described as an atomic data structure server, Redis has some unique features not found in other databases. Redis provides foundational data types such as counters, lists, sets, and hashes, which are accessed using a high-speed text-based protocol. For details on available Redis data types, see the Redis data type documentation and An introduction to Redis data types and abstractions. These unique data types make Redis an ideal choice for leaderboards, game lists, player counts, stats, inventories, and similar data. Redis keeps its entire data set in memory, so access is extremely fast. For comparisons with Memcached, see Redis Benchmarks. There are a few caveats concerning Redis that you should be aware of. First, you need a large amount of physical memory because the entire dataset is memory-resident (that is, there is no virtual memory support). Replication support is also simplistic, and debugging tools for Redis are limited. Redis is not suitable as your only data store. But when used in conjunction with a disk-backed database such as MySQL or DynamoDB, Redis can provide a highly scalable solution for game data. Redis plus MySQL is a very popular solution for gaming. Redis uses minimal CPU but lots of memory, so it's best suited to high-memory instances such as the Amazon EC2 memory optimized instance family (that is, r3*). AWS offers a fully managed Redis service, Amazon ElastiCache for Redis. ElastiCache for Redis can handle clustering, primary/replica replication, backups, and many other common Redis maintenance tasks. For a deep dive on getting the most out of ElastiCache, see the AWS whitepaper Performance at Scale with Amazon ElastiCache.
MongoDB
MongoDB is a document-oriented database, which means that data is stored in a nested data structure similar to a structure you would use in a typical programming language. MongoDB uses a binary variant of JSON called BSON for communication, which makes programming against it a matter of storing and retrieving JSON structures. This has made MongoDB popular for games and web applications, since server APIs are usually JSON too. MongoDB also offers a number of interesting hybrid features, including a SQL-like syntax that enables you to query data by range and composite conditions. MongoDB supports atomic operations such as increment/decrement and add/remove from list; this is similar to Redis support for these operations. For examples of atomic operations that MongoDB supports, see the MongoDB documentation on findAndModify. MongoDB is widely used as a primary data store for games and is frequently used in conjunction with Redis, since the two complement each other well. Transient game data, sessions, leaderboards, and counters are kept in Redis, and then progress is saved to MongoDB at logical points (for example, at the end of a level or when a new achievement is unlocked). Redis yields high-speed access for latency-sensitive game data, and MongoDB provides simplified persistence. MongoDB supports native replication and sharding as well, although you do have to configure and monitor these features yourself. For an in-depth look at deploying MongoDB on AWS, see the AWS whitepaper MongoDB on AWS. Amazon DocumentDB (with MongoDB compatibility) is a fully managed document database service that supports MongoDB workloads. It's designed for high availability and performance at scale, and is highly secure.
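To make the atomic-operation point concrete, here is a minimal sketch using the PyMongo driver to increment a player's currency and append an inventory item in a single server-side update. The connection string, database, collection, and field names are all hypothetical placeholders, not part of the original text.

# Sketch: atomic increment and list append with PyMongo. The connection
# string, database, collection, and field names are hypothetical.
from pymongo import MongoClient, ReturnDocument

client = MongoClient("mongodb://game-db.example.internal:27017")
players = client.game.players

def grant_reward(player_id, gold, item):
    # find_one_and_update applies both modifications atomically on the server.
    return players.find_one_and_update(
        {"_id": player_id},
        {"$inc": {"gold": gold}, "$push": {"inventory": item}},
        upsert=True,
        return_document=ReturnDocument.AFTER,
    )

Because the update happens server-side, two game servers granting rewards to the same player at the same time won't clobber each other's writes.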
Amazon DynamoDB
Finally, DynamoDB is a fully managed NoSQL solution provided by AWS. DynamoDB manages tasks such as synchronous replication and I/O provisioning for you, in addition to automatic scaling and managed caching. DynamoDB uses a Provisioned Throughput model, where you specify how many reads and writes you want per second and the rest is handled for you under the hood. To set up DynamoDB, see the Getting Started Guide. Games frequently use DynamoDB features in the following ways:
• Key-value store for user data, items, friends, and history
• Range key store for leaderboards, scores, and date-ordered data
• Atomic counters for game status, user counts, and matchmaking
Like MongoDB and MySQL, DynamoDB can be paired with a technology such as Redis to handle real-time sorting and atomic operations. Many game developers find DynamoDB to be sufficient on its own, but the point is you still have the flexibility to add Redis or a caching layer to a DynamoDB-based architecture. Let's revisit our reference diagram with DynamoDB to see how it simplifies the architecture.
Figure 4: A fully production-ready game backend running on AWS using DynamoDB
Table structure and queries
DynamoDB, like MongoDB, is a loosely structured NoSQL data store that allows you to save different sets of attributes on a per-record basis. You only need to predefine the primary key strategy you're going to use:
• Partition key – The partition key is a single attribute that DynamoDB uses as input to an internal hash function. This could be a player name, game ID, UUID, or similar unique key. Amazon DynamoDB builds an unordered hash index on this key.
• Partition key and sort key – Referred to as a composite primary key, this type of key is composed of two attributes: the partition key and the sort key. DynamoDB uses the partition key value as input to an internal hash function, and all items with the same partition key are stored together, in sorted order by sort key value. For example, you could store game history as a duplet of [user_id, last_login]. Amazon DynamoDB builds an unordered hash index on the partition key attribute and a sorted range index on the sort key attribute. Only the combination of both keys is unique in this scenario.
For best querying performance, you should maintain each DynamoDB table at a manageable size. For example, if you have multiple game modes, it's better to have a separate leaderboard table for each game mode rather than a single giant table. This also gives you the flexibility to scale your leaderboards separately in the event that one game mode is more popular than the others.
Provisioned throughput
DynamoDB shards your data behind the scenes to give you the throughput you've requested. DynamoDB uses the concept of read and write units. One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size. One write capacity unit represents one write per second for an item up to 1 KB in size. The defaults are 5 read and 5 write units, which means 20 KB of strongly consistent reads/second and 5 KB of writes/second. You can increase your read and/or write capacity at any time, by any amount, up to your account limits. You can also decrease the read and/or write capacity by any amount, but you can't apply more than four decreases in one day.
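As a sketch of how these pieces fit together, the following boto3 calls create a game-history table keyed on the [user_id, last_login] pair described above and bump an atomic player counter. The table names, attributes, and capacity values are hypothetical examples, not prescriptions.

# Sketch: a composite-key table plus an atomic counter with boto3.
# Table names, attributes, and capacity values are hypothetical.
import boto3

dynamodb = boto3.client("dynamodb")

# Game history keyed on user_id (partition key) and last_login (sort key).
dynamodb.create_table(
    TableName="GameHistory",
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "last_login", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},      # partition key
        {"AttributeName": "last_login", "KeyType": "RANGE"},  # sort key
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

# Atomic counter: increment a lobby's player count without a read-modify-write.
dynamodb.update_item(
    TableName="GameStatus",
    Key={"game_id": {"S": "lobby-1"}},
    UpdateExpression="ADD player_count :one",
    ExpressionAttributeValues={":one": {"N": "1"}},
)

History lookups then use Query against the composite key rather than a scan, and the counter update is a single write regardless of how many servers increment it.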
Scaling can be done using the AWS Management Console or AWS CLI by selecting the table and modifying it appropriately. You can also take advantage of DynamoDB Auto Scaling by using the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf in response to actual traffic patterns. DynamoDB Auto Scaling works in conjunction with Amazon CloudWatch alarms that monitor the capacity units; it scales according to your defined rules. There is a delay before the new provisioned throughput is available while data is repartitioned in the background. This doesn't cause downtime, but it does mean that DynamoDB scaling is best suited for changes over time, such as the growth of a game from 1,000 to 10,000 users. It isn't designed to handle hourly user spikes. For this, as with other databases, you need to leverage some form of caching to add resiliency. To get the best performance from DynamoDB, make sure your reads and writes are spread as evenly as possible across your keys. Using a hexadecimal string such as a hash key or checksum is one easy strategy to inject randomness. For more details on optimizing DynamoDB performance, see Best Practices for DynamoDB in the Amazon DynamoDB Developer Guide.
Amazon DynamoDB Accelerator (DAX)
DAX allows you to provision a fully managed in-memory cache for DynamoDB that speeds up the responsiveness of your DynamoDB tables from millisecond-scale latency to microseconds. This acceleration comes without requiring any major changes in your game code, which simplifies deployment into your architecture. All you have to do is re-initialize your DynamoDB client with a new endpoint that points to DAX, and the rest of the code can remain untouched. DAX handles cache invalidation and data population without your intervention. This cache can help speed responsiveness when running events that might cause a spike in players, such as a seasonal DLC offering or a new patch release.
Other NoSQL options
There are a number of other NoSQL alternatives, including Riak, Couchbase, and Cassandra. You can use any of these for gaming, and there are examples of gaming companies using them on AWS with success. As with choosing a server programming language, there is no perfect database; you need to weigh the pros and cons of each one.
Caching
For gaming, adding a caching layer in front of your database for frequently used data can alleviate a significant number of scalability problems. Even a short-lived cache of just a few seconds for data such as leaderboards, friend lists, and recent activity can greatly offload your database. Adding cache servers is also cheaper than adding additional database servers, so it lowers your AWS costs as well. Memcached is a high-speed, memory-based key-value store that is the gold standard for caching. Redis features similar performance to Memcached, plus Redis has advanced data types. Both options perform well on AWS. You can choose to install Memcached or Redis on EC2 instances yourself, or you can use Amazon ElastiCache, the AWS managed caching service. Like Amazon RDS and DynamoDB, ElastiCache completely automates the installation, configuration, and management of Memcached and Redis on AWS. For more details on setting up ElastiCache, see Getting Started with Amazon ElastiCache in the Amazon ElastiCache User Guide. ElastiCache groups servers in a cluster to simplify management.
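The read path for this kind of caching layer usually follows the lazy-population (cache-aside) pattern discussed in the next paragraphs. Here is a minimal sketch against an ElastiCache for Redis endpoint using the redis-py client; the endpoint, key naming, TTL, and the get_player_from_db() database helper are all hypothetical.

# Sketch of the cache-aside read path using the redis-py client. The
# endpoint, key naming, TTL, and database helper are hypothetical.
import json
import redis

cache = redis.Redis(host="my-cache.xxxxxx.0001.use1.cache.amazonaws.com", port=6379)
CACHE_TTL_SECONDS = 30  # even a short TTL noticeably offloads the database

def get_player_from_db(player_id):
    # Placeholder for your real database query (RDS, Aurora, DynamoDB, etc.).
    return {"player_id": player_id, "level": 1, "gold": 0}

def get_player_profile(player_id):
    key = f"player:{player_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit
    profile = get_player_from_db(player_id)    # cache miss: go to the database
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(profile))
    return profile

A write-through variant would instead call cache.setex() whenever the profile is saved, trading extra cache writes for fewer misses; that is the second strategy described below.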
Most ElastiCache operations, like configuration, security, and parameter changes, are performed at the cache cluster level. Despite the use of the cluster terminology, ElastiCache nodes do not talk to each other or share cache data. ElastiCache deploys the same versions of Memcached and Redis that you would download yourself, so existing client libraries written in Ruby, Java, PHP, Python, and so on are completely compatible with ElastiCache. The typical approach to caching is known as lazy population or cache-aside. This means that the cache is checked, and if the value is not in cache (a cache miss), the record is retrieved, stored in cache, and returned. A typical implementation checks ElastiCache for a value, queries the database if the cache doesn't have it, and then stores the value back to ElastiCache for subsequent queries; the Redis sketch shown earlier follows this pattern. Lazy population is the most prevalent caching strategy because it only populates the cache when a client actually requests the data. This way, it avoids extraneous writes to the cache in the case of records that are infrequently (or never) accessed, or that change before being read. This pattern is so ubiquitous that most major web development frameworks, such as Rails, Django, and Grails, include plugins that wrap this strategy. The downside to this strategy is that when data changes, the next client that requests it incurs a cache miss, which means that their response time is slower because the new record needs to be queried from the database and populated into cache. This downside leads us to the second most prevalent caching strategy. For data that you know will be accessed frequently, populate the cache when records are saved to avoid unnecessary cache misses. This means that client response times will be faster and more uniform. In this case, you simply populate the cache when you update the record, rather than when the next client queries it. The tradeoff here is that it could result in an unnecessarily high number of cache writes if your data is changing rapidly. In addition, writes to the database can appear slower to users, since the cache also needs to be updated. To choose between these two strategies, you need to know how often your data is changing versus how often it's being queried.
The final popular caching alternative is a timed refresh. This is beneficial for data feeds that span multiple different records, such as leaderboards or friend lists. In this strategy, you would have a background job that queries the database and refreshes the cache every few minutes. This decreases the write load on your cache and enables additional caching to happen upstream (for example, at the CDN layer) because pages remain stable longer.
Amazon ElastiCache scaling
ElastiCache simplifies the process of scaling your cache instances up and down. ElastiCache provides access to a number of Memcached metrics in CloudWatch at no additional charge. You should set CloudWatch alarms based on these metrics to alert you to cache performance issues. You can configure these alarms to send emails when the cache memory is almost full or when cache nodes are taking a long time to respond. We recommend that you monitor the following metrics:
• CPUUtilization – How much CPU Memcached or Redis is using. Very high CPU could indicate an issue.
• Evictions – The number of keys that have to be forced out of memory due to lack of space. This should be zero; if it's not near zero, you need a larger ElastiCache instance.
• GetHits/CacheHits and GetMisses/CacheMisses – How
frequently does your cache have the keys you need? The higher percentage of hits the more you’re offloading your database • CurrConnections – The number of clients that are currently connected (this depends on the application) In general monitoring hits misses and evictions is sufficient for most appl ications If the ratio of hits to misses is too low you should revisit your application code to make sure your cache code is working as expected As mentioned typically evictions should be zero 100 percent of the time If evictions are nonzero either sc ale up your ElastiCache nodes to provide more memory capacity or revisit your caching strategy to ensure you’re only caching what you need to cache Additionally you can configure your cache node cluster to span multiple Availability Zones to provide hig h availability for your game’s caching layer This ensures that in the event of an Availability Zone being unavailable your database is not overwhelmed by a sudden spike in requests When creating a cache cluster or adding nodes to an existing cluster yo u can chose the Availability Zones for the new nodes You can either specify Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 35 the requested number of nodes in each Availability Zone or select the option to spread nodes across zones With Amazon ElastiCache for Redis you can create a read replica in anoth er Availability Zone Upon a failure of the primary node AWS provisions a new primary node In scenarios where the primary node cannot be provisioned you can decide which read replica to promote to be the new primary ElastiCache for Redis also supports Sharded Cluster with supported Redis Engines version 3 or higher You can create clusters with up to 15 shards expanding the overall inmemory data store to more than 35 TiB Each shard can have up to 5 read replicas giving you the ability to handle 20 million reads and 45 million writes per second The sharded model in conjunction with the read replicas improves overall performance and availability Data is spread across multiple nodes and the read replicas support rapid automatic failover in the ev ent that a primary node has an issue To take advantage of the sharded model you must use a Redis client that is cluster aware The client will treat the cluster as a hash table with 16384 slots spread equally across the shards and will then map the inc oming keys to the proper shard ElastiCache for Redis treats the entire cluster as a unit for backup and restore purposes You don’t have to think about or manage backups for the individual shards Binary game content with Amazon S3 Your database is respons ible for storing user data including accounts stats items purchases and so forth But for game related binary data Amazon S3 is a better fit Amazon S3 provides a simple HTTP based API to PUT (upload) and GET (download) files With Amazon S3 you pay only for the amount of data that you store and transfer Using Amazon S3 consists of creating a bucket to store your data in and then making HTTP requests to and from that bucket For a walkthrough of the proces s see Create a Bucket in the Amazon S3 Getting Started Guide Amazon S3 is ideally suited for a variety of gaming use cases including the following: • Content downloads – Game assets maps patches and betas • User generated files – Photos avatars user created levels and device backups • Analytics – Storing metrics device logs and usage patterns Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 36 • Cloud saves – 
Game save data and syncing between devices ( AWS AppSync would be a good choice as well) Although you can store this type of data in a database using Amazon S3 has a number of advantages including the following: • Storing binary data in a DB is memory and dis k intensive consuming valuable query resources • Clients can directly download the content from Amazon S3 using a simple HTTP/S GET • Designed for 99999999999% durability and 9999% availability of objects over a given year • Amazon S3 natively supports feat ures such as ETag authentication and signed URLs • Amazon S3 plugs into the Amazon CloudFront CDN for distributing content quickly to large numbers of clients With these factors in mind let’s look at th e aspects of Amazon S3 that are most relevant for gaming Content delivery and Amazon CloudFront Downloadable content (DLC) is a huge aspect of modern games from an engagement perspective and it is becoming a primary revenue stream Users expect an ongoin g stream of new characters levels and challenges for months —if not years —after a game’s release Being able to deliver this content quickly and cost effectively has a big impact on the profitability of a DLC strategy Although the game client itself is t ypically distributed through a given platform’s app store pushing a new version of the game just to make a new level available can be onerous and time consuming Promotional or time limited content such as Halloween themed assets or a long weekend tourna ment are usually easier to manage yourself in a workflow that mirrors the rest of your server infrastructure If you’re distributing content to a large number of clients (for example a game patch expansion or beta) we recommend that you use Amazon Cl oudFront in front of Amazon S3 CloudFront has points of presence (POPs) located throughout the world which improves download performance In addition you can configure which Regions Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 37 CloudFront serves to optimize your costs For more information see the CloudFront FAQ in particular How does CloudFront lower my costs? 
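As a sketch of the upload-and-distribute flow, the following boto3 calls publish an asset to S3 and produce a time-limited download URL for clients. The bucket and key names are hypothetical, and in production the bucket would typically sit behind a CloudFront distribution as described above.

# Sketch: publish a game asset to S3 and hand out a time-limited download
# URL. Bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")
BUCKET = "mygame-dlc"

def publish_asset(local_path, key):
    # For example: publish_asset("halloween_pack.zip", "dlc/halloween_pack.zip")
    s3.upload_file(local_path, BUCKET, key)

def asset_download_url(key, expires_seconds=3600):
    # Pre-signed URLs let clients fetch directly from S3 without AWS credentials.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=expires_seconds,
    )

Handing clients a pre-signed URL keeps large downloads off your REST API tier while still letting you control who can fetch each object and for how long.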
Finally if you anticipate significant CloudFront usage you should contact our CloudFront sales team because Amazon offers reduced pricing that is even lower than our on demand pricing for high usage customers Easy versioning with ETag As mentioned earlier Amazon S3 supports HTTP ETag and the If None Match HTTP header which are well known to web developers but frequently overlooked by game developers These headers enable you to send a request for a piece of Amazon S3 content and include the MD5 checksum of the version you already have If you already have the latest version Amazon S3 responds with an HTTP 304 Not Modified or HTTP 200 along with the file data if you need it Leveraging ETa g in this manner makes any future use of CloudFront more powerful because CloudFront also supports the Amazon S3 ETag For more information see Request and Response Behavior for Amazon S3 Origins in the Amazon CloudFront Developer Guide Finally you also have the ability to Geo Target or Restrict access to your content through CloudFront’s Geo Targeting feature Amazon CloudFront dete cts the country where your customers are located and will forward the country code to your origin servers This allows your origin server to determine the type of personalized content that will be returned to the customer based on their geographic location This content could be anything from a localized dialog file for an RPG to localized asset packs for your game Uploading content to Amazon S3 Our other gaming use cases for Amazon S3 revolve around uploading data from the game be it user generated conte nt analytics or game saves There are two strategies for uploading to Amazon S3: either upload directly to Amazon S3 from the game client or upload by first posting to your REST API servers and then have your REST servers upload to Amazon S3 Although both methods work we recommend uploading directly to Amazon S3 if possible since this offloads work from your REST API tier Uploading directly to Amazon S3 is straightforward and can even be accomplished directly from a web browser For more informatio n see Browser Based Uploads Using POST (AWS Signature Version 2) in the Amazon S3 Developer Guide You can even Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 38 create secure URLs for players to upload content ( such as from an out of game tool) using pre signed URLs To protect against corruption you should also consider calculating an MD5 checksum of the file and including it in the Content MD5 header This approach enable s Amazon S3 to automatically verify the file was not corrupted during upload For more information see PUT Object in the Amazon S3 API Reference User generated content (UGC) is a great use case for uploading data to Amazon S3 A typical piece of UGC has two parts: binary content (for example a graphic asset) and its metadata (for example name date author tags etc) The us ual pattern is to store the binary asset in Amazon S3 and then store the metadata in a database Then you can use the database as your master index of available UGC that others can download The following figure shows an example call flow that you can use to upload UGC to Amazon S3 Figure 5: A simple workflow for transfer of game content In this example first you PUT the binary game asset (for example avatar level etc) to Amazon S3 which creates a new object in Amazon S3 After you receive a success response from Amazon S3 you make a POST request to our REST API layer with the metadata for that asset The REST API needs to have a 
service that accepts the Amazon S3 key name plus any metadata you want to keep an d then it stores the key name and the metadata in the database The game’s other REST services can then query the database to find new content popular downloads and so on This simple call flow handles the case where the asset data is stored verbatim in Amazon S3 which is usually true of user generated levels or characters This same Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 39 pattern works for game saves as well —store the game save data in Amazon S3 and then index it in your database by user_id date and any other important metadata If you nee d to do additional processing of an Amazon S3 upload (for example generating preview thumbnails) make sure to read the section on Asynchronous Jobs later in this book In that section we’ll discuss adding Amazon SQS to queue jobs to handle these types o f tasks Analytics and A/B testing Collecting data about your game is one of the most important things you can do and one of the easiest as well Perhaps the trickiest part is deciding what to collect You should consider keeping track of any reasonable m etrics you can think of for a user (for example total hours played favorite characters or items current and highest level etc) If you aren’t sure what to measure or if you have a client that is not updated easily Amazon S3 is a popular choice for st oring raw metrics data as it can be very cost effective However if you are able to formulate questions that you want answered beforehand or if client updates are simple to distribute you can focus on gathering the data that help you answer those specifi c questions After you’ve identified the data follow these steps to track it: 1 Collect metrics in a local data file on the user’s device (for example mobile console PC etc) To make things easier later we recommend using a CSV format and a unique fil ename For example a given user might have their data tracked in 241 game_name user_idYYYYMMDDHHMMSScsv or something similar 2 Periodically persist the data by having the client upload the metrics file directly to Amazon S3 Or you can integrate with Ama zon Kinesis and adopt a loosely coupled architecture as we discussed previously When you go to upload a given data file to Amazon S3 open a new local file with a new file name This keeps the upload loop simple Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 40 3 For each file you upload put a record so mewhere indicating that there’s a new file to process Amazon S3 event notifications provide an excellent way to support this To enable notifications you must first ad d a notification configuration identifying the events you want Amazon S3 to publish such as a file upload and the destinations where you want Amazon S3 to send the event notifications We recommend Amazon SQS because you can then have a background worker listening to Amazon SQS for new files and processing them as they arrive For more details see the Amazon SQS section in this book 4 As part of a background job process the data using a framework such as Amazon EMR or other framework that you choose to run on Amazon EC2 This background process can look at new data files that have been uploaded since the last run and perform aggregation or other operations on the data (Note tha t if you’re using Amazon EMR you may not need step #3 because Amazon EMR has built in support for streaming new files) 5 Optionally feed the data into Amazon Redshift for additional data 
warehousing and ana lytics flexibility Amazon Redshift is an ANSI SQL compliant columnar data warehouse that you pay for by the hour This enables you to perform queries across large volumes of data such as sums and min/max using familiar SQLcompliant tools Repeat these steps in a loop uploading and processing data asynchronously The following figure shows how this pattern works Figure 6: A simple pipeline for analytics and A/B testing Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 41 For both analytics and A/B testing the data flow tends to be unidirectional That is metrics flow in from users are processed and then a human makes decisions that affect future content releases or game features Using A/B testing as an example when you present users with different items screens and so f orth you can make a record of the choice they were given along with their subsequent actions (for example purchase cancel etc) Then periodically upload this data to Amazon S3 and use Amazon EMR to create reports In the simplest use case you coul d just generate cleaned up data from Amazon EMR in CSV format into another Amazon S3 bucket and then load this into a spreadsheet program For more information on analytics and Amazon EMR see Data Lakes and Analytics on AWS and the Amazon EMR Documentation Amazon Athena Gleaning insights quickly and cheaply is one of the best ways that developers can improve on their games Traditionally this has been relatively difficult because data normally has to be extracted from game application servers stored somewhere transformed and then loaded into a database in order to be queried later This process can take a significa nt amount of time and compute resources greatly increasing the cost of running such tasks Amazon Athena assists with your analytical pipeline by providing the means of querying data stored in Amazon S3 using standard SQL Because Athena is serverless th ere is no infrastructure to provision or manage and generally there is no requirement to transform data before applying a schema to start querying However keep the following points in mind to optimize performance while using Athena for your queries: • Ad hoc queries – Because Athena is priced at a base of $5 per TB of data scanned this means that you incur no charges when there aren’t any queries being run Athena is ideally suited for running queries on an ad hoc basis when information must be gleaned fr om data quickly without running an extract transform and load (ETL) process first • Proper partitioning – Partitioning data divides tables into parts that keep related entries together Partitions act as virtual columns You define them at table creation and they can help reduce the amount of data scanned per query thereby improving performance and reduci ng the cost of any particular query You can restrict the amount of data scanned by a query by specifying filters based on the partition Amazon Web Services Introduction to Scalable Game Developmen t Patterns on AWS 42 For example in the following query: SELECT count(*) FROM lineitem WHERE l_gamedate = '2019 1031' A non partitione d table would have to scan the entire table looking through potentially millions of records and gigabytes of data slowing down the query and adding unnecessary cost A properly partitioned table can help speed queries and significantly reduce cost by cutting the amount of data queried by Athena For a detailed example see Top 10 Performance Tuning Tips for Amazon Athena on the AWS Big Data Blog • Com pression – 
Just like partitioning, proper compression of data can help reduce network load and costs by reducing data size. It's also best to make sure that the compression algorithm you choose allows for splittable files, so Athena's execution engine can increase parallelism for additional performance.
• Presto knowledge – Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes, ranging from gigabytes to petabytes. Athena uses Presto, and therefore understanding Presto can help you to optimize the various queries that you run on Athena. For example, the ORDER BY clause returns the results of a query in sort order. To do the sort, Presto must send all rows of data to a single worker and then sort them. This could cause memory pressure on Presto, which could cause the query to take a long time to execute; worse, the query could fail. If you are using the ORDER BY clause to look at the top or bottom N values, use a LIMIT clause to reduce the cost of the sort significantly by pushing the sorting and limiting to individual workers, rather than having the sorting done by a single worker.
Amazon S3 performance considerations
Amazon S3 can scale to tens of thousands of PUTs and GETs per second. To achieve this scale, there are a few guidelines you must follow to get the best performance out of Amazon S3. First, as with DynamoDB, make sure that your Amazon S3 key names are evenly distributed, because Amazon S3 determines how to partition data internally based on the first few characters in the key name. Let's assume your bucket is called mygame-ugc and you store files based on a sequential database ID:

http://mygame-ugc.s3.amazonaws.com/10752.dat
http://mygame-ugc.s3.amazonaws.com/10753.dat
http://mygame-ugc.s3.amazonaws.com/10754.dat
http://mygame-ugc.s3.amazonaws.com/10755.dat

In this case, all of these files would likely live in the same internal partition within Amazon S3, because the keys all start with 107. This limits your scalability because it results in writes that are sequentially clustered together. A solution is to use a hash function to generate the first part of the object name in order to randomize the distribution of names. One strategy is to use an MD5 or SHA1 hash of the filename and prefix the Amazon S3 key with that, as shown in the following example:

http://mygame-ugc.s3.amazonaws.com/988-10752.dat
http://mygame-ugc.s3.amazonaws.com/483-10753.dat
http://mygame-ugc.s3.amazonaws.com/d5d-10754.dat
http://mygame-ugc.s3.amazonaws.com/06f-10755.dat

Here's a variation with a Python SHA1 example:

#!/usr/bin/env python
import hashlib

# Prefix the key with the first three hex characters of the filename's SHA1
# hash to spread keys evenly across S3 partitions.
sha1 = hashlib.sha1(filename.encode("utf-8")).hexdigest()[0:3]
path = sha1 + "-" + filename

For more information about maximizing S3 performance, see Best Practices Design Patterns: Optimizing Amazon S3 Performance in the Amazon S3 Developer Guide. If you anticipate a particularly high PUT or GET load, file an AWS Support ticket.
Loosely coupled architectures with asynchronous jobs
Loosely coupled architectures that involve decoupling components refer to the concept of designing your server components so that they can operate as independently as possible. A common approach is to put queues between services, so that a sudden burst of activity on one part of your system doesn't cascade to other parts. Some aspects of gaming are difficult to decouple because data needs to be as up to date as possible to provide a good
matchmaking and gameplay experience for players However most data such as cosmetic or character data doesn’t have to be up tothe millisecond Leaderboards and avatars Many gam ing tasks can be decoupled and handled in the background For example the task of a user updating his stats must be done in real time so that if a user exits and then re enters the game they won’t lose progress However re ranking the global top 100 le aderboard isn’t required every time a user posts a new high score Most users appear far down the leaderboard Instead the ranking process c an be decoupled from score posting and performed in the background every few minutes This approach has minimal im pact on the game experience because game ranks are highly volatile in any active online game As another example consider allowing users to upload a custom avatar for their character In this case your front end servers put a message into a queue such as Amazon SQS about the new avatar upload You write a background job that runs periodically pulls avatars off the queue processes them and marks them as available in MySQL Aurora DynamoDB or whatever database you’re using The background job runs on a different set of EC2 instances which can be set up to auto matically scale just like your front end servers To help you get started quickly Elastic Beanstalk provides worker environments that simplify this process by managing the Amazon SQS queue and running a daemon process on each instance that reads from the queue for you This approach is an effective way to decouple your front end servers fr om backend processing and it enables you to scale the two independently For example if the image resizing is taking too long you can add additional job instances without needing to scale your REST servers too Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 45 The rest of this section focuses on Amazo n SQS Note that you could implement this pattern with an alternative such as RabbitMQ or Apache ActiveMQ deployed to Amazon EC2 instead Amazon SQS Amazon SQS is a fully managed queue solution with a long pollin g HTTP API This makes it easy to interface with regardless of the server languages you’re using To get started with Amazon SQS see Gettin g Started with Amazon SQS in the Amazon SQS Developer Guide Here are a few tips to best use Amazon SQS: • Create your SQS queues in the same Region as your API servers to make writes as fast as possible Your asynchronous job workers can live in any Region because they are not time dependent This enables you to run API servers in Regions near your users and job instances in more economical Regions • Amazon SQS is designed to scale horizontally A given Amazon SQS client can process about 50 requests a second The more Amazon SQS client processes you add the more messages you can process concurrently For tips on adding additional worker processes and EC2 instances see Increasing Throughput with Horizontal Scaling and Action Batching in the Amazon SQS Developer Guide • Consider using Amazon EC2 Spot Instances for your job workers to save money Amazon SQS is designed to redeliver messages that aren’t explicitly deleted which protects against EC2 instances going away mid job Make sure to delete messages only after you have completed processing them This enables an other EC2 instance to retry the job if a given instance fails while running • Consider message visibility which you can think of this as the redelivery time if a message is not deleted The default is 30 seconds You may 
need to increase this if you have l ongrunning jobs to avoid multiple queue readers from receiving the same message • Amazon SQS also supports dead letter queues A dead letter queue is a queue that other (source) queues can target for messages that can't be processed (consumed) successfull y You can set aside and isolate these messages in the dead letter queue to determine why their processing doesn't succeed Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 46 In addition Amazon SQS has the following caveats: • Messages are not guaranteed to arrive in order You may receive messages in rando m order (for example 2 3 5 1 7 6 4 8) If you need strict ordering of messages see the following FIFO Queues section • Messages typically arrive quickly but occasionally a message may be delayed by a few minutes • Messages can be duplicated and it's the responsibility of the client to de duplicate Taken together this information means that you must make sure your asynchronous jobs are coded to be idempotent and resilient to delays Resizing and replacing an avatar is a good example of idempotence because doing that twice would yield the same result Finally if your job workload scales up and down over time (for examp le perhaps more avatars are uploaded when more users are online) consider using Auto Scaling to Launch Spot Instances Amazon SQS offers a number of metr ics that you can automatically scale on the best being ApproximateNumberOfMessagesVisible The number of visible messages is basically your queue backlog For example depending on how many jobs you can process each minute you could scale up if this reaches 100 and then scale back down when it falls below 10 For more information about Amazon SQS and Amazon SNS metrics see Amazon SNS Metric s and Dimensions and Amazon SQS Metrics and Dimensions in the Amazon CloudWatch User Guide FIFO queues Although the recommended method of using Amazon SQS is to engineer and architect for your application to be resilient to duplication and misordering yo u may have certain tasks where the ordering of messages is absolutely critical to proper functioning and duplicates can’t be tolerated For example micro transactions where a user wants to buy a particular item once and only once and this action must be strictly regulated To supplement this requirement First InFirstOut (FIFO) queues are available in select AWS Regions FIFO queues provide the ability to process messages both in order and exactly once There are additional limitations when working wi th FIFO queues due to the emphasis on message order and delivery For more details about FIFO queues see FIFO (First InFirstOut) Queues in the Amazon SQS Developer Guide Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 47 Other queue options In addition to Amazon SQS and Amazon SNS there are dozens of other approaches to message queues that can run effectively on Amazon EC2 such as RabbitMQ ActiveMQ and Redis With all of these you are responsible for launching a set of EC2 instances and configuring them yourself which is outside the scope of this book Keep in mind that running a reliable queue is much like running a highly available database: you need to consider high throughp ut disk (such as Amazon EBS PIOPS) snapshots redundancy replication failover and so forth Ensuring the uptime and durability of a custom queue solution can be a time consuming task and can fail at the worst times like during your highest load peaks Cost of the cloud With AWS you no longer 
need to dedicate valuable resources to building costly infrastructure including purchasing servers and software licenses or leasing facilities With AWS you can replace large upfront expenses with lower variable costs and pay only for what you use and for as long as you need it All AWS services are available on demand and don’t require long term contracts or complex licensing dependencies Some of the advantages of AWS include the following: • OnDemand Instances – AWS offers a pay asyougo approach for over 70 cloud services enabling game developers to deploy both quickly and cheaply as their game gains users Like the utilities that provide power or water you pay only what you consume and once you stop using them there are no additional costs • Reserved Instances – Some AWS services like Amazon EC2 allow you to enter into a 1 or 3 year agreement in order to gain additional savings on the on demand cost of these services With Amazon EC2 in particular you can choose to pay either no upfront cost for an exchange in reduced hourly cost or pay all upfront for additional savings over the year (no hourly costs) • Spot Instances – Amazon EC2 Spot Instances enable you to bid on spare Amazon EC2 capacity as a method of significantly reducing your computing spend Spot Instances are great for applications that are tolerant to workload interruptions; some use cases include batch processing and analytics pipelines that aren’t critical to your primary game functioning Amazon Web Services Introduction to Scalable Game Development Patterns on AWS 48 • Savings Plans – Savings Plans is a flexible pricing model that provides savings of up to 72% on your AWS compute usage This pricing model offers lower prices than On Demand in exchange for a commitment to use a specific amount of computer power for a one or three year period • Serverless model – Some other services like AWS Lambda are more granular in their approach to pricing Instead of being pay bythehour they are billed in either very small units of time li ke milliseconds or by request count instead of time This allows you to truly pay for only what you use instead of leaving a service up but idle and accruing costs Conclusion and next steps We've covered a lot of ground in this book Let's revisit the major takeaways and some simple steps you can take to begin your game’s journey on AWS: • Start simple with two EC2 instances behind an Elastic Load Balancing load balancer Choose either Amazon RDS or Amazon DynamoDB as your database Consider using AWS Elastic Beanstalk to manage this backend stack • Store binary content such as game data assets and patches on Amazon S3 Using Amazon S3 offloads network intensive downloads from your game servers Consider CloudFront if you’re distributing these assets glo bally • Always deploy your EC2 instances and databases to multiple Availability Zones for best availability This is as easy as splitting your instances across two Availability Zones to begin with • Add caching via ElastiCache as your server load grows Crea te at least one ElastiCache node in each Availability Zone where you have application servers • As the load grow s offload time intensive operations to background tasks by using Amazon SQS or another queue such as RabbitMQ This enables your EC2 app instanc es and database to handle a higher number of concurrent players • If database performance becomes an issue add read replicas to spread the read/write load out Evaluate whether a NoSQL store such as DynamoDB or Redis could be added to handle certain databa se 
tasks.
• At extreme loads, advanced strategies such as event-driven servers or sharded databases may be necessary. However, wait to implement these until they are truly needed, since they add complexity to development, deployment, and debugging.
Finally, remember that Amazon Web Services has a team of business and technical people dedicated to supporting our gaming customers. To contact us, fill out the form at the AWS Game Tech website.
Contributors
Contributors to this document include:
• Greg McConnel, Sr. Manager, AWS Security and Identity Compliance
• Keith Lafaso, Sr. Technical Account Manager, AWS Enterprise Support
• Chris Blackwell, Sr. Software Development Engineer, AWS Marketing
Further reading
For additional information, see:
• AWS Game Tech website
• AWS Marketplace
• AWS Support
• AWS Architecture Center
• AWS Whitepapers & Guides
• AWS Documentation, Blog Posts, and Articles
• Best Practices in Evaluating Elastic Load Balancing
• Top 10 Performance Tuning Tips for Amazon Athena
• Best Practices for Amazon EMR
• Fitting the Pattern: Serverless Custom Matchmaking with Amazon GameLift
• Performance at Scale with Amazon ElastiCache
• MongoDB on AWS: Guidelines and Best Practices
Document revisions
Date – Description
December 2019 – First publication
March 11, 2021 – Reviewed for technical accuracy
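To make the pricing trade-offs discussed earlier in this section concrete, the short Python sketch below compares an illustrative monthly bill under the On-Demand, Reserved, and Spot models. The hourly rates, fleet size, and utilization figure are hypothetical placeholders, not published AWS prices; always check the current pricing pages before planning capacity.

```python
# Illustrative cost comparison only: the hourly rates below are hypothetical
# placeholders, not published AWS prices.
HOURS_PER_MONTH = 730

ON_DEMAND_RATE = 0.10   # assumed On-Demand $/hour for one game server instance
RESERVED_RATE = 0.065   # assumed 1-year, no-upfront Reserved $/hour
SPOT_RATE = 0.03        # assumed average Spot $/hour

def monthly_cost(rate_per_hour, instance_count, utilization=1.0):
    """Rough monthly cost for a fleet running at the given utilization."""
    return rate_per_hour * HOURS_PER_MONTH * utilization * instance_count

FLEET_SIZE = 4
print(f"On-Demand: ${monthly_cost(ON_DEMAND_RATE, FLEET_SIZE):,.2f}")
print(f"Reserved : ${monthly_cost(RESERVED_RATE, FLEET_SIZE):,.2f}")
# Spot suits interruption-tolerant work (e.g., batch analytics), shown at 80% utilization.
print(f"Spot     : ${monthly_cost(SPOT_RATE, FLEET_SIZE, utilization=0.8):,.2f}")
```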
|
General
|
consultant
|
Best Practices
|
ITIL_Asset_and_Configuration_Management_in_the_Cloud
|
ITIL Asset and Configuration Management in the Cloud
January 2017
This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers
© 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved.
Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
Contents
Introduction
What Is ITIL?
AWS Cloud Adoption Framework
Asset and Configuration Management in the Cloud
Asset and Configuration Management and AWS CAF
Impact on Financial Management
Creating a Configuration Management Database
Managing the Configuration Lifecycle in the Cloud
Conclusion
Contributors
Abstract
Cloud initiatives require more than just the right technology. They also must be supported by organizational changes, such as people and process changes. This paper is intended for IT service management (ITSM) professionals who are supporting a hybrid cloud environment that leverages AWS. It outlines best practices for asset and configuration management, a key area in the IT Infrastructure Library (ITIL), on the AWS Cloud platform.
Introduction
Leveraging the experiences of enterprise customers who have successfully integrated their cloud strategy with their ITIL-based service management practices, this paper will cover:
• Asset and Configuration Management in ITIL
• AWS Cloud Adoption Framework (AWS CAF)
• Cloud-specific Asset and Configuration Management best practices, like creating a configuration management database
What Is ITIL?
The framework managed by AXELOS Limited defines a commonly used best practice approach to IT service management (ITSM) Although it builds on ISO/IEC 20000 which provides a “formal and universal standard for organizations seeking to have their ITSM capabilities audited and certified ”1 ITIL goes one step further to propose operational processes required to deliver the standard ITIL is composed of five volumes that describe the ITSM lifecycle as defined by AXELOS: Service Strategy Understands organizational objectives and customer needs Service Design Turns the service strategy into a plan for delivering the business objectives Service Transition Develops and improves capabilities for introducing new services into supported environments Service Operation Manages services in supported environments Continual Service Improvement Achieves incremental and large scale improvements to services Each volume addresses the capabilities that enterprises must have in place Asset and Configuration Management is one of the chapters in the Service Transition volume For more information see the Axelos website 2 ArchivedAmazon Web Services – ITIL Asset and Configuration Management in the Cloud Page 2 AWS Cloud Adoption Framework AWS CAF is used to help enterprises modernize ITSM practices so that they can take advantage of the agility security and cost benefits afforded by public or hybrid clouds ITIL and AWS CAF are compatible Like ITIL AWS CAF organizes and describes all of the activities and processes involved in planning creating managing and supporting modern IT services It offers practical guidance and comprehensive guidelines for establishing developing and running cloud based IT capabilities AWS CAF is built on seven perspectives: People Selecting and training IT personnel with appropriate skills defining and empowering delivery teams with accountabilities and service level agreements Process Managing programs and projects to be on time on target and within budget while keepi ng risks at acceptable levels Security Applying a comprehensive and rigorous method for describing the structure and behavior for an organization’s security processes systems and personnel Business Identifying analyzing and measuring the effectiveness of IT investments Maturity Analyzing defining and anticipating demand for and acceptance of plan ned IT capabilities and services Platform Defining and describing core architectural principles standards and patterns that are required for optimal IT capabilities and services Operations Transitioning operating and optimizing the hybrid IT environment enabling efficient and automated IT service management AWS CAF is an important supplement to enterprise ITSM frameworks used today because it provides enterprises with practical operational advice for implementing and operating ITSM in a cloudbased IT infrastructure For more information see AWS Cloud Adoption Framework 3 ArchivedAmazon Web Services – ITIL Asset and Configuration Management in the Cloud Page 3 Asset and Configuration Management in the Cloud In practice asset and configuration management aligns very closely to other ITIL processes such as incident management change management problem management or servicelevel management ITIL defines an asset as “any resource or capability that could contribute to the delivery of a service” Examples of assets include: virtual or physical storage virtual or physical servers a software license undocumented information known to internal team members ITIL defines configuration items as “an 
asset that needs to be managed in order to deliver an IT service” All configuration items are assets but many assets are not configuration items Examples of configuration items include a virtual or physical server or a software license Every configuration item should be under the control of change management The goals of asset and configuration management are to: Support ITIL processes by providing accurate configuration information to assist decision making (for example the authorization of changes the planning of releases) and to help resolve incidents and problems faster Minimize the number of quality and compliance issues caused by incorrect or inaccurate configuration of services and assets Define and control the components of services and infrastructure and maintain accurate configuration information on the historical planned and current state of the services and infrastructure The value to business is: ArchivedAmazon Web Services – ITIL Asset and Configuration Management in the Cloud Page 4 Optimization of the performance of assets improves the performance of the service overall For example i t mitigates risks caused by service outages and failed licensing audits Asset and configuration management provides an accurate representation of a service release or environment which enables: o Better planning of changes and releases o Improved incident and problem resolution o Meeting service levels and warranties o Better adherence to standards and legal and regulatory obligations (fewer nonconformances) o Traceable changes o The ability to identify the costs for a service The following diagram from AXELOS shows there are elements in asset and configuration management that directly relate to elements in change management Asset and configuration management underpins change management Without it the business is subject to increased risk and uncertainty Figure 1: Asset and configuration management in ITIL ArchivedAmazon Web Services – ITIL Asset and Configuration Management in the Cloud Page 5 Asset and Configuration Management and AWS CAF As with most specifications covered in the Service Transition volume of ITIL asset and configuration management falls into the Cloud Service Management function of the AWS CAF Operations perspective People and process changes should be supported by a cloud governance forum or Center of Excellence whose role is to use AWS CAF to manage through the transition From the perspective of ITSM your operations should certainly have a seat at the table As shown in Figure 2 AWS CAF accounts for the management of assets and configuration items in a hybrid environment Information can come from the onpremises environment or any number of cloud providers (private or public) Figure 2: AWS CAF integration Impact on Financial Management One of the most important aspects of asset management is to ensure data is available for these financial management processes: Capitalization and depreciation Software license management ArchivedAmazon Web Services – ITIL Asset and Configuration Management in the Cloud Page 6 Compliance requirements These activities typically require comprehensive asset lifecycle management processes which take significant cost and effort One of the benefits of moving IT to the cloud is that the financial nature of the transaction moves from a capital expenditure (CAPEX ) to an operating expenditure (OPEX ) You can do away with the large capital outlays (for example a server refresh) that require months of planning as well as amortization and depreciation 
Creating a Configuration Management Database A configuration management database (CMDB) i s used by IT to track and manage its resources The CMDB presents a logical model of the enterprise infrastructure to give IT more control over the environment and facilitate decisionmaking At a minimum a CMDB contains the following: Configuration item (CI) records with all associated attributes captured A relationship model between different CIs A history of all service impacts in the form of incidents changes and problems In a traditional IT setup the goals of establishing a CMDB are met through the process of: Discovery tools used to create a record of existing CIs Comprehensive change management processes to keep track of creation and updates to CIs Integration of incident and problem management data with impacted CIs with ITSM workflow tools like BMC HewlettPackard or ServiceNow These processes and tools in turn help organizations better understand the IT environment by providing insight into not only the impact of incidents problems and changes but also financial resources service availability and capacity managemen t There are some challenges to creating a CMDB for cloud resources due to: ArchivedAmazon Web Services – ITIL Asset and Configuration Management in the Cloud Page 7 The inherent dynamic nature of cloud resource provisioning where resources can be created or terminated through predefined business policies or application architecture elements like auto scaling The difficulty of capturing cloud resources data in a format that can be imported and maintain ed in a single system of record for all enterprise CIs A prevalence of shadow IT organizations that makes information sharing and even manual consolidation of enterprise IT assets and CIs difficult Configuration Management Inventory for Cloud Resources There are two logical approaches AWS customers can take to create a CMDB for cloud resources: Figure 3: Options for Enterprise CMDB Systems AWS Config helps customers manage their CIs i n the cloud AWS Config provides a detailed view of the configuration of AWS resources in an AWS account With AWS Config customers can do the following: Get a snapshot of all the supported resources associated with an AWS account at any point in time Retrieve the configurations of the resources Retrieve historical configurations of the resources ArchivedAmazon Web Services – ITIL Asset and Configuration Management in the Cloud Page 8 Receive a notification whenever a resource is created modified or deleted View relationships between resources This information is important to any IT organization for CI discovery and recording change tracking audit and compliance and security incident analysis Customers can access this information from the AWS Config console or programmatically extract it into their CMDBs As an example of the potential for integration with legacy systems ServiceNow the platform asaservice (PaaS) provider of enterprise service management software is now integrated with AWS Config This means ServiceNow users can leverage Option 1 shown in Figure 3 Managing the Configuration Lifecycle in the Cloud One of the goals of service asset and configuration management is to manage the CI lifecycle and track and record all changes One of the key aspects of the cloud is a much tighter integration of the software and infrastructure configuration lifecycles This section covers aspects of configuration lifecycle management across instance stacks and applications: Instance creation templates : Every IT 
organization has security and compliance standards for instances introduced into its IT environments Amazon Machine Images (AMIs) are a robust way of standardizing instance creation Users can opt for AWS or thirdparty provided predefined AMIs or define custom AMIs If you create AMI templates for instance provisioning you can define instance configuration and environmental addins in a predefined and programmatic manner A typical custom AMI might prescribe the base OS version and associated security monitoring and configuration management agents Instance lifecycle management : For every instance or resource created in an IT environment there are multiple lifecycle management activities that must be performed Some of the standard tasks are patch management hardening policies version upgrades environment variable changes and so on These activities can be performed manually but most IT organizations use robust configuration management tools like Chef Puppet and System Center Configuration Manager to perform ArchivedAmazon Web Services – ITIL Asset and Configuration Management in the Cloud Page 9 these tasks AWS allows easy integration with these tools to ensure a consistent enterprise configuration management approach Environment provisioning templates : AWS CloudFormation is useful for provisioning end toend environments (also referred to as stacks ) in a consistent and repeatable fashion without actually provisioning each component individually You don’t need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work AWS CloudFormation takes care of this for you You can use a template to create identical copies of the same stack without effort or errors Templates are simple JSONformatted text files that can be held securely leveraging your current source control mechanisms Application configuration and lifecycle management : In today’s world of agile development development teams leverage continuous integration and continuous delivery best practices AWS provides seamless integration with tools like Jenkins (CI) and Github for code management and deployment Services like AWS CodePipeline AWS CodeDeploy and AWS CodeCommit can be used to manage the application lifecycle Conclusion Service asset and configuration management processes consist of critical activities for the provisioning and maintenance of the health of IT systems Consistent management of configuration items through their lifecycle leads to efficient and effective system health and performance AWS enables best practices across every level of resource in an application stack With the tools automations and integration available on the AWS platform IT organizations can achieve significant productivity gains Successful implementation an d execution of service asset and configuration management processes should be seen as a shared responsibility that can be achieved through the right commitment by IT organizations enabled by the AWS platform ArchivedAmazon Web Services – ITIL Asset and Configuration Management in the Cloud Page 10 Contributors The following individuals contributed to this document: Darren Thayre Transformation Consultant AWS Professional Services Anindo Sengupta Chief Operating Officer Minjar Cloud Solutions 1 ITIL Service Operation Publication AXELOS 2007 page 5 2 https://wwwaxeloscom/bestpracticesolutions/itil/what isitil 3 http://awsamazoncom/professionalservices/CAF/ Notes
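As a supplement to the CMDB discussion above, which notes that customers can programmatically extract AWS Config data into their own configuration management database, the following is a minimal boto3 sketch of that pattern. The resource type, the single-page listing, and the final print statement (standing in for a call to a CMDB import interface) are illustrative assumptions.

```python
# Sketch: pull the latest configuration items for EC2 instances from AWS Config
# so they can be loaded into an external CMDB. Assumes AWS Config is already
# recording resources in this account and Region and that credentials are set up.
import boto3

config = boto3.client("config")

def latest_instance_config_items():
    """Yield the most recent configuration item for each discovered EC2 instance."""
    # Pagination is omitted for brevity; large accounts should follow nextToken.
    resources = config.list_discovered_resources(resourceType="AWS::EC2::Instance")
    for resource in resources["resourceIdentifiers"]:
        history = config.get_resource_config_history(
            resourceType="AWS::EC2::Instance",
            resourceId=resource["resourceId"],
            limit=1,  # most recent configuration item only
        )
        if history["configurationItems"]:
            yield history["configurationItems"][0]

for item in latest_instance_config_items():
    # Replace this print with a call to your CMDB's import interface (hypothetical).
    print(item["resourceId"], item["configurationItemStatus"],
          item["configurationItemCaptureTime"])
```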
|
General
|
consultant
|
Best Practices
|
ITIL_Event_Management_in_the_Cloud_An_AWS_Cloud_Adoption_Framework_Addendum
|
ITIL Event Management in the Cloud: An AWS Cloud Adoption Framework Addendum
January 2017
This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers
© 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved.
Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
Contents
Introduction
What is ITIL?
What is the AWS Cloud Adoption Framework?
Event Management in ITIL
Event Management and the CAF
Cloud-Specific Event Management Best Practices for IT Service Managers
Cloud Event Monitoring, Detection, and Communication Using Amazon CloudWatch
Conclusion
Contributors
Abstract
Many enterprises have successfully migrated some of their on-premises IT workloads to the cloud. An enterprise must also deploy an IT Service Management (ITSM) framework so it can efficiently and effectively operate those IT capabilities. This whitepaper outlines best practices for event management in a hybrid cloud environment using Amazon Web Services (AWS).
Introduction
This whitepaper is for IT Service Management (ITSM) professionals who support a hybrid cloud environment that uses AWS. The focus is on Event Management, a core chapter of the Service Operation volume of the IT Infrastructure Library (ITIL). Many AWS enterprise customers have successfully integrated their cloud strategy with their ITIL-based IT service management practices. This whitepaper provides you with background in the following areas:
• Event Management in ITIL
• The AWS Cloud Adoption Framework
• Cloud-Specific Event Management Best Practices
What is ITIL?
The IT Infrastructure Library (ITIL) Framework managed by AXELOS Limited defines a commonly used bestpractice approach to IT Service Management (ITSM) It builds on ISO/IEC 20000 which provides a “formal and universal standard for organizations seeking to have their ITSM capabilities audited and certified”1 However the ITIL Framework goes one step further to propose operational processes required to deliver the standard ITIL is composed of five volumes that describe the entire ITSM lifecycle as defined by the AXELOS To explore these volumes in detail go to https://wwwaxeloscom/ The following table gives you a brief synopsis of each of the five volumes: ITIL Volume Description Service Strategy Describes how to design develop and implement service management as a strategic asset Service Design Describes how to design and develop services and service management processes Service Transition Describes the development and improvement of capabilities for transitioning new and changed services into operations Service Operation Embodies practices in the management of service operation Continual Service Improvement Guidance in creating and maintaining value for customers ArchivedAmazon Web Services – ITIL Event Management in the Cloud Page 2 What is the AWS Cloud Adoption Framework? The Cloud Adoption Framework (CAF) offers comprehensive guidelines for establishing developing and running cloudbased IT capabilities AWS uses the CAF to help enterprises modernize their ITSM practices so that they can take advantage of the agility security and cost benefits afforded by the cloud Like ITIL the CAF organizes and describes the activities and processes involved in planning creating managing and supporting a modern IT service ITIL and the CAF are compatible In fact the CAF provides enterprises with practical operational advice for how to implement and operate ITSM in a cloudbased IT infrastructure The details of the AWS CAF are beyond the scope of this whitepaper but if you want to learn more you can read the CAF whitepaper at http://d0awsstaticcom/whitepapers/aws_cloud_adoption_frameworkpdf The CAF examines IT management in the cloud from seven core perspectives as shown in the following table: CAF Perspective Description People Selecting and training IT personnel with appropriate skills defining and empowering delivery teams with accountabilities and service level agreements Process Managing programs and projects to be on time on target and within budget while keeping risks at accepta ble levels Security Applying a comprehensive and rigorous method of describing a structure and behavior for an organization’s security processes systems and personnel Strategy & Value Identifying analyzing and measuring the effectiveness of IT investm ents that generate the most optimal business value Maturity Analyzing defining and anticipating demand for and acceptance of envisioned IT capabilities and services Platform Defining and describing core architectural principles standards and patterns that are required for optimal IT capabilities and services Operation Transitioning operating and optimizing the hybrid IT environment enabling efficient and automated IT service management ArchivedAmazon Web Services – ITIL Event Management in the Cloud Page 3 Event Management in ITIL The ITIL specification defines an event as “any detectable or discernable occurrence that has significance for the management of the IT infrastructure or the delivery of IT service” In other words an event is something that happens to an IT system that 
has business impact An occurrence can be anything that has material impact on the business such as environmental conditions security intrusions warnings errors triggers or even normal functioning Occurrences are things that an enterprise needs to monitor preferably in an automated fashion giving you the visibility you need to run your systems more efficiently and effectively over time with minimal downtime The goal of Event Management is to detect events prioritize and categorize them and figure out what to do about them In practice Event Management is used with a central monitoring tool which registers events from services or other tools such as configuration tools availability and capacity management tools or specialized monitoring tools Event Management acts as an umbrella function that sits on top of other ITIL processes such as Incident Management Change Management Problem Management or ServiceLevel Management and divides the work depending on the type of event or its severity AXEL OS provides the following flow chart to describe what an enterprise’s Event Management process should look like: ArchivedAmazon Web Services – ITIL Event Management in the Cloud Page 4 Figure 1: Event management in ITIL AXELOS observes that not all events are or need to be detected or registered Defining the events to be managed is an explicit and important management decision After management decides which events are relevant service components must be able to publish the events or the events must be pollable by a monitoring tool Events must also be actionable The Event Management process whether automated or manual must be able to determine what to do for any event This determination can take many forms such as ignoring logging or escalating the event Finally the Event Management process must be able to review and eventually close events ArchivedAmazon Web Services – ITIL Event Management in the Cloud Page 5 Event Management and the CAF As with most specifications covered in the Service Operation Volume of ITIL Event Management falls nicely into the Cloud Service Management function of the AWS CAF Operating Domain Of course cloud initiatives require more than just the right technology They also must be supported by organizational changes including people and process changes Such changes should be supported by a Cloud Governance Forum or Center of Excellence that has the role of managing through transition using the CAF From the perspective of ITSM your operations should certainly have a seat at the table Figure 2 illustrates how the CAF looks at managing events and actions in a hybrid environment Review and action is based on information comes from the on premises environment or any number of cloud providers (private or public) Figure 2: CAF integration CloudSpecific Event Management Best Practices for IT Service Managers AWS provides the building blocks for your enterprise to create your own Event Management Infrastructure These building blocks allow for the integration of cloud services with onpremises or more traditional environments In particular ArchivedAmazon Web Services – ITIL Event Management in the Cloud Page 6 AWS provides full support for ITIL Section 4110: Designing for Event Management AWS does not provide Event Management as a Service Enterprises that enable Event Management would need to deploy and manage their own Event Management infrastructure Cloud Event Monitoring Detection and Communication Using Amazon CloudWatch AWS supports instrumentation by providing tools to publish and poll 
events In particular you can use the Amazon CloudWatch API for automated management and integration into your Event Management infrastructure Amazon CloudWatch monitors your AWS resources and the applications that you run on AWS in realtime2 You can use Amazon CloudWatch to collect and track metrics which are the variables you want to measure for your resources and applications In addition Amazon CloudWatch alarms (or monitoring scripts) can send notifications or automatically make changes to the resources that you are monitoring based on rules that you define For information on CloudWatch pricing go to the Amazon CloudWatch pricing page 3 You can use CloudWatch to monitor the CPU usage and disk reads and writes of your Amazon Elastic Compute Cloud (Amazon EC2) instances Then you can use this data to determine whether you should launch additional instances to handle increased load You can also use this data to stop underused instances and save money In addition to monitoring the builtin metrics that come with AWS you can monitor your own custom metrics You can publish and monitor metrics that you derive from your applications to reflect your business needs With Amazon CloudWatch you gain systemwide visibility into resource utilization application performance and operational health4 Amazon EC2 Monitoring Detail Read more about Amazon EC2 monitoring in the AWS documentation: http://docsawsamazonco m/AWSEC2/latest/UserGui de/monitoring_ec2html ArchivedAmazon Web Services – ITIL Event Management in the Cloud Page 7 By default metrics and calculated statistics are presented graphically in the Amazon CloudWatch console You can also retrieve these metrics using the API or command line tools When you use Auto Scaling you can configure alarm actions to stop start or terminate an Amazon EC2 instance when certain criteria are met In addition you can create alarms that initiate Auto Scaling and Amazon Simple Notification Service (Amazon SNS) actions on your behalf5 An enterprise that does not have its own event management infrastructure can implement basic ITIL Event Management using Amazon CloudWatch However most large enterprises especially those running hybrid cloud designs will maintain their own event management infrastructure using products such as BMC Remedy Microsoft System Center or HP Open View Many event management tools are integrated with Amazon Web Services See the following table for some examples Tool Reference MS System Center http://awsamazoncom/windows/system center/ BMC Remedy http://mediacmsbmccom/documents/439126_BMC_Managing_AWS_SWP pdf IBM Tivoli https://awsamazoncom /marketplace/pp/B007P7MEK0 CA APM https://awsamazoncom/marketplace/pp/B00GGX0N0W/ref=portal_asin_url Tool Reference CA Nimsoft http://wwwcacom/~/media/Files/DataSheets/ca nimsoft monitor for amazon webservicespdf HP Sitescope http://h304 99www3hpcom/t5/Business Service Management BAC/HP SiteScope integration withAmazon CloudWatch AutoScaling AWS/ba p/2408860#VCzWTPmSzTY This type of design is fully compatible with AWS However enterprises will need to deploy SNMP AWS SNS or other interfaces that sit between Amazon CloudWatch and their enterprise Event Management / Service Desk tool This ArchivedAmazon Web Services – ITIL Event Management in the Cloud Page 8 will ensure that AWSgenerated events can pass through Amazon CloudWatch and into the enterprise Event Manager IT service management professionals who integrate Amazon CloudWatch into their enterprise event management infrastructure need to answer the following 
questions: Are the right events are bei ng propagated? Are the events tracked at the right level of granularity? Is there a mechanism to review and update triggers limits and event handling rules? Best Practices for Monitoring in AWS Make monitoring a priority to head off small problems before they become big ones Automate monitoring tasks as much as possible Check the log files on your services (Amazon EC2 Amazon S3 Amazon RDS etc) Create and implement a monitoring plan that collects data from all parts of your AWS solution so that you can more easily debug a multipoint failure if one occurs Your monitoring plan should address at a minimum the following questions: What are your monitoring goals? What resources will you monitor? How often will you will monitor these resources? What monitoring tools will you use? Who will perform the monitoring tasks? Who should receive notification when something goes wrong? ArchivedAmazon Web Services – ITIL Event Management in the Cloud Page 9 Incident Management Events classified as Warnings or Exceptions may trigger incident management processes These processes restore normal service operation as quickly as possible and minimize any adverse impact on business operations In the ITIL process first attempt to resolve warnings or exceptions by consulting a database of known errors or a configuration management database (CMDB) If the warning or exception is not in the database then classify the incident and transfer it to Incident Management Incident Management typically consists of first line support specialists who can resolve most of the common incidents When they cannot resolve an incident they escalate it to the second line support team and the process continues until the incident is resolved Incident Management tries to find a quick resolution to the Incident so that the service degradation or downtime i s minimized”1 Figure 3: Incident management in ITIL It is worth noting that a welldesigned cloud infrastructure can be far more resilient to faults There is less likelihood of generating production incidents where faults are able to gracefully fail over Underlying problems can be resolved through Problem Management ArchivedAmazon Web Services – ITIL Event Management in the Cloud Page 10 Incident Management Best Practices As part of cloudintegrated Incident Management enterprises should define several parameters: Ensure that relevant employees and staff understand which services are AWSoperated versus enterpriseoperated (for example an Amazon EC2 instance versus a business application running on that instance) Ensure that relevant staff and processes are aware of the SLAs associated with AWSoperated services and integrate those SLAs into the existing Enterprise Incident Management infrastructure Define explicit SLAs (including resolution time scales) for services operated by the enterprise but running on the AWS infrastructure Define Incident Severity levels and Priorities for all services running on the AWS infrastructure Subscribe to Enterprise Support and agree on the role the Amazon Technical Account Manager (TAM) will have during Incident Responses For example for Severity 1 incidents should the TAM be part of the emergency resolution bridge / emergency response team? 
Ensure 360 degree ticket integration Make sure that ticket opening and closing is seamless across onpremises and cloud systems Define recovery runbook recipes (Incident Model) that include the recovery steps in chronological order individual responsibilities escalation rules timescales and SLA thresholds media/communications roles and post mortems You should note that in a cloud environment where infrastructure is defined as code termination and reboot might be a faster way to recover from an incident than by using standard debugging approaches Service can be immediately restored and root problems can be addressed offline as part of Problem Management Where possible incident remediation should occur automatically with no human intervention However where human intervention is required that intervention should be simple with mostly automated runbook steps Problem Management Problem Management is the process of managing the lifecycle of all problems with the goal of preventing repeat incidents Whereas the goal of Incident Management is to recover Problem Management is about resolving root causes ArchivedAmazon Web Services – ITIL Event Management in the Cloud Page 11 so that incidents do not recur and maintaining information about problems and related solutions so organizations can reduce the impact of incidents Enterprises operating a hybrid environment will likely have their own Problem Management infrastructure The goal of integration should be to seamlessly integrate the process for addressing problems related to AWS into the existing Problem Management infrastructure Enterprises have the option of purchasing AWS Enterprise Support where they can agree on role the Amazon Technical Account Manager (TAM) will have during Problem Management For example where the problem explicitly involves part of the AWS infrastructure the TAM might be involved with formal problem detection prioritization and diagnosis workshops and discussions or be required to log AWSrelated problems with the enterprise Problem Logging platform / Known Error Database If AWS infrastructure is not part of the root cause it could play a role in supporting diagnosis Here the TAM can support the information gathering Conclusion Enterprises that migrate to the cloud can feel confident that their existing investments in ITIL and particularly Event Management can be leveraged going forward The Cloud Operating model is consistent with traditional IT Service Management discipline This whitepaper gives you a proposed suite of best practices to help smooth the transition and ensure continuing compliance Contributors The following individual contributed to this document: Eric Tachibana AWS Professional Services 1 ITIL Service Operation Publication Office of Government Commerce 2007 Page 5 2 For up to 2 weeks! Notes ArchivedAmazon Web Services – ITIL Event Management in the Cloud Page 12 3 http://awsamazoncom/cloudwatch/pricing/ 4 What Is Amazon CloudWatch? (http://docsawsamazoncom/AmazonCloudWatch/latest/DeveloperGuide/W hatIsCloudWatchhtml ) 5 For more information about creating CloudWatch alarms see Creating Amazon CloudWatch Alarms in the CloudWatch documentation (http://docsawsamazoncom/AmazonCloudWatch/latest/DeveloperGuide/ AlarmThatSendsEmailhtml )
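To complement the Amazon CloudWatch discussion above, here is a minimal boto3 sketch of publishing a warning event into an event management pipeline by alarming on EC2 CPU utilization and notifying an Amazon SNS topic. The instance ID, topic ARN, and thresholds are placeholders; an enterprise event manager or service desk tool would typically subscribe to the topic in order to receive the event.

```python
# Sketch: raise a warning event when average EC2 CPU utilization stays above 80%
# for two consecutive 5-minute periods, and publish it to an SNS topic that the
# enterprise event management tool subscribes to. IDs and ARNs are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="example-ec2-high-cpu",
    AlarmDescription="Warning event forwarded to the event management pipeline",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:example-event-topic"],
)
```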
|
General
|
consultant
|
Best Practices
|
Lambda_Architecture_for_Batch_and_RealTime_Processing_on_AWS_with_Spark_Streaming_and_Spark_SQL
|
Lambda Architecture for Batch and Stream Processing October 2018 This paper has been archived For the latest technical content about Lambda architecture see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers Archived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its c ustomers Archived Contents Introduction 1 Overview 2 Data Ingestion 3 Data Transformation 4 Data Analysis 5 Visualization 6 Security 6 Getting Started 7 Conclusion 7 Contributors 7 Further Reading 8 Document Revisions 8 Archived Abstract Lambda architecture is a data processing design pattern to handle massive quantities of data and integrate batch and real time processing within a single framework (Lambda architecture is distinct from and should not be confused with the AWS Lambda comput e service ) This paper covers the building blocks of a unified architectural pattern that unifies stream (real time) and batch proces sing After reading this paper you should have a good idea of how to set up and deploy the components of a typical Lambda architecture on AWS This white paper is intended for Amazon Web Services (AWS) Partner Network (APN) members IT infrastructure decision makers and administrators ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 1 Introduction When processing large amounts of semi structured data there is usually a delay between the point when data is collected and its availability in reports and dashboards Often the delay results from the need to validate or at least identify granular data I n some cases however being able to react immediately to new data is more important than being 100 percent certain of the data’s validity The AWS services frequently used to analyze large volumes of data are Amazon EMR and Amazon Athena For ingesting and processing s tream or real time data AWS services like Amazon Kinesis Data Streams Amazon Kinesis Data Firehose Amazon Kinesis Data Analytics Spark Streaming and Spark SQL on top of an Amazon EMR cluster are widely used Amazon Simple Storage Servic e (Amazon S3) forms the backbone of such architectures providing the persistent object storage layer for the AWS compute service Lambda a rchitecture is an approach that mixes both batch and stream (real time) data processing and makes the combined data available for downstream analysis or viewing via a serving layer It is divided into three layers: the batch layer serving layer and speed layer Figure 1 shows the b atch layer (batch processing) serving layer (merged serving layer) and speed layer (stream processing) In Figure 1 data is sent both to the batch layer and to the speed layer (stream processing) In the batch layer new data is appended to the master data set It 
consists of a set of records containing information that cannot be derived from the existing data It is an immutable append only dataset This process is analogous to extract transform and load (ETL) processing The results of the batch layer are called batch views and are stored in a persis tent storage layer The serving layer indexes the batch views produced by the batch layer It is a scalable Figure 1: Lambda Architecture ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 2 data store that swaps in new batch views as they become available Due to the latency of the batch layer the results from the serving layer are outofdate The speed layer compensates for the high latency of updates to the serving layer from the batch layer The speed layer processes data that has not been processed in the last batch of the batch layer This layer produces the real time views that are always up todate The speed layer is responsible for creating realtime views that are continuously discarded as data makes its way through the batch and serving layers Queries are resolved by merging the batch and real time views Recomputing data from scratch helps if the batch or real time views become corrupt ed This is because the main data set is append only and it is easy to restart and recover from the unstable state The end user can always query the latest version of the data which i s available from the speed layer Overview This section provides an overview of the various AWS services that form the building blocks for the batch serving and speed layers of lambda architecture Each of the layers in the Lambda architecture can be built using various analytics streaming and storage services available on the AWS platform Figure 2: Lambda Architecture Building Blocks on AWS The batch layer consists of the landing Amazon S3 bucket for storing all of the data ( eg clickstream server device logs and so on ) that is dispatched from one or more data sources The raw data in the landing bucket can be extracted and transformed into a batch view for analytics using AWS Glue a fully managed ETL service on the AWS platform Data analysis is performed u sing services like Amazon Athena an interactive query service or managed Hadoop framework using Amazon EMR Using Amazon QuickSight customer s can also perform visualization and onetime analysis ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 3 The speed layer can be built by using the following three options available with Amazon Kinesis : • Kinesis Data Stream s and Kinesis Client Library (KCL) – Data from the data source can be continuously captured and stream ed in near real time using Kinesis Data Stream s With the Kinesis Client Library ( KCL) you can build your own application that can preprocess the streaming data as they arrive and emit the data for generating incremental view s and downstream analysis • Kinesis Data Firehose – As data is ingested in real time customer s can use Kinesis Data Firehose to easily batch and compress the data to generate incremental views Kinesis Data Firehose also allows customer to execute their custom transformation logic using AWS Lambda before delivering the incremental view to Amazon S3 • Kinesis Data Analytics – This service provides the easiest way to process the data that is streaming through Kinesis Data Stream or Kinesis Data Firehose using SQL This enable s customer s to gain actionable insight in near real time from the incremental stream before storing 
it in Amazon S3 Finally the servin g layer can be implemented with Spark SQL on Amazon EMR to process the data in Amazon S3 bucket from the batch layer and Spark Streaming on an Amazon EMR cluster which consumes data directly from Amazon Kinesis streams to create a view of the entire dataset which can be aggregated merged or joined The merged data set can be written to Amazon S3 for further visualization Both of these components are part of the same code base which can be invoked as required thus reducing the overhead of maintaining multiple code bases The metadata ( eg table definition and schema) associated with the processed data is stored in the AWS Glue catalog to make the data in the batch view i mmediately available for queries by downstream analytics services in the batch layer Customer can use a Hadoop based stream processing application for analytics such as Spark Streaming on Amazon EMR Data Ingestion The data ingestion step comprises data ingestion by both the speed and batch layer usually in parallel For the batch layer historical data can be ingested at any desired interval For the speed layer the fastmoving data must be captured as it is produced and streamed for analysis The data is immutable time tagged or time ordered Some examples of high velocity data include log collection website clickstream logging social media stream and IoT device event data This fast da ta is captured and ingested as part of the speed layer using Amazon Kinesis Data Stream s which is the recommended service to ingest streaming data into AWS Kinesis offers key capabilities to cost effectively process and durably store streaming data at any scale Customers can use Amazon Kinesi s Agent a pre built application to collect and send data to ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 4 an Amazon Kinesis stream or use the Amazon Kinesis Producer Library (KP L) as part of a custom application For batch ingestions customers can use AWS Glue or AWS Database Migration Service to read from source systems such as RDBMS Data Warehouses and No SQL databases Data Transformation Data transformation is a key step in the Lambda architecture where the data is manipulated to suit downstream analysis The raw data ingested into the system in the previous step is usually not conducive to analytics as is The transformation step involves data cleansing that includes deduplication incomplete data management and attribute standardization It also involves changing the data structures if necessary usually into an OLAP model to facilitate easy querying of data Amazon Glue Amazon EMR and Amazon S3 form the set of services that allow users to transform their data Kinesis analytics enables users to get a view into their data stream in real time which makes downstream integration to batch data easy Let’s dive deeper into data transformation and look at the various steps involved: 1 The data ingested via the batch mechanism is put into an S3 staging location This data is a true copy of the source with little to no transformation 2 The AWS Glue Data Catalog is updated with the metadata of the new files The Glue Data Catalog can integrate with Amazon Athena Amazon EMR and forms a central metadata repository for the data 3 An AWS Glue job is used to transform the data and store it into a new S3 location for integration with realtime data AWS Glue provide s many canned transformations but if you need to write your own transformation logic AWS Glue also supports custom scripts 4 Users can 
easily query data on Amazon S3 using Amazon Athena This helps in making sure there are no unwanted data elements that get into the downstream bucket Getting a view of source data upfront allows development of more targeted metrics Designing analytical applications without a view of source data or getting a very late view into the source data could be risky Since Amazon Athena uses a schema onread approach instead of a schema onwrite it allows users to query data as is and eliminates the risk 5 Amazon Athena integrates with Amazon Quick Sight which allows users to build reports and dashboards on the data 6 For the real time ingestions the data transformation is applied on a window of data as it pass es through the steam and analyzed iteratively as it comes into the stream Amazon Kinesis Data Streams Kinesis Data Firehose and Kinesis Data Analytics allow you to ing est analyze and dump real time data into storage platforms like Amazon S3 for integration with batch data Kinesis Data Streams interfaces with Spark ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 5 streaming which is run on an Amazon EMR cluster for further manipulation Kinesis Data A nalytics allow s you to run analytical queries on the data stream in real time which allows you to get a view into the source data and make sure aligns with what is expected from the dataset By following the preceding steps you can create a scalable data transformatio n platform on AWS It is also important to note that Amazon Glue Amazon S3 Amazon Athena and Amazon Kinesis are serverless services By using these services in the transformation step of the Lambda architecture we can remove the overhead of maintaining servers and scaling them when the volume of data to transform increases Data Analysis In this phase you apply your query to analyze data in the three layers : • Batch Layer – The data source for batch analytics could be the raw master data set directly or the aggregated batch view from the serving layer The focus of this layer is to increase the accuracy of analysis by querying a comprehensive dataset across multiple or all dimensions and all available data sources • Speed Layer – The focus of the analysis in this layer is to analyze the incoming streaming data in near real time and to react immediately based on the analyzed result within accepted levels of accuracy • Serving Layer – In this layer the merged query is aimed at joining and analy zing the data from both the batch view from the batch layer and the incremental stream view from the speed layer This suggested architecture on the AWS platform includes Amazon Athena for the batch layer and Amazon Kinesis Data Analytics for the speed layer For the serving layer we recommend using Spark Streaming on an Amazon EMR cluster to consume the data fr om Amazon Kinesis Data S treams from the speed layer and using Spark SQL on an Amazon EMR cluster to consume data from Amazon S3 in the b atch layer Both of these components are part of the same code base which can be invoked as required thus reducing the overhead of maintaining multiple code bases The sample code that follows highlights using Spark SQL and Spark streaming to join data from both batch and speed layer s ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 6 Figure 2: Sample Code Visualization The final step in the Lambda architecture workflow is metrics visualization The visualization layer receives data from the batch stream and the combined 
serving layer The purpose of this layer is to provide a unified view of the analysis metrics that were derived from the data analysis step Batch Layer: The output of the analysis metrics in the batch layer is generated by Amazon Athena Amazon QuickSight integrates with Amazon Athena to generate dashboards that can be used for visualizations Customers also have a choice of using any other BI tool that supports JDBC/ODBC connectivity These tools can be connected to Amazon Athena to visualize batch layer metrics Stream Layer: Amazon Kinesis Data Analytics allows users to build custom analytical metrics that change based on real time streaming data Customers can use Kinesis Data A nalytics to build near realtime dashboards for metrics analyzed in the streaming layer Serving Layer: The combined dataset for batch and stream metrics are stored in the serving layer in an S3 bucket This unified view of the data is available for customers to download or connect to a reporting tool like Amazon QuickSight to create dashboards Security As part of the AWS Shared Responsibility M odel we recommend customers use the AWS security best practices and features to build a highly secure platform to run Lambda architecture on AWS Here are some points to keep in mind from a security perspective: • Encrypt end to end The architecture proposed here makes use of services that support encryption Make use of the native encryption features of the service whenever possible The server side encryption (SSE) is the least disruptive way to ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 7 encrypt your data on AWS and allows you to integrate encryption features into your existing workflows without a lot of code changes • Follow the rule of minimal access when working with policies Identity and access management (IAM) policies can be made very granular to allow customers to create restrictive resource level policies This concept is also exte nded to S3 bucket policies Moreover customers can use S3 object level tags to allow or deny actions at the object level Make use of these capabilities to ensure the resources in AWS are used securely • When working with AWS services make use of IAM role instead of embedding AWS credentials • Have an optimal networking architecture in place by carefully considering the security groups a ccess control lists (ACL) and routing tables that exist in the Amazon Virtual Private Cloud (Amazon VPC ) Resources that do not need access to the internet should not be in a public subnet Resources that require only outbound internet access should make use of the n etwork address translation (NAT) gateway to allow outbound traffic Communication to Amazon S3 from within th e Amazon VPC should make use of the VPC endpoint for Amazon S3 or a AWS private link Getting Started Refer to the AWS Big Data blog post Unite Real Time and Batch Analytics Using the Big Data Lambda Architecture Without Servers! 
which provides a walkthrough of how you can use AWS services to build an end-to-end Lambda architecture.
Conclusion
The Lambda architecture described in this paper provides the building blocks of a unified architectural pattern that unifies stream (real-time) and batch processing within a single code base. Through the use of Spark Streaming and Spark SQL APIs, you implement your business logic function once and then reuse the code in a batch ETL process as well as for real-time streaming processes. In this way, you can quickly implement a real-time layer to complement the batch processing one. In the long term, this architecture will reduce your maintenance overhead. It will also reduce the risk of errors resulting from duplicate code bases.
Contributors
The following individuals and organizations contributed to this document:
• Rajeev Srinivasan, Solutions Architect, Amazon Web Services
• Ujjwal Ratan, Solutions Architect, Amazon Web Services
Further Reading
For additional information, see the following:
• AWS Whitepapers
• Data Lakes and Analytics on AWS
Document Revisions
Date – Description
October 2018 – Update
May 2015 – First publication
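The original whitepaper's "Sample Code" figure is not reproduced in this text, but a minimal PySpark sketch of the serving-layer pattern described above (one code base that reads the batch view from Amazon S3 with Spark SQL, consumes the Kinesis stream with Spark Streaming, and joins the two) might look like the following. The stream name, Region, S3 paths, and record schema are illustrative assumptions, and the spark-streaming-kinesis-asl package must be available on the Amazon EMR cluster.

```python
# Sketch of a combined batch + speed (serving) layer on Amazon EMR.
# Assumes the spark-streaming-kinesis-asl package is on the classpath and that
# the S3 paths, Kinesis stream name, Region, and record format match your data.
from pyspark.sql import SparkSession
from pyspark.streaming import StreamingContext
from pyspark.streaming.kinesis import KinesisUtils, InitialPositionInStream

spark = SparkSession.builder.appName("lambda-serving-layer").getOrCreate()
ssc = StreamingContext(spark.sparkContext, batchDuration=60)

# Batch view: pre-aggregated data written to S3 by the batch layer.
batch_view = spark.read.parquet("s3://example-bucket/batch-view/")
batch_view.createOrReplaceTempView("batch_view")

# Speed layer: consume the Kinesis stream that feeds the real-time view.
stream = KinesisUtils.createStream(
    ssc, "lambda-serving-app", "example-stream",
    "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
    InitialPositionInStream.LATEST, checkpointInterval=60,
)

def merge_with_batch(rdd):
    """Join each micro-batch with the batch view and persist the merged result."""
    if rdd.isEmpty():
        return
    # Records are assumed to be CSV lines of the form "user_id,metric_value".
    realtime_df = rdd.map(lambda line: line.split(",")).toDF(["user_id", "metric_value"])
    merged = realtime_df.join(spark.table("batch_view"), on="user_id", how="left")
    merged.write.mode("append").parquet("s3://example-bucket/merged-view/")

stream.foreachRDD(merge_with_batch)
ssc.start()
ssc.awaitTermination()
```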
|
General
|
consultant
|
Best Practices
|
Lambda_Architecture_for_Batch_and_Stream_Processing
|
Lambda Architecture for Batch and Stream Processing October 2018 This paper has been archived For the latest technical content about Lambda architecture see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers Archived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its c ustomers Archived Contents Introduction 1 Overview 2 Data Ingestion 3 Data Transformation 4 Data Analysis 5 Visualization 6 Security 6 Getting Started 7 Conclusion 7 Contributors 7 Further Reading 8 Document Revisions 8 Archived Abstract Lambda architecture is a data processing design pattern to handle massive quantities of data and integrate batch and real time processing within a single framework (Lambda architecture is distinct from and should not be confused with the AWS Lambda comput e service ) This paper covers the building blocks of a unified architectural pattern that unifies stream (real time) and batch proces sing After reading this paper you should have a good idea of how to set up and deploy the components of a typical Lambda architecture on AWS This white paper is intended for Amazon Web Services (AWS) Partner Network (APN) members IT infrastructure decision makers and administrators ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 1 Introduction When processing large amounts of semi structured data there is usually a delay between the point when data is collected and its availability in reports and dashboards Often the delay results from the need to validate or at least identify granular data I n some cases however being able to react immediately to new data is more important than being 100 percent certain of the data’s validity The AWS services frequently used to analyze large volumes of data are Amazon EMR and Amazon Athena For ingesting and processing s tream or real time data AWS services like Amazon Kinesis Data Streams Amazon Kinesis Data Firehose Amazon Kinesis Data Analytics Spark Streaming and Spark SQL on top of an Amazon EMR cluster are widely used Amazon Simple Storage Servic e (Amazon S3) forms the backbone of such architectures providing the persistent object storage layer for the AWS compute service Lambda a rchitecture is an approach that mixes both batch and stream (real time) data processing and makes the combined data available for downstream analysis or viewing via a serving layer It is divided into three layers: the batch layer serving layer and speed layer Figure 1 shows the b atch layer (batch processing) serving layer (merged serving layer) and speed layer (stream processing) In Figure 1 data is sent both to the batch layer and to the speed layer (stream processing) In the batch layer new data is appended to the master data set It 
consists of a set of records containing information that cannot be derived from the existing data It is an immutable append only dataset This process is analogous to extract transform and load (ETL) processing The results of the batch layer are called batch views and are stored in a persis tent storage layer The serving layer indexes the batch views produced by the batch layer It is a scalable Figure 1: Lambda Architecture ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 2 data store that swaps in new batch views as they become available Due to the latency of the batch layer the results from the serving layer are outofdate The speed layer compensates for the high latency of updates to the serving layer from the batch layer The speed layer processes data that has not been processed in the last batch of the batch layer This layer produces the real time views that are always up todate The speed layer is responsible for creating realtime views that are continuously discarded as data makes its way through the batch and serving layers Queries are resolved by merging the batch and real time views Recomputing data from scratch helps if the batch or real time views become corrupt ed This is because the main data set is append only and it is easy to restart and recover from the unstable state The end user can always query the latest version of the data which i s available from the speed layer Overview This section provides an overview of the various AWS services that form the building blocks for the batch serving and speed layers of lambda architecture Each of the layers in the Lambda architecture can be built using various analytics streaming and storage services available on the AWS platform Figure 2: Lambda Architecture Building Blocks on AWS The batch layer consists of the landing Amazon S3 bucket for storing all of the data ( eg clickstream server device logs and so on ) that is dispatched from one or more data sources The raw data in the landing bucket can be extracted and transformed into a batch view for analytics using AWS Glue a fully managed ETL service on the AWS platform Data analysis is performed u sing services like Amazon Athena an interactive query service or managed Hadoop framework using Amazon EMR Using Amazon QuickSight customer s can also perform visualization and onetime analysis ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 3 The speed layer can be built by using the following three options available with Amazon Kinesis : • Kinesis Data Stream s and Kinesis Client Library (KCL) – Data from the data source can be continuously captured and stream ed in near real time using Kinesis Data Stream s With the Kinesis Client Library ( KCL) you can build your own application that can preprocess the streaming data as they arrive and emit the data for generating incremental view s and downstream analysis • Kinesis Data Firehose – As data is ingested in real time customer s can use Kinesis Data Firehose to easily batch and compress the data to generate incremental views Kinesis Data Firehose also allows customer to execute their custom transformation logic using AWS Lambda before delivering the incremental view to Amazon S3 • Kinesis Data Analytics – This service provides the easiest way to process the data that is streaming through Kinesis Data Stream or Kinesis Data Firehose using SQL This enable s customer s to gain actionable insight in near real time from the incremental stream before storing 
it in Amazon S3 Finally the servin g layer can be implemented with Spark SQL on Amazon EMR to process the data in Amazon S3 bucket from the batch layer and Spark Streaming on an Amazon EMR cluster which consumes data directly from Amazon Kinesis streams to create a view of the entire dataset which can be aggregated merged or joined The merged data set can be written to Amazon S3 for further visualization Both of these components are part of the same code base which can be invoked as required thus reducing the overhead of maintaining multiple code bases The metadata ( eg table definition and schema) associated with the processed data is stored in the AWS Glue catalog to make the data in the batch view i mmediately available for queries by downstream analytics services in the batch layer Customer can use a Hadoop based stream processing application for analytics such as Spark Streaming on Amazon EMR Data Ingestion The data ingestion step comprises data ingestion by both the speed and batch layer usually in parallel For the batch layer historical data can be ingested at any desired interval For the speed layer the fastmoving data must be captured as it is produced and streamed for analysis The data is immutable time tagged or time ordered Some examples of high velocity data include log collection website clickstream logging social media stream and IoT device event data This fast da ta is captured and ingested as part of the speed layer using Amazon Kinesis Data Stream s which is the recommended service to ingest streaming data into AWS Kinesis offers key capabilities to cost effectively process and durably store streaming data at any scale Customers can use Amazon Kinesi s Agent a pre built application to collect and send data to ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 4 an Amazon Kinesis stream or use the Amazon Kinesis Producer Library (KP L) as part of a custom application For batch ingestions customers can use AWS Glue or AWS Database Migration Service to read from source systems such as RDBMS Data Warehouses and No SQL databases Data Transformation Data transformation is a key step in the Lambda architecture where the data is manipulated to suit downstream analysis The raw data ingested into the system in the previous step is usually not conducive to analytics as is The transformation step involves data cleansing that includes deduplication incomplete data management and attribute standardization It also involves changing the data structures if necessary usually into an OLAP model to facilitate easy querying of data Amazon Glue Amazon EMR and Amazon S3 form the set of services that allow users to transform their data Kinesis analytics enables users to get a view into their data stream in real time which makes downstream integration to batch data easy Let’s dive deeper into data transformation and look at the various steps involved: 1 The data ingested via the batch mechanism is put into an S3 staging location This data is a true copy of the source with little to no transformation 2 The AWS Glue Data Catalog is updated with the metadata of the new files The Glue Data Catalog can integrate with Amazon Athena Amazon EMR and forms a central metadata repository for the data 3 An AWS Glue job is used to transform the data and store it into a new S3 location for integration with realtime data AWS Glue provide s many canned transformations but if you need to write your own transformation logic AWS Glue also supports custom scripts 4 Users can 
easily query data on Amazon S3 using Amazon Athena This helps in making sure there are no unwanted data elements that get into the downstream bucket Getting a view of source data upfront allows development of more targeted metrics Designing analytical applications without a view of source data or getting a very late view into the source data could be risky Since Amazon Athena uses a schema onread approach instead of a schema onwrite it allows users to query data as is and eliminates the risk 5 Amazon Athena integrates with Amazon Quick Sight which allows users to build reports and dashboards on the data 6 For the real time ingestions the data transformation is applied on a window of data as it pass es through the steam and analyzed iteratively as it comes into the stream Amazon Kinesis Data Streams Kinesis Data Firehose and Kinesis Data Analytics allow you to ing est analyze and dump real time data into storage platforms like Amazon S3 for integration with batch data Kinesis Data Streams interfaces with Spark ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 5 streaming which is run on an Amazon EMR cluster for further manipulation Kinesis Data A nalytics allow s you to run analytical queries on the data stream in real time which allows you to get a view into the source data and make sure aligns with what is expected from the dataset By following the preceding steps you can create a scalable data transformatio n platform on AWS It is also important to note that Amazon Glue Amazon S3 Amazon Athena and Amazon Kinesis are serverless services By using these services in the transformation step of the Lambda architecture we can remove the overhead of maintaining servers and scaling them when the volume of data to transform increases Data Analysis In this phase you apply your query to analyze data in the three layers : • Batch Layer – The data source for batch analytics could be the raw master data set directly or the aggregated batch view from the serving layer The focus of this layer is to increase the accuracy of analysis by querying a comprehensive dataset across multiple or all dimensions and all available data sources • Speed Layer – The focus of the analysis in this layer is to analyze the incoming streaming data in near real time and to react immediately based on the analyzed result within accepted levels of accuracy • Serving Layer – In this layer the merged query is aimed at joining and analy zing the data from both the batch view from the batch layer and the incremental stream view from the speed layer This suggested architecture on the AWS platform includes Amazon Athena for the batch layer and Amazon Kinesis Data Analytics for the speed layer For the serving layer we recommend using Spark Streaming on an Amazon EMR cluster to consume the data fr om Amazon Kinesis Data S treams from the speed layer and using Spark SQL on an Amazon EMR cluster to consume data from Amazon S3 in the b atch layer Both of these components are part of the same code base which can be invoked as required thus reducing the overhead of maintaining multiple code bases The sample code that follows highlights using Spark SQL and Spark streaming to join data from both batch and speed layer s ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 6 Figure 2: Sample Code Visualization The final step in the Lambda architecture workflow is metrics visualization The visualization layer receives data from the batch stream and the combined 
serving layer The purpose of this layer is to provide a unified view of the analysis metrics that were derived from the data analysis step Batch Layer: The output of the analysis metrics in the batch layer is generated by Amazon Athena Amazon QuickSight integrates with Amazon Athena to generate dashboards that can be used for visualizations Customers also have a choice of using any other BI tool that supports JDBC/ODBC connectivity These tools can be connected to Amazon Athena to visualize batch layer metrics Stream Layer: Amazon Kinesis Data Analytics allows users to build custom analytical metrics that change based on real time streaming data Customers can use Kinesis Data A nalytics to build near realtime dashboards for metrics analyzed in the streaming layer Serving Layer: The combined dataset for batch and stream metrics are stored in the serving layer in an S3 bucket This unified view of the data is available for customers to download or connect to a reporting tool like Amazon QuickSight to create dashboards Security As part of the AWS Shared Responsibility M odel we recommend customers use the AWS security best practices and features to build a highly secure platform to run Lambda architecture on AWS Here are some points to keep in mind from a security perspective: • Encrypt end to end The architecture proposed here makes use of services that support encryption Make use of the native encryption features of the service whenever possible The server side encryption (SSE) is the least disruptive way to ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 7 encrypt your data on AWS and allows you to integrate encryption features into your existing workflows without a lot of code changes • Follow the rule of minimal access when working with policies Identity and access management (IAM) policies can be made very granular to allow customers to create restrictive resource level policies This concept is also exte nded to S3 bucket policies Moreover customers can use S3 object level tags to allow or deny actions at the object level Make use of these capabilities to ensure the resources in AWS are used securely • When working with AWS services make use of IAM role instead of embedding AWS credentials • Have an optimal networking architecture in place by carefully considering the security groups a ccess control lists (ACL) and routing tables that exist in the Amazon Virtual Private Cloud (Amazon VPC ) Resources that do not need access to the internet should not be in a public subnet Resources that require only outbound internet access should make use of the n etwork address translation (NAT) gateway to allow outbound traffic Communication to Amazon S3 from within th e Amazon VPC should make use of the VPC endpoint for Amazon S3 or a AWS private link Getting Started Refer to the AWS Big Data blog post Unite Real Time and Batch Analytics Using the Big Data Lambda Architecture Without Servers! 
which provides a walkthrough of how you can use AWS services to build an end-to-end Lambda architecture. Conclusion The Lambda architecture described in this paper provides the building blocks of a unified architectural pattern that unifies stream (real-time) and batch processing within a single code base. Through the use of the Spark Streaming and Spark SQL APIs, you implement your business logic function once and then reuse the code in a batch ETL process as well as for real-time streaming processes. In this way, you can quickly implement a real-time layer to complement the batch processing one. In the long term, this architecture will reduce your maintenance overhead. It will also reduce the risk of errors resulting from duplicate code bases. Contributors The following individuals and organizations contributed to this document: • Rajeev Srinivasan, Solutions Architect, Amazon Web Services • Ujjwal Ratan, Solutions Architect, Amazon Web Services Further Reading For additional information, see the following: • AWS Whitepapers • Data Lakes and Analytics on AWS Document Revisions Date Description October 2018 Update May 2015 First publication
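The serving-layer pattern described earlier in this paper — Spark SQL over the batch view in Amazon S3 and Spark Streaming over the Amazon Kinesis stream, sharing a single code base — can be sketched as follows. This is a minimal illustration rather than the paper's sample code: it assumes Spark 2.x on Amazon EMR with the spark-streaming-kinesis-asl package available, and the bucket names, stream name, application name, and the url field are illustrative placeholders.

```python
# Minimal sketch: one business-logic function reused by the batch and speed layers.
# Assumes PySpark 2.x on EMR with the spark-streaming-kinesis-asl package; all
# resource names and the "url" field are illustrative.
from pyspark.sql import SparkSession, functions as F
from pyspark.streaming import StreamingContext
from pyspark.streaming.kinesis import KinesisUtils, InitialPositionInStream

spark = SparkSession.builder.appName("lambda-serving-layer").getOrCreate()

def page_view_counts(df):
    # Business logic defined once: aggregate page views per URL.
    return df.groupBy("url").agg(F.count("*").alias("views"))

# Batch view: Spark SQL over the curated data produced by the batch layer in S3.
batch_df = spark.read.parquet("s3://example-curated-bucket/clickstream/")
batch_view = page_view_counts(batch_df)

# Speed view: Spark Streaming consuming click events from a Kinesis data stream.
ssc = StreamingContext(spark.sparkContext, batchDuration=60)
records = KinesisUtils.createStream(
    ssc, "lambda-speed-layer", "example-clickstream",
    "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
    InitialPositionInStream.LATEST, 60)

def merge_and_publish(rdd):
    if rdd.isEmpty():
        return
    speed_df = spark.read.json(rdd)           # each record is a JSON click event
    speed_view = page_view_counts(speed_df)   # same function as the batch layer
    merged = (batch_view.union(speed_view)
              .groupBy("url").agg(F.sum("views").alias("views")))
    # Overwriting the serving view every micro-batch is a simplification for the sketch.
    merged.write.mode("overwrite").parquet("s3://example-serving-bucket/page_views/")

records.foreachRDD(merge_and_publish)
ssc.start()
ssc.awaitTermination()
```

Because page_view_counts is defined once and applied to both the batch and speed views, the business logic lives in a single code base, which is the maintenance benefit the conclusion calls out.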
|
General
|
consultant
|
Best Practices
|
Leveraging_Amazon_Chime_Voice_Connector_for_SIP_Trunking
|
Leveraging Amazon Chime Voice Connector for SIP Trunking April 2020 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 About Amazon Chime Voice Connector 1 Service Benefits 1 Low Cost and Reduc ed TCO 1 Flexible and On Demand 2 Use Case Scenarios 3 Outbound Calling Only 4 Inbound and Outbound Calling 5 Inbound and Outbound Calling Exclusively 6 Inbound Calling Only 7 Service Features 8 Reliability and Elasticity 8 AWS SDK 8 Security – Call Encryption 8 IP Whitelisting and Call Authentication 8 Call Detail Records (CDR) 8 Phone Number Inventory Management 9 Outbound Caller ID Name 9 Call Routing with Load Sharing 9 Failover and Load Sharing 10 Fax 11 Access 11 Real time Audio Streaming to Amazon Kinesis Video Streams 12 Monitoring Amazon Chime Voice Connectors 13 Conclusion 14 Contributors 14 Further Reading 14 Document Revisions 15 Appendix A: Call Detail Record (CDR) Specifications 16 Call Detail Record (CDR) 16 Streaming Detail Record (SDR) 18 Appendix B: SIP Signaling Specifications 21 Ports and Protocols 21 Supported SIP Methods 21 Unsupported SIP Methods 21 Required SIP Headers 21 SIP OPTIONS Requirements 22 SIPREC INVITE Requirements 22 Dialed Number Requirements 22 Caller ID Number Requirements 23 Caller ID Name 23 Digest Authentication 23 Call Encryption 23 Session Description Protocol (SDP) 24 Supported Codecs 24 DTMF 24 Appendix C: CloudWatch Metrics and Logs Examples 25 CloudWatch Metrics 25 CloudWatch Logs 25 Abstract This whitepaper outlines the features and benefits of using Amazon Chime Voice Connector Amazon Chime Voice Connector is a service that carries your voice traffic over the internet and elastically scales to meet your capacity needs This whitepaper assumes that you are already familiar with Session Initiation Protocol (SIP) trunkingAmazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 1 Introduction Amazon Chime Voice Connector is a payasyougo service that enables companies to make and receive secure inexpensive phone calls over the internet using their on premises telephone system such as a private branch exchange (PBX) The service has no upfront fees elastically scales based on demand and supports calling both landline and mobile phone numbers in over 100 countries Getting started with Amazon Chime Voice Connector is as easy as a few clicks on the AWS Management Console and then employees can place and receive calls on their desk phones in minutes About Amazon Chime Voice Connector Amazon Chime Voice Connector uses standards based Session Initiation Protocol (SIP) and c alls are delivered over the internet using Voice over Internet Protocol (VoIP) Amazon Chime Voice Connector does not require dedicated data circuits and can use a company’s existing internet connection or use AWS Direct Connect public 
virtual interface for the SIP connection to AWS The configuration of SIP trunks can be performed in minutes using the AWS Managemen t Console or the AWS SDK Amazon Chime Voice Connector offers costeffective rates for outbound calls In addition c alls to Amazon Chime audio conferences as well as calls to other companies using Amazon Chime Voice Connector are at no additional cost With this service companies can reduce their voice calling costs without having to replace their on premises phone system Service Benefits Amazon Chime Voice Conne ctor provides the following benefits Low Cost and Reduced TCO Amazon Chime Voice Connector provides an easy way to move telephony to the cloud without replacing on premises phone system s Using the service you can reduce your voice calling costs by up to 50% by eliminating fixed telephone network costs and simplifying your voice network administration To estimate the cost of using Amazon Chime Voice Connector see the Amazon Chime Pricing page Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 2 Amazon Chime Voice Connector allows you to use SIP trunking infrastructure on demand with voice encryption available at no extra charge The elastic scaling of the service eliminates the need to overprovision SIP and/or time division multiplexing (TDM) trunks fo r peak capacity You only pay for what you use and can track your telecom spending in your monthly AWS invoice There is no charge for c reating SIP trunks and no subscription or per user license fees or c oncurrent conversation fees The following table sho ws a cost comparison of Amazon Chime Voice Connector with other service offerings Table 1: Cost Comparison of Amazon Chime Voice Connector and Other SIP Offerings Monthly Cost Offering 1 Offering 2 Offering 3 Amazon Inbound call/minute $00000 $00000 $00045 $00022 Outbound call/minute $00080 $00120 $00070 $00049 Concurrent call charge per sub $08180 $1090 7 $0 $0 Number rental $010 $100 $100 $100 350 minutes/month $187 $280 $216 $140 Normalized Pricing/month $278 $489 $316 $240 Potential savings with Amazon Chime Voice Connector 1467% 6831% 2734% N/A Flexible and On Demand Your telecom administrator uses the AWS Management Console to create the Amazon Chime Voice Connector and your organization can begin sending and receiving voice calls in minutes You can route as much voice traffic to it as needed or desired within the AWS service quotas You can also choose to keep your inbound phone numbers also known as Direct Inward Dialing (DID) numbers with your current service provider or contact AWS Support to port the number s to Amazon Chime Voice Connector and take advantage of the Amazon Chime dial in rates Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 3 Use Case Scenarios You can use Amazon Chime Voice Connector to send voice traffic from your on premises PBX to AWS (outbound calls to public switched telephone network [PSTN ] numbers ) and to receive voice calls from your Voice Connector to your PBX ( inbound calls from DID numbers ) or both In both call flow scenarios (outbound and /or inbound calls ) you can connect to Amazon Chime Voice Connector using your existing telephony devices These device s can be a Session Border Controller (SBC) an IP PBX or a media gateway In the following examples an SBC is the network element that is used to connect the SIP trunks • Outbound Calling Only • Inbound and Outbound Calling • Inbound and Outbound Calling Exclusively • Inbound Calling Only Amazon Web 
Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 4 Outbound Calling Only In this deployment model you benefit from the lowcost outbound calling to PSTN phone number s Calls from your PBX to Amazon Chime Voice Connector incur no outbound telephony charges You can use Amazon Chime Voice Connector for outbound calling in conjunction with the existing connection to your current SIP trunking provider Your inbound calling remains unchanged In this use case Amazon Chime Voice Connector is typically configured as a route for high availability in case the default route to the Existing SIP Trunking Provider is unavailable as well as for least cost routing ( LCR) within the IP PBX or SBC Figure 1: Outbound Calling Onl y Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 5 Inbound and Outbound Calling In this deployment model you use Amazon Chime Voice Connector for both inbound and outbound voice calling in parallel with your current service provider For inbound calling you either acquire new phone numbers from AWS or port your existing phone numbers from your current service provider You can move some or all of the phone numbers from your current service provider to Amazon Chime Voice Connector For outbound calling you use Amazon Chime Voice Connector as a parallel route for your outbound voice calls from your PBX Figure 2: Inbound and Outbound Calling Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 6 Inbound and Outbound Calling Exclusively In this deployment model you use Amazon Chime Voice Connector for both inbound and outbound voice calling This eliminate s the need for your existing SIP trunks and reduces network complexity For inbound calling you acquire new phone numbers from AWS or port the existing phone numbers from your current service provider For outbound calling use Amazon Chime Voice Connector as the singl e route for all outbound voice calls from your PBX Amazon Chime Voice Connector has built in call failover service resilience and high availability features Figure 3: Inbound and Outbound Calling Exclusively Amazon Web Services Leveraging Am azon Chime Voice Connector for SIP Trunking Page 7 Inbound Calling Only In this deployment model you use Amazon Chime Voice Connector only for inbound voice calling For inbound calling only you acquire new phone numbers from AWS or port existing phone numbers from your current service provider For inbound calling only you benefit from the routing features provided by Amazon Chime Voice Connector such as load balancing failure mitigation mechanisms and easy phone number inventory management using the AWS Management Console or the AWS SDK For more information on these features see Call Routing with Load Sharing and Phone Number Inventory Management Figure 4: Inbound Calling Only Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 8 Service Features Reliability and Elasticity Amazon Chime Voice Connector delivers highly available and scalable telephone service for inbound calls to your onpremises telephone system outbound calls to Amazon Chime Voice Con nector or both Using Amazon Chime Voice Connector Groups you can configure multi region failover for inbound calls from PSTN c alls to your Amazon Chime Voice Connectors Additionally Amazon Chime Voice Connector provides a loadsharing mechanism for inbound calls t o your on premises phone system using priority and weight AWS SDK The AWS SDK allows you to perform and automate key 
administrative tasks such as managing phone numbers Amazon Chime Voice Connectors and Amazon Chime Voice Connector Groups Security – Call Encryption Call e ncryption is a configurable option for each Amazon Chime Voice Conne ctor and is provided at no additional charge If encryption is enabled voice calls are encrypted between the service and your SIP infrastructure Transport Layer Security ( TLS) is used to encrypt the SIP signaling and Secure Real Time Protocol (SRTP) is used to encrypt the media streams To learn about the SIP Signaling Specifications see Appendix B: SIP Signaling Specifications IP Whitelisting and Call Authentication You can authenticate v oice traffic to Amazon Chime Voice Connector by using the mandatory Allow List (IP whitelisting) and by using the optional Digest Authentication (as described in RFC 3261 section 22 ) Call Detail Records (CDR) Shortly after each call A mazon Chime Voice Connector stores the Call Detail Record (CDR) as an object in your own Amazon Simple Storage Service ( Amazon S3 ) bucket You configure the S3 bucket in the AWS Management Consol e You can retrieve t he CDR records from Amazon S3 and import them into a VoIP billing system To learn Amazon Web Services Leveraging Amazon Chime Voice Connector for S IP Trunking Page 9 about the CDR schema see Appendix A: Call Detail Record (CDR) Specifications For the current CDR format see the Amazon Chime Voice Connector documentation Phone Number Inventory Management You can manage p hone n umber s using the AWS Management C onsol e and the AWS SDK You can manage your existing phone numbe r inventory order new numbers review pending transactions and manage deleted phone number s Contact AWS Support to port existing phone numbers Outbound Caller ID Name Support for Outbound Caller ID Name (CNAM) is a component of caller ID that displays your name or company name on the Caller ID display of the party that you are calling Amazon Chime Voice Connector makes it easy to set calling names for Amazon Chime Voice Connector phone numbers using the AWS Management C onsole Amazon make s the necessary changes to the Line Information Database (LIDB) so that your configured name appear s on outbound phone calls There is no charge to use this feature You can set a defa ult calling name for all the phone numbers in the Amazon Chime account once every 7 days using the AWS Management Console or AWS SDK You can also set and update calling names for each phone number purchased or ported into Amazon Chime Voice Connector The update can take up to 72 hours to propagate during which time the previous setting is still active You can track the status of the calling name updates in the AWS Management C onsole or the AWS SDK When you place a call using Amazon Chime Voice Connect or the call is routed through the public switched telephone network (PSTN) to a fixed or mobile telephone carrier of the called party Note that not all landline and mobile telephone carriers support CNAM or use the same CNAM database as Amazon Chime Voice Connector which can result in the called party either not seeing CNAM or seeing a CNAM that is different from the value you set Call Routing with Load Sharing Amazon Chime Voice Connector provides you with flexibility to configure how inbound calls from PSTN are routed to multiple offices thus allowing you to improve the resiliency of your telephone network Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 10 Inbound Calls Inbound calls to your on premises phone system are 
routed using user defined priorities and weights to automatically route calls to multiple SIP hosts Calls are routed in priority order first with 1 being the highest priority If hosts are equal in priority calls are distributed among them based on their relative weight This approach is useful for both load balancing and f ailure mitigation If a particular host is unavailable Amazon Chime Voice Connector automatically re route s calls to the next SIP host based on priority and weight This approach allows administrators to send all or a percentage of the calls to one site a nd to reroute the calls to another site in a disaster recovery scenario Outbound Calls For outbound calls from your on premises phone system the hostname is a fully qualified domain name (FQDN) with dynamically assigned multiple IP addresses for load sharing Failover and Load Sharing You can use Amazon Chime Voice Connector groups for fault tolerant cross region routing for inbound calling to your on premises phone system By associating Amazon Chime Voice Connectors in different AW S Regions to a n Amazon Chime Voice Connector group you can create multiple independent routes for inbound calls to your onpremises phone system In the event of loss of connectivity between an AWS Region and your phone system or an Amazon Chime Voice Connector service unavailability in an AWS Region incoming calls route to the next Amazon Chime Voice Connectors in a n Amazon Chime Voice Connector group in priority order For more information see Working with Amazon Chime Voice Connector Groups Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 11 Figure 5: Voice Connector Groups Failover Fax Amazon Chime Voice Connector supports faxing using SIP with either T38 or G711 µ law The SI P messaging when using T38 should follow the format described in RFC 3362 In short much of the SIP messaging stays the same as a voice call One change is the “image/t38” MIME content type is added in the SDP to indicate a T38 media stream will be pres ent Modern PBX and SBC systems will recognize T38 and its messaging format Access Access to the Amazon Chime Voice Connector can be provided through the internet or by using AWS Direct Connect Internet Access You can connect to Amazon Chime Voice Connector using the internet The bandwidth between Amazon Chime Voice Connector and your SIP infrastructure must be sufficient Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 12 to handle the number of simultaneous calls For information about the bandwidth requiremen ts see Network Configuration and Bandwidth Requirements AWS Direct Connect Access You can connect using AWS Direct Connect public virtual interfaces which in many cases can reduce your network costs as it is more cost effective than Multiprotocol Label Switching (MPLS ) AWS Direct Connect can also increase bandwidth throughput and provide a more consistent network experience than internet based connections When you combine Amazon Chime Voice Connector with AWS Direct Connect your voice call sessions use a single provider Real time Audio Stream ing to Amazon Kinesis Video Streams Amazon Chime Voice Connector can stream audio from telephone calls to Amazon Kinesis Video Streams in real time and gain insights from your business’ conversa tions Amazon Kinesis Video Streams is an AWS service that makes it easy to accept durably store and encry pt realtime media and connect it to other services for analytics voice transcription machine learning (ML) playback 
and other processing You c an process audio streams with services like AWS Lambda Amazon Transcribe or Amazon Comprehend to build call recording transcription and analysis solutions For each audio call that is streamed to Kinesis Video Streams two separate Kinesis streams are created for the caller and call recipient media streams Each Kinesis stream within an audio call contains metadata such as the TransactionId and the VoiceConnectorId which can be used to easily filter the audio streams within the same phone call You can enable media streaming for all phone calls placed on the Amazon Chime Voice Connector using the Amazon Chime console or you can enable real time audio streaming on a per call basis using SIPREC INVITE For more inform ation on streaming audio to Kinesis see Streaming Amazon Chime Voice Connector Media to Kinesis Audio Streaming using SIPREC You can also send a SIPREC INVITE from your existing on premises telephone system (Session Border Controller or IP PBX) to Amazon Chime Voice Connector to initiate a realtime audio stream to Amazon Kinesis Video Streams You can use this feature to integrate your existing on premises phone system with AWS services for analytics Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 13 voice transcription machine learning (ML) playback and other real time processing After receiving the SIPREC INVITE from your on premises phone system Amazon Chime Voice Connector then sends the caller and call recipient media flows to your Amazon Kinesis Video Stream to connect the media streams to other AWS services for other process ing For more information on using SIPREC INVITE to stream media to Kinesis see Streaming Amazon Chime Voice Connector Media to Kinesis Figure 6: SIPREC Support Monitoring Amazon Chime Voice Connectors You can monitor Amazon Chime Voice Connector using Amazon CloudWatch which collects raw data and processes it into readable near real time metrics These metrics are kept for 15 months so that you can access historical information and gain a better perspective on how your audio service is performing Amazon Chime Voice Connector sends metrics to Amazon CloudWatch Metrics that capture and process performance metr ics across all Voice Connectors in your AWS Account You can use Amazon CloudWatch Metrics to create dashboards and setup alarms to monitor the performance and availability of your calling solution You can use Amazon CloudWatch Logs when configuring new V oice Connectors and troubleshooting issues For more in formation see Monitoring Amazon Chime with Amazon Clo udWatch Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 14 CloudWatch Metrics Amazon CloudWatch Metrics provi des a near real time stream of system events that describe metrics pertaining to the usage and performance of your Amazon Chime Voice Connectors Using the Amazon CloudWatch Metrics you can create dashboards set up automated alarms respond quickly to operational changes and take corrective actions CloudWatch Logs You can choos e to send SIP Message Capture L ogs from your Voice Connector to CloudWatch Logs You can use SIP Message Capture Logs when setting up new Voice Connectors or to tro ubleshoot issues with existing Voice Connectors For more information see Monitoring Amazon Chime with Amazon CloudWatch Conclusion Amazon Chime Voice Connector is simpl e to set up via the AWS Management Console or AWS SDK and employees can place and receive calls on their desk phones in minutes Calls are 
delivered to Amazon over a n internet connection using industry standard VoIP With Amazon Chime Voice Connecto r there are no upfront fees commitments or long term contracts You only pay for what you use Contributors Contributors to this document include: • Delyan Radichkov Sr Technical Program Manager Amazon Web Services • Joe Trelli Chime Specialized Soluti ons Architect Amazon Web Services Further Reading For additional information see: • Working with Amazon Chime Voice Connectors • Amazon Chime Pricing • Amazon Chime Documentation • RFC 3261 Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 15 Document Revisions Date Description April 2020 Added fax support ; updated dialed number requirements for outbound calls November 2019 New features and content updates March 2019 First publication Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 16 Appendix A: Call Detail Record (CDR) Specifications Call Detail Record ( CDR ) Storage Details Call Detail Records (CDRs) are stored in your Amazon S3 bucket based on your bucket retention policy CDR objects are stored using names in the following format: AmazonChimeVoiceConnector CDRs/json/ vconID/yyyy/mm/dd/HHMMSSmmmtransactionID where: • vconID – Amazon Chime Voice Connector ID • yyyy/mm/dd – Year month and day that the call started • HHMMssmmm – Start time of call • transactionID – Amazon Chime Voice Connector transaction ID For example: AmazonChimeVoiceConnector CDRs/json/grdcp7r7fjejaautia8rvb/2019/0 3/01/171000020_123456789 CDR Schema CDR object s are stored with no whitespace or newline characters using the following format: Value Description {"AwsAccountId":" AWSaccount ID" AWS account ID "TransactionId":" transaction ID" Amazon Chime Voice Connector transaction ID UUID "CallId": ”SIPcall ID" Customer facing SIP call ID "VoiceConnectorId":" voice connector ID" Amazon Chime Voice Connector ID UUID Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 17 Value Description "Status":"status" Status of the call "StatusMessage ":"status message" Status message of the call "SipAuthUser ":"sipauth user" SIP authentication name "BillableDurationSeconds ":"billable duration inceconds" Billable duration of the call in seconds "BillableDuration Minutes":"billable duration inminutes" Billable duration of the call in minutes "SchemaVersion ":"schema version" The version of the CDR schema "SourcePhone Number":" source phone number" E164 origination phone number "SourceCountry ":"source country" Country of origination phone number "DestinationPhone Number":" destination phone number" E164 destination phone number "DestinationCountry ":"destination country" Country of destination phone number "UsageType ":"usage type" Usage details of the line item in the Price List API "ServiceCode ":"service code" The code of the service in the Price List API "Direction ":"direction " Direction of the call “ Outbound ” or “Inbound ” "StartTimeEpochSeconds ":"start time epoch seconds" Indicates the call start time in epoch/Unix timestamp format "EndTimeEpochSeconds ":"endtime epoch seconds" Indicates the call end time in epoch/Unix timestamp format "Region":"AWSregion"} AWS region for the Voice Connector "Streaming ":{"true|false "} Indicates whether the Streaming audio option was enables for this call if Streaming is not enabled Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 18 Sample Call Detail Record ( CDR ): { "AwsAccountId": " 111122223333 " 
"TransactionId": " 879eee6eeec74167b634a2519506d142 " "CallId": " 777a6b953100721d372188753f2059a8@20301139:8080 " "VoiceConnectorId": " abcd112222223333334444 " "Status": "Completed" "StatusMessage": "OK" "SipAuthUser": " 5600" "BillableDurationSeconds": 6 "BillableD urationMinutes": 01 "SchemaVersion": "20" "SourcePhoneNumber": "+ 15105551212 " "SourceCountry": "US" "DestinationPhoneNumber": "+ 16285551212 " "DestinationCountry": "DE" "UsageType": " USE1USUSoutboundminutes" "ServiceCode": "AmazonChimeVoiceConnector" "Direction": "Outbound" "StartTimeEpochSeconds": 1565399625 "EndTimeEpochSeconds": 1565399629 "Region": "us east1" "Streaming": true } Streaming Detail Record (S DR) Storage Details Streaming Detail Record ( SDR ) objects are stored in your Amazon S3 bucket based on your bucket retention policy S DR objects are stored using names in the following format: AmazonChimeVoiceConnector SDRs/json/ vconID/yyyy/mm/dd/HHMMSSmmmtransactionID where: • vconID – Amazon Chime Voice Connector ID • yyyy/mm/dd – Year month and day that the call started • HHMMssmmm – Start time of call Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 19 • transactionID – Amazon Chime Voice Connector transaction ID Streaming Detail Records (SDRs) always c orrespond to a call detail record matching the object prefix for example “ vconID/yyyy/mm/dd/HHMMSSmmmtransactionID ” For example: AmazonChimeVoiceConnector SDRs/json/grdcp7r7fjejaautia8rvb/2019/0 3/01/171000020_123456789 SDR Schema Value Description {"SchemaVersion ":"schema version" The version of the CDR schema "TransactionId ":"transaction id" Amazon Chime Voice Connector transaction ID UUID "CallId":"SIPcall id" Customer facing SIP call ID "AwsAccountId ":"AWSaccount ID" AWS account ID "VoiceConnectorId ":"voice connector id" Amazon Chime Voice Connector ID UUID "StartTimeEpochSeconds ":"start time epoch second" Indicates the call start time in epoch/Unix timestamp format "EndTimeEpochSeconds ":"endtime epoch second" Indicates the call end time in epoch/Unix timestamp format "Status":"status" Status of the call option (Completed Failed etc) "StatusMessage ":"status message" Details of the call option status "ServiceCode ":"service code" The code of the service in the Price List API "UsageType ":"usage type" Usage details of the line item in the Price List API "BillableDurationSeconds ":"billable duration seconds" Billable duration of the call in seconds "Region":"AWSregion"} AWS region for the Voice Connector Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 20 Sample Streaming Detail Record (SDR) { "SchemaVersion": "10" "AwsAccountId": " 111122223333 " "TransactionId": " 879eee6eeec74167b634a2519506d142 " "CallId": " 777a6b953100721d372188753f2059a8@20301139:8080 " "VoiceConnectorId": " abcd112222223333334444 " "StartTimeEpochSeconds": 1565399625 "EndTimeEpochSeconds": 1565399629 "Status": "Completed" "StatusMessage": " Streaming succeeded " "ServiceCode": "AmazonChime" "UsageType": "USE1 VCkinesisaudiostreaming" "BillableDurationSeconds": 6 "Region": "us east1" } Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 21 Appendix B: SIP Signaling Specifications Ports and Protocols Amazon Chime Voice Connector requires the following ports and protocols Signaling AWS Region Destination Ports US East (N Virginia) 380160/23 UDP/ 5060 TCP/5060 TCP/5061 US West (Oregon) 99772530/24 UDP/ 5060 TCP/5060 TCP/5061 Media AWS Region Destination Ports US East (N Virginia) 
380160/23 UDP/5000:65000 US East (N Virginia) 525562128/25 UDP/1024:65535 US East (N Virginia) 5255630/25 UDP/1024:65535 US East (N Virginia) 3421295128/25 UDP/1024:65535 US East (N Virginia) 34223210/25 UDP/1024:65535 US West (Oregon) 99772530/24 UDP/5000:65000 Supported SIP Methods OPTIONS INVITE ACK CANCEL BYE Unsupported SIP Methods SUBSCRIBE NOTIFY PUBLISH INFO REFER UPDATE PRACK MESSAGE Required SIP Headers In general the service implements SIP as described in RFC 3261 The following SIP headers are required on all OPTIONS INVITE and BYE requests: CallID Contact CSeq From Max Forwards To Via CANCEL requests must also include these headers with the exception of Contact Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 22 Further details about SIP headers can be found in RFC 3261 § 20 SIP OPTIONS Requirements The Request URI of the SIP OPTIONS requests that are sent to the se rvice must identify the Voice Connector host name For example : OPTIONS sip:abcdefghijklmnop12345voiceconnectorchimeaws SIP/20 SIPREC INVITE Requirements The Request URI must identif y the Voice Connector host name For example: INVITE sip:+16285551212@abcd112222223333334444gvoiceconnectorchimeaws:5060 SIP/20 The user portion of the From: header must have a number in E164 format For example: From: +16285551212 <sip:+16285551212@19216810010 >;tag=gK1005c68e If you experience connectivity issues or dropped packets the potential reason is that the UDP packets are dropped by the participating network elements such as routers or receiving hosts on the internet because the UDP packets are larger than the maximum transmission unit (MTU) You can resolve this issue by either clearing the Don’t fragment (DF) flag or alternatively you can use TCP Dialed Number Requirements • Outbound calls: The dialed number must be valid and presented in E164 format Supported countries can be found under Calling Plan on the Termination Page in the Chime Console Countries can be allowed or disallowed by the customer If a call is placed from a customer PBX to a number that is not v alid the call will be rejected with a SIP 403 Forbidden response The dialed number must be presented in E164 format as the user portion of the Request URI in the SIP INVITE for example: INVITE sip:+12125551212@abcdefghijklmnop12345voiceconnectorchime aws The leading “ +” is required Amazon Web Services Leveraging Am azon Chime Voice Connector for SIP Trunking Page 23 • Inbound calls: The called number is presented in E164 format as the user portion of the Request URI in the SIP INVITE For example: INVITE sip:+1 2065551212@abcdefghijklmnop12345voiceconnectorchimeaws Caller ID Number Requirements • Outbound Calls: The caller ID number is derived from the user portion of the PAsserted Identity: header or the From: header in that order The caller ID must be a valid E164 formatted phone number • Inbound Calls: The caller ID number is presented in E164 format as the user portion of the PAssertedIdentity: and From: headers Caller ID Name The delivery of Caller ID Name for inbound calls to your on premises phone system is not supported You can enable the del ivery of Caller ID name for outbound calls from your on premises phone system using the Outbound Calling Name (CNAM) feature Digest Authentication Digest Authentication is an optional feature and it is implemented as described in RFC 3261 section 22 Call Encryption Enabling encryption in Amazon Chime Voice Connector to use TLS for SIP signaling and Secure RTP (SRTP) for media Encryption is 
enabled using the Secure Trunking option in the console and the service uses port 5061 When enabled all inbou nd calls use TLS and unencrypted outbound calls are blocked You must import the Amazon Chime root certificate Note that at this time the Amazon Chime Voice Connector service uses a wildcard certificate(*voiceconnector chimeaws ) SRTP is implemented as described in RFC 4568 Amazon Web Services Leveraging Amazon Chime Voice Connector for SIP Trunking Page 24 For outbound calls the service uses the SRTP default AWS counter cipher and HMAC SHA1 message authentication The following ciphers are supported for inbound and outbound calls: AES_CM_128_HMAC_SHA1_80 AES_CM_128_HMAC_SHA1_32 AES_ CM_192_HMAC_SHA1_80 AES_CM_192_HMAC_SHA1_32 AES_CM_256_HMAC_SHA1_80 AES_CM_256_HMAC_SHA1_32 At least one cipher is mandatory but all can be included in preference order There is no additional charge for voice encryption Session Description Protocol (SD P) SDP is implemented as described in RFC 4566 Supported Codecs The service support s G711 µ law and G722 pass through for Amazon Chime meeting dial ins only DTMF Dualtone multifrequency (DTMF) is implemented as described in RFC 4733 (also known as RFC 2833 DTMF) Amazon Web Services Leveraging Amazon Chime Voice Connector for S IP Trunking Page 25 Appendix C: CloudWatch Metrics and Logs Examples CloudWatch Metrics Amazon Chime Voice Connector sends usage and performance metrics to Amazon CloudWatch The namespace is AWS/ChimeVoiceConnector To find a complete list of the CloudWatch Metrics sent by Amazon Chime Voice Connector see Monitoring Amazon Chime with Amazon CloudWatch CloudWatch Logs SIP Capture Log Example CloudWatch Logs log group name pattern /aws/ChimeVoiceConnectorSipMessages/[VoiceConnectorID] {"voice_connector_id":"abcdefg628ghsyzd8bwmh6""event_timestamp":"20 191007T17:16:51Z""call_id":"5bf5ecf1 27a14068a7ee 6bd828a5f54a""sip_message":" \nINVITE sip:+15105551212@abc defg628ghsyzd8bwmh6gvoiceconnectorchimeaws:5 061 SIP/20 \nVia: SIP/20/TLS 19216810010 :8081;branch=z9hG4bK66a2d803;rport \nMaxForwards: 69\nFrom: "Testing Account" <sip:+16285551212@ 19216810010 :8081>;tag=as283a6f9b \nTo: <sip:+15105551212@abcdefg628ghsyzd8bwmh6gvoiceconnectorchimeaws: 5061>\nContact: <sip:+16285551212@ 19216810010 :8081;transport=TLS> \nCallID: 6347f9d4697c1539361a1d97727bd2c8@ 19216810010 :8081\nCSeq: 102 INVITE\nUserAgent: Asterisk PBX 18323 \nDate: Mon 07 Oct 2019 17:16:51 GMT \nAllow: INVITE ACK CANCEL OPTIONS BYE REFER SUBSCRIBE NOTIFY INFO PUBLISH MESSAGE \nSupported: replaces timer\nContentType: application/sdp \nContentLength: 322\n\nv=0\no=root 1248709283 1248709283 IN IP4 19216810010\ns=Asterisk PBX 18323 \nc=IN IP4 19216810010 \nt=0 0\nm=audio 15406 RTP/SAVP 0 101 \na=rtpmap:0 PCMU/8000 \na=rtpmap:101 telephone event/8000 \na=fmtp:101 0 16\na=ptime:20 \na=sendrecv \na=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:OkiaSoC0tQG15E7eG21 +7DFprLZku9XkE8hl9Zlc \n'"}
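As a companion to the CDR layout in Appendix A, the following is a minimal sketch of how those records might be pulled from the configured Amazon S3 bucket and rolled up for billing reconciliation. It is illustrative only: the bucket name is a placeholder, and the object-key prefix follows the naming pattern described in Appendix A, so verify the exact prefix that appears in your own bucket.

```python
# Minimal sketch (boto3): total billable minutes per call direction from the
# CDR objects described in Appendix A. Bucket name is illustrative; the key
# prefix follows the Appendix A layout and should be checked against your bucket.
import json
from collections import defaultdict

import boto3

BUCKET = "example-cdr-bucket"                       # illustrative bucket name
PREFIX = "Amazon-Chime-Voice-Connector-CDRs/json/"  # per the Appendix A layout

s3 = boto3.client("s3")
totals = defaultdict(float)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        cdr = json.loads(body)
        # Fields per the CDR schema: Direction and BillableDurationMinutes.
        totals[cdr["Direction"]] += float(cdr["BillableDurationMinutes"])

for direction, minutes in totals.items():
    print(f"{direction}: {minutes:.1f} billable minutes")
```

The same pattern extends to the Streaming Detail Records, which share the object-key layout and much of the schema.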
|
General
|
consultant
|
Best Practices
|
Leveraging_Amazon_EC2_Spot_Instances_at_Scale
|
ArchivedLeveraging Amazon EC2 Spot Instances at Scale March 2018 This paper has been archived The latest version is now available at: https://docsawsamazoncom/whitepapers/latest/costoptimizationleveraging ec2spotinstances/costoptimizationleveragingec2spotinstanceshtmlArchived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction to Spot Instances 1 When to Use Spot Instances 1 How to Request Spot Instances 2 How Spot Instances Work 2 Managing Instance Termination 3 Launch Groups 3 Spot Fleets 4 Spot Request Limits 4 Determining the Status of Your Spot Instances 4 Spot Instance Interruptions 5 Spot Best Practices 5 Spot Integration with Other AWS Services 6 Amazon EMR Integration 6 AWS CloudFormation Integration 6 Auto Scaling Integration 6 Amazon ECS Integration 7 Amazon Batch Integration 7 Conc lusion 7 Archived Abstract This is the fourth in a series of whitepapers designed to support your cloud journey This paper seeks to empower you to maximize value from your investments improve forecasting accuracy and cost predictability create a culture of ownership and cost transparency and cont inuously measure your optimization status This paper provides an overview of Amazon EC2 Spot Instances as well as best practices for using them effectively ArchivedAmazon Web Services – Leveraging Spot Instances at Scale Page 1 Introduction to Spot Instances In addition to OnDemand and Reserved Instances the third major Amazon Elastic Compute Cloud (Amazon EC2) pricing model is Spot Instances With Spot Instances you can use spare Amazon EC2 computing capacity at discounts of up to 90 % compared to On Demand pricing That means you can significantly reduce the cost of running y our applications or grow your application’s compute capacity and throughput for the same budget The only difference between On Demand Instances and Spot Instances is that Spot Instances can be interrupted by EC2 with two minutes of notification when EC2 needs the capacity back Unlike Reserved Instances Spot Instances do not require an upfront commitment However because Spot Instances can be terminated if the Spot price exceeds your maximum price or if no capacity is available for the instance type you’ve specified they are b est for flexible workloads When to Use Spot Instances You can use Spot Instances for various fault tolerant and flexible applications Examples include web servers API backends continuous integration/continuous development and Hadoop data processing Workloads that constantly save data to persistent storage —including Amazon Simple Storage Service (Amazon S3) Amazon Elastic Block Store (Amazon EBS) Amazon Elastic File System (Amazon EFS) Amazon DynamoDB or Amazon Relational Database Service (Amazon RDS) —can work effectively with Spot Instances You can also take advantage of Spot 
Instances to run and scale applications such as stateless web services image rendering big data analytics and massively parallel computations Spot Instances are typically used to supplement On Demand In stances where appropriate and are not meant to handle 100 % of your workload However you can use all Spot Instances for any stateless non production application such as dev elopment and test servers where occasional downtime is acceptable They are no t a good choice for se nsitive workloads or databases ArchivedAmazon Web Services – Leveraging Spot Instances at Scale Page 2 How to Request Spot Instances To use Spot Instances y ou create a Spot Instance request that includes the number of instances the instance type the Availability Zone and the maximum price that you ar e willing to pay per instance hour You can create a Spot Instance request using the Launch Instance Wizard from the Amazon EC2 console or Amazon EC2 API For details on how to create a Spot Instance request using the console see Creating a Spot Instance Request For details on how to request Spot Instances through the Amazon EC2 API see RequestSpotInstances in the Amazon EC2 API Reference You can also launch Spot Instances through other AWS services such as Amazon EMR AWS Data Pipeline AWS CloudFormation and Amazon Elastic Container Service (Amazon ECS) as well as through third party tools To learn more about Spot Instance requests see Spot Instance Requests How Spot Instances Work The Spot price is determined by long term trends in supply and demand for EC2 spare capacity You pay the Spot price that's in effect at the beginning of each instance hour for your running instanc e billed to the nearest second With Spot Instances you never pay more than the maximum price you specif y If the Spot price exceeds your maximum price for a given instance or if capacity is no longer available your instance will automatically be terminated (or be stopped/hibernated if you opt for this b ehavior on persistent request) The Spot price may change anytime but in general it will change once per hour and in many cas es less frequently AWS publishes the current Spot price and historical prices for Spot Instances through the describe spot price history command as well a s the AWS Management Console This can help you assess the levels and timing of fluctuations in the Spot price over time Spot Instances perform exactly like other EC2 instances while running and can be terminated when you no longer need them If you termi nate your instance you pay for any partial hour used (as you do for On Demand or Reserved ArchivedAmazon Web Services – Leveraging Spot Instances at Scale Page 3 Instances) However you are not charged for any partial hour of usage if the Spot price goes above your maximum price and Amazon EC2 interrupts your Spot I nstance Managing Instance Termination Spot offers three features to help you better track and control when Spot Instances run and terminate (or stop/hibernate) • Termination notices – If you need to save state upload final log files or remove Spot Instances from Elastic Load Balancing before interruption you can use termination notices which are issued two minutes prior to interruption To learn more about managing interruptions see Spot Instance Interruptions • Persistent requests – You can opt to set your request to remain open so that a new instance will be launched in its place when the instance is interrupted You can also have your Amazon EBS backed instance stopped upon interruption and restarted when Spot has 
capacity at your preferred price To learn more about persistent and one time requests see Spot Instance Request States • Block dur ations – If you need to execute workloads continuously for 1–6 hours you can also specify a duration requirement when requesting Spot Instances To learn more about block durations for Spot Instances see Specifying a Duration for Your Spot Instances Launch Groups You can launch a set of Spot Instances at once in a launch group or in an Availability Zone group With a launch group if the Spot service must terminate one of the instances in a launch group it must terminate them all With an Availability Zone group the Spot service launches a set of Spot Instances in the same Availability Zone When launch groups are required try to mi nimize the group size Larger groups have a lower chance of being fulfilled Also be aware that specifying a specific Availability Zone can increase your chances of successfully launching To learn more about launch groups and Availability Zone groups see How Spot Instances Work ArchivedAmazon Web Services – Leveraging Spot Instances at Scale Page 4 Spot Fleets With a Spot Fleet you can automatically request Spot Instances with the lowest price p er unit of capacity To use a Spot Fleet create a Spot Fleet request that includes the target capacity based on your application needs (in any unit including instances vCPUs memory storage or network throughput) one or more launch specifications for the instances and the maximum price that you are willing to pay To learn more about Spot Fleets see How Spot Fleet Works Spot Request Limits By default there is an ac count limit of 20 Spot Instances per AWS Region If you terminate your Spot Instance but do not cancel the request the request counts against this limit until Amazon EC2 detects the termination and closes the request Spot Instance limits are dynamic Whe n your account is new your limit might be lower than 20 to start but then increase over time In addition your account might have limits on specific Spot Instance types If you submit a Spot Instance request and you receive the error Max Spot Instance c ount exceeded you can go to the AWS Support Center and request a limit increase To learn more about default limits and how to request a limit increase see AWS Service Limits Determining the Status of Your Spot Instances By reviewing Spot status you can see why your Spot requests state has or has not changed and you can learn how to optimize your Spot requests to get them fulfilled To find the Spot status you can use the DescribeSpotInstanceRequests API action or the ec2describe spot instance requests using th e AWS C ommand Line Interface (CLI) The AWS Management Console makes a detailed billing report available which shows Spot Instance start and termination times for all instances You can check the billing report against historical Spot prices via the API to verify that the Spot price billed was correct ArchivedAmazon Web Services – Leveraging Spot Instances at Scale Page 5 Spot Instance Interruptions You can choose to have the Spot service stop instead of terminate your Amazon EBS backed Spot I nstances when they are interrupted Spot can then fulfill your request by restartin g instances from a stopped state when capacity again becomes available within your price and time requirements To use this new feature choose stop instead of terminate as the interruption behavior when submitting a persistent Spot request When you choos e stop Spot will shut down your instance upon interruption The EBS 
root device and attached EBS volumes are saved and their data persists When capacity is available again within your price and time requirements Spot will restart your instance Upon res tart the EBS root device is restored from its prior state previously attached data volumes are reattached and the instance retains its instance ID This feature is available for persistent Spot requests and Spot Fleets with the maintain fleet option ena bled You will not be charged for instance usage while your instance is stopped EBS volume storage is charged at standard rates Spot Best Practices Your instance type requirements budget requirements and application design will determine how to apply the following best practices for your application: • Be flexible about instance types Test your application on different instance types when possible Because prices fluctuate independently for each instance type in an Availability Zone you can often get more compute capacity for the same price when you have instance type flexibility Request all instance types that meet your requirements to further reduce costs and improve application performance Spot Fleets enable you to request multiple instance types simultaneously • Choose pools where prices haven't changed much Because prices adjust based on long term demand popular instance types (such as recently launched instan ce families) tend to have more price adjustments Therefore picking older generation instance types that are less popular tends to result in lower costs and fewer interruptions ArchivedAmazon Web Services – Leveraging Spot Instances at Scale Page 6 Similarly the same instance type can have different prices in different Availability Zones • Minimize the impact of interruptions Amazon EC2 Spot's Hibernate feature allows you to pause and then resume Amazon EBS backed instances when capacity is available Hibernate is just like closing and opening your laptop lid with your app lication starting up right where it left off For more information see Hibernate Your Instance Spot Integration with Other AWS Services Amazon EC2 Spot Instances integrate with several AWS services Amazon EMR Integration You can run Amazon EMR clusters on Spot Instances and significantly reduce the cost of processing vast amounts of data on managed Hadoop clusters You can run your EMR clusters by easily mixing Sp ot Instances with On Demand and Reserved Instances using the instance fleet feature To learn more about setting up an EMR cluster with Spot see the EMR Developer Guide AWS CloudFormation Integration AWS CloudFormation makes it easy to organize and deploy a collection of AWS resources including EC2 Spot and lets you describe any dependencies or special parameters to pass in at runtime For a sample high performance computing framework using AWS Cloud Formation that can use Spot Instances see the cfncluster demo To learn more about setting up AWS CloudFormation with Spot see the Amazon EC2 User Guide Auto Scaling Integration You can use Amazon EC2 Auto Scaling groups to launch and manage Spot Instances maintain application a vailability and scale your Amazon EC2 Spot capacity up or down automatically according to the conditions and maximum prices you define To learn more about using Amazon EC2 Auto Scaling with Spot Instances see the Amazon EC2 Auto Scaling User Guide ArchivedAmazon Web Services – Leveraging Spot Instances at Scale Page 7 Amazon ECS Integration You can run Amazon ECS clusters on Spot Instances to reduce the operational cost of running containerized applications 
on Amazon ECS. The Amazon ECS console is also tightly integrated with Amazon EC2 Spot, and you can use the Create Cluster Wizard to easily set up an ECS cluster with Spot Instances. Amazon Batch Integration AWS Batch plans, schedules, and executes your batch computing workloads on AWS. AWS Batch dynamically requests Spot Instances on your behalf, reducing the cost of running your batch jobs. Conclusion Whether you have flexible compute needs or want to augment capacity without growing your budget, Spot Instances can be a great way to optimize your AWS costs. By properly architecting your workloads, you can take advantage of Spot pricing for a wide range of needs. For more information about Spot Instances, visit the Spot Instances overview.
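The request workflow described earlier can also be scripted. The following minimal Python (boto3) sketch is illustrative rather than part of the original paper: it creates a one-time Spot Instance request and waits for it to be fulfilled. The AMI, key pair, security group, and subnet IDs are placeholders for your own values, and the maximum price shown is an arbitrary example.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a one-time Spot request for a single instance.
response = ec2.request_spot_instances(
    InstanceCount=1,
    Type="one-time",                 # or "persistent" to relaunch after interruption
    SpotPrice="0.10",                # optional maximum price per instance hour
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",        # placeholder AMI ID
        "InstanceType": "c5.large",
        "KeyName": "my-key-pair",                  # placeholder key pair
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
        "SubnetId": "subnet-0123456789abcdef0",
    },
)
request_id = response["SpotInstanceRequests"][0]["SpotInstanceRequestId"]

# Wait until the request is fulfilled, then read back the request status
# and the ID of the instance that was launched.
ec2.get_waiter("spot_instance_request_fulfilled").wait(
    SpotInstanceRequestIds=[request_id]
)
fulfilled = ec2.describe_spot_instance_requests(
    SpotInstanceRequestIds=[request_id]
)["SpotInstanceRequests"][0]
print(fulfilled["Status"]["Code"], fulfilled.get("InstanceId"))

Changing Type to "persistent" keeps the request open so that a replacement instance is launched after an interruption, matching the persistent-request behavior described above.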
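Similarly, the Spot price history mentioned in "How Spot Instances Work" is available programmatically. In this hedged example the instance type, platform, and region are assumptions rather than recommendations; it pulls the last 24 hours of prices so you can judge how stable a capacity pool has been.

from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look at the last 24 hours of Spot prices for one instance type to gauge
# how much (or how little) the price has moved in each Availability Zone.
end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)
history = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=start,
    EndTime=end,
)
for item in history["SpotPriceHistory"]:
    print(item["AvailabilityZone"], item["Timestamp"], item["SpotPrice"])

Comparing the spread across Availability Zones is one way to apply the best practice of choosing pools whose prices have not changed much.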
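Finally, the two-minute interruption notice is exposed through the instance metadata service at the spot/instance-action path. The sketch below assumes IMDSv1 is enabled on the instance (with IMDSv2 you would first request a session token); it simply polls for a notice and reacts once one appears.

import json
import time
import urllib.error
import urllib.request

METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def check_for_interruption():
    """Return the pending interruption notice, or None if there isn't one."""
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
            return json.loads(resp.read())
    except urllib.error.HTTPError as err:
        if err.code == 404:          # 404 means no interruption is scheduled
            return None
        raise

while True:
    notice = check_for_interruption()
    if notice:
        # Roughly two minutes remain: save state, drain connections,
        # deregister from the load balancer, upload final logs, and so on.
        print("Interruption scheduled:", notice.get("action"), notice.get("time"))
        break
    time.sleep(5)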
|
General
|
consultant
|
Best Practices
|
Leveraging_AWS_Marketplace_Storage_Solutions_for_Microsoft_SharePoint
|
ArchivedLeveraging A WS Marketplace Storage Solutions for Microsoft SharePoint January 2018 This paper has been archived For the latest technical content about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/Archived© 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the info rmation in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditio ns or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its custom ers Archived Contents Introduction 1 About AWS Marketplace 2 About SoftNAS Cloud NAS 4 Architecture Considerations 4 Capacity Planning 4 Storage Performance 5 Fault Tolerance 5 High Availability 5 High Level Architecture 5 Deployment 6 SoftNAS IAM Policy and Role 6 Marketplace AMI Deployment with EC2 Console 8 Limited Access Security Group 10 Configuration 11 Administrative Setup 11 Active Directory Membership 17 SoftNAS Snap Replication 19 SoftNAS SNAP HA 20 Conclusion 22 Contributors 23 Further Reading 23 Document Revisions 24 Archived Abstract Designing a cloud storage solution to accommodate traditional enterprise software such as Microsoft SharePoint can be challenging Microsoft SharePoint is complex and demands a lot of the underlying storage that’s used for its many databases and content repositories To ensure that the selected storage platform can accommodate the availability connectivity and performan ce requirements recommended by Microsoft you need to use third party storage solutions that build on and extend the functionality and performance of AWS storage services An appropriate storage solution for Microsoft SharePoint needs to provide data redund ancy high availability fault tolerance strong encryption standard connectivity protocols point intime data recovery compression ease of management directory integration and support The focus of this paper is to walk through the deployment and co nfiguration of SoftNAS Cloud NAS an AWS Marketplace third party storage product that provides secure highly available redundant and fault tolerant storage to the Microsoft SharePoint collaboration suite Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 1 Introduction Successful Micr osoft SharePoint deployments require significant upfront planning to understand the infrastructure and application architecture required A successful deployment would ensure performance scalability high availability and fault tolerance across all aspec ts of the application The primary component of a successful Microsoft SharePoint architecture is the proper understanding and sizing of the storage system used by the SQL Server databases that store analyze and deliver content for the SharePoint applica tion Microsoft SharePoint requires storage for several key aspects of its architecture to include a quorum for the Windows Services Failover Cluster (WSFC) 
WSFC witness server CIFS file share Microsoft SQL Server Always On clustered database storage Remote Blob Storage (RBS) and Active Directory integration Microsoft provides detailed guidance on SharePoint storage architecture and capacity planning in the Storage and SQL Server capacity planning and configuration Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 2 (SharePoint Server) documentation on TechNet at https://technetmicrosoftcom/en us/library/cc298801(v=office16)aspx This guidance described in the Architecture Considerations section provides details about how you can use a SharePoint implementation the types and numbers of objects that you can store the per formance required for object storage and retrieval and the storage design that best fits the requirements for a SharePoint implementation This guidance drives how you can use the underlying storage provisioned with Amazon AWS in conjunction with AWS Mark etplace third party storage products to provide a successful storage architecture for deploying Microsoft SharePoint on AWS About AWS Marketplace AWS Marketplace is a curated digital catalog that provides a way for customers around the globe to find buy and immediately start using software that runs on AWS The storage software products available on AWS Marketplace are provided and maintained by industry newcomers with born inthecloud solutions as well as existing industry leaders They include many m ainstream storage products that are already familiar and commonly deployed in enterprises AWS Marketplace provides value in several ways: saving money with flexible pricing options access to easy 1 click deployments of preconfigured and optimized Amazon Machine Images (AMIs) software as a service (SaaS) AWS CloudFormation templates and ensures that products are scanned periodically for known vulnerabilities malware default passwords and other security related concerns Several solutions from AWS Marketplace can provide appropriately available and scaled storage for SharePoint implementations You should consider the following when choosing a product: • High availability (HA) – Multiple Availability Zone failover and multiple region failover • Fault tolerance – Multiple availability zone and multiple region replication • Performance – RAID mapping complementary to Amazon Elastic Block Store (Amazon EBS) and instances sized for high IO • Encryption – Integration with AWS Key Management Service (KMS) or built in data encryption capability Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 3 • Compression – Proprietary or industry adopted compression capability • Standard connectivity protocols – iSCSI and CIFS • Point intime data recovery – Proprietary or industry adopted data recovery capability • Active Directory integration – Domain membership with user group and computer controls AWS Marketplace Products for SharePoint integration Product Vendor Product Name Link Datomia Datomizer S3NAS https://awsamazoncom/marketplace/seller profile?id=e5778de2 bea7 48d1 96c9 9bc9e6611458 NetApp ONTAP Cloud for AWS https://awsamazoncom/marketplace/seller profile?id=ba83fe1c 57eb 4bac 93a5 5f5d7da7e2f2 SoftNAS SoftNAS Cloud NAS https://awsamazoncom/marketplace/seller profile?id=28ae3a2c 9300 4a7c 898f6f6df5692092 StarWind StarWind Virtual SAN https://awsamazoncom/marketplace/seller profile?id=395b939f 9b80 4d40 bb58 d099abdb342f Archived Amazon Web Services – Leveraging AWS 
Marketplace Partner Storage Solutions for Microsoft SharePoint Page 4 The solution proposed in this paper uses the AWS Marketplace SoftNAS Cloud NAS product however you can us e other AWS Marketplace storage products to provide similar functionality About SoftNAS Cloud NAS Secure redundant and highly available storage for content is a critical requirement for any collaboration suite SharePoint can accumulate significant amo unts of data over time increasing the size and scope of the infrastructure required to serve this data with the continued expectations around performance and availability Additional details about SoftNAS Cloud NAS capabilities and features are available on the SoftNAS AWS Marketplace product webpage at https://awsamazoncom/marketplace/pp/B00PJ9FGVU Architecture Considerations Capacity Planning SharePoint uses storage in several ways and selecting the appropriate configuration is a key aspect in the overall performance of the SharePoint collaboration suite AWS Marketplace storage product provides storage for the Microsoft SQL Server 2016 databases and for SharePoint Remote BLOB Storage (RBS) which stores larger binary objects (for example Visio diagrams PowerPoint presentations) within a file system outside the SharePoint Microsoft SQL database Microso ft provides detailed guidance related to SharePoint capacity planning in Storage and SQL Server capacity planning and configuration (SharePoint Server) on TechNet that takes into account the type and number of artifacts you plan to store in your SharePoint environment (see https://technetmicrosoftcom/en us/library/cc298801(v=office16)aspx ) This guidance helps you select and size the appropriate Amazon EC2 instan ces you need to provide database and content storage capacity and necessary I/O performance to meet your needs Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 5 Storage Performance Your storage configuration varies based on the requirements you gather from the SharePoint capacity planning guidance Amazon EBS volumes can be configured in a variety of ways (for example RAID striping different volume sizes etc) to yield different performance characteristics For high I/O scenarios you can create and attach additional Amazon EBS volumes and stripe using RAID software to increase the total number of I/O operations per second (IOPS) Each Amazon EBS volume is protected from physical drive failure through drive mirroring so using a RAID level higher than RAID 0 is unnecessary Fault Toler ance For multi AZ fault tolerance SoftNAS instances need to be deployed independently because each instance must reside in a separate Availability Zone When you configure SnapReplicate the SoftNAS replication component the Availability Zone of replicat ion source and target are validated to ensure that the instances are not in the same Availability Zone High Availability You need to configure each SoftNAS instance with a second network interface that you’ll use later to establish connectivity for high availability The secondary interface is used to create a virtual IP address within the Amazon Virtual Private Cloud (Amazon VPC) The virtual IP address is used as the target for iSCSI and CIFS storage clients and enables continued connectivity to the Sof tNAS Cloud NAS in the event that the primary instance fails You can add the secondary network interface when you create the instance or at a later time prior to enabling SoftNAS SnapHA HighLevel Architecture To implement the 
Microsoft SharePoint solution described in this paper includes the following components: • Two AWS Marketplace SoftNAS Cloud NAS instances • Each instance deployed in separate Availability Zones • Each instance deployed with two network interfaces Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 6 • Each ins tance deployed with the appropriate number and configuration of Amazon EBS volumes • SoftNAS Snap Replicate to replicate the source instances to the target instance • SoftNAS SnapHA to provide high availability and failover capability between instances • Virtua l IP address to provide SoftNAS SnapHA cluster connectivity (VIP is allocated from an address range outside the scope of the CIDR block for VPC of each instance) Deployment SoftNAS IAM Policy and Role Prior to deploying the SoftNAS Cloud NAS instances you need to create a custom IAM role that allows the setup and configuration of SoftNAS Snap high availability (HA) You must use the name SoftNAS_HA_IAM for the role because the IAM role is hard coded in the SoftNAS Snap HA application Create the SoftNAS_HA_IAM role with the following policy: Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 7 { "Version": "2012 1017" "Statement": [ { "Sid": "Stmt1444200186000" "Effect": "Allow" "Action": [ "ec2:ModifyInstanceAttribute" "ec2:DescribeInstances" "ec2:CreateVolume" "ec2:DeleteVolume" "ec2:CreateSnapshot" "ec2:DeleteSnapshot" "ec2:CreateTags" "ec2:DeleteTags" "ec2:AttachVolume" "ec2:DetachVolume" "ec2:DescribeInstances" "ec2:DescribeVo lumes" "ec2:DescribeSnapshots" "awsmarketplace:MeterUsage" "ec2:DescribeRouteTables" "ec2:DescribeAddresses" "ec2:DescribeTags" "ec2:DescribeInstances" "ec2:ModifyNetworkInterfaceAttribute" "ec2:ReplaceRoute" "ec2:CreateRoute" "ec2:DeleteRoute" "ec2:AssociateAddress" "ec2:DisassociateAddress" "s3:CreateBucket" "s3:Delete*" "s3:Get*" "s3:List*" "s3:Put*" ] "Resource": [ "*" ] Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 8 The IAM policy grants users permissions to access APIs for Amazon EC2 Amazon S3 and AWS Marketplace • Amazon EC2 permissions allow for management of instance attributes volumes tags snapshots route tables routes network attributes and IP addresses • Amazon S3 permissions allow for the setup of SoftNAS Snap Replication and SnapHA • AWS Marketplace permissions allow for metered billing Marketplace AMI Deployment with EC2 Console You can deploy the SoftNAS Cloud NAS using the Amazon EC2 console To do this open the console select Launch Instance choose AWS Marketplace type SoftNAS in the search box and then select the appropriate SoftNAS storage configuration from the results list } ] } Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 9 After you choose a SoftNAS Cloud NAS configuration you can complete the rest of the process to deploy and configure the SoftNAS Cloud NAS instance You need to deploy two SoftNAS Cloud NAS instances to configure fault tolerance and h ighavailability but you need to deploy each instance independently so that you can select separate Availability Zones For this implementation you add instance storage to accommodate the WSFC quorum majority disk SharePoint databases (for example tem pdb content usage search transaction logs) a Microsoft WSFC witness file share and SharePoint RBS Storage using separate 
Amazon EBS volumes for each database as recommended by Microsoft for optimal performance You can also add initial or additional storage from the SoftNAS GUI after deployment For more information see Storage and SQL Server capacity planning and configuration (SharePoint Server 2013) at https://technetmic rosoftcom/en us/library/cc298801aspx Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 10 To complete the instance deployment follow the Amazon EC2 launch wizard providing the appropriate input for instance type instance configuration details addition of storage tags and security group configuration After you review the launch configuration you need to select a key pair to use for post deployment administration prior to launching the SoftNAS Cloud NAS instance Select the appropriate key pair and then launch the instance Limited Access Security Group SoftNAS Cloud NAS instances require access for administration on ports TCP 22 and TCP 443 and access for iSCSI connectivity on port TCP 3260 SoftNAS Snap Replicate and Snap HA require SSH between instances as well as the additional ICMP Echo Request and Echo Reply configuration Configure inbound security group rules to accommodate this connectivity and to limit inbound traffic from authorized sources You can limit access to the SoftNAS storage to accept only traffic from authorized sources by adding the appropriate sources in the configuration Management access on ports 22 and 443 is required only from the jump server instances iSCSI and CIFS access is required only from the Microsoft SQL Server database instances and WSFC file share witness ICMP and SSH connectivity are required between the subnets used by the SoftNAS Cloud NAS instances Security Group Inbound Source Type Ports SoftnasAdmin Jump Servers and RDGW Servers SSH HTTPS TCP 22 TCP 443 Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 11 Security Group Inbound Source Type Ports SoftnasISCSI Microsoft SQL Servers ISCSI TCP 3260 SoftnasCIFS WSFC Witness Server CIFS CIFS CIFS AD UDP 137 & 138 TCP 139 & 445 TCP 389 SoftnasCluster SoftNAS Replication and HA members SSH ICMP ICMP TCP 22 Echo Request Echo Reply Configuration Administrative Setup After you provision your SoftNAS Cloud NAS instances you access the instances using the Amazon EC2 console Because the SoftNAS EC2 instance is deployed into a private subnet within the Amazon VPC access is restricted through a bastion host or remote desktop gateway server with access to the SoftNAS Cloud NAS security group For more information see Controlling Network Access to EC 2 Instances Using a Bastion Server on the AWS Security Blog at https://awsamazoncom/blogs/security/controlling network access toec2instances using abastion server/ The default user name is softnas and the default password is set as the instance ID which you can find in the Amazon EC2 console After you log in you see a Getting Started Checklist that you can use to configure your SoftNAS storage By following the checklist you can set up and present your storage targets quickly Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 12 The Amaz on EBS storage volumes that you added during deployment are available to each SoftNAS Cloud NAS instance as a device that needs a partition Using the SoftNAS administration interface you need to partition all appropriate devices Optionally you can par 
tition devices using the SoftNAS command line interface (CLI) After partitioning is complete the devices are available and you can assign them to a storage pool Create storage pools that accommoda te the storage capacity and performance requirements required For this solution you create separate storage pools for each Amazon AWS EBS storage device When you configure the storage pool you can set up an additional layer of encryption that allows So ftNAS Cloud NAS to encrypt data You can use an encryption password or the AWS Key Management Service (KMS) to implement encryption key management For more information see the AWS KMS website at https://aws amazoncom/kms/ Optionally you can create storage pools using the SoftNAS CLI ec2user @ip100133229:~$ /usr/local/bin/softnas cmd parted_command partition_all t { "result": { "msg": "All partitions have been created successfully" "records": { "msg": "All partitions have been created successfully" } "success": true "total": 1 } "session_id": "8756" "success": true } ec2user @ip100133229:~$ /usr/local/bin/softnas cmd createpool /dev/xvdb quorum 0 on LUKSpassword123 standard off on t { Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 13 "result": { "msg": "Create pool 'quorum' was successful" "records": { "Available": 70996566768736002 "Used": 000034332275390625 "compression": "on" "dedup": "off" "dedupfactor": "100x" "free_numeric": 7623198310 "free_space": "71G" } "no_disks": 5 "optimizations": "Compress" "pct_used": "0%" "pool_name": "quorum" "pool_type": "Standard" "provisioning": "Thin" "request_arguments": { "cbPoolCaseinsensitive": "off" "cbPoolTrim": "on" "forcedCreation": "on" "opcode": "createpool" "pool_name": " quorum" "raid_abbr": "0" "selectedItems": [ { "disk_name": "/dev/xvdb" } ] "sync": "standard" "useLuksEncryption": "on" } "status": "ONLINE" "time_updated": "Oct 16 2017 15:43:01" "total_numeric": 7623566950 "total_space": "71G" "used_numeric": 368640 "used_space": "3600K" } "success": true "total": 21 } "session_id": "8756" Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 14 ec2user @ip100133229:~$ /usr/local/bin/softnas cmd createvolume vol_name=quorum pool=quorum vol_type=blockdevice provisioning=thin exportNFS=off shareCIFS=off ShareISCSI=on dedup=on enable_snapshot= off schedule_name=Default hourlysnaps=0 dailysnaps=0 weeklysnaps=0 sync=always pretty_print { "result": { "msg": "Volume 'LUN_quorum' created" "records": { "Available": 70999999999999996 "Snapshots": 0 "Used": 5340576171875e 05 After you create the storage pools you must allocate the capacity in each storage pool to SoftNAS volumes to enable remote connectivity as iSCSI LUNs and CIFS shares Optionally you can create volumes with the SoftN AS CLI iSCSI volume example: "success": true } Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 15 "cbSnapshotEnabled": "1" "compression": "off" "compressratio": "100x" "dailysnaps": 0 "dedup": "on" "free_numeric": 76235669503999996 "free_space": "71G" "hourlysnaps": 0 "logicalused": "00G" "minimum_threshold": "0" "nfs_export": null "optimizations": "Dedup" "pct_used": "0%" "pool": "quorum" "provisioning": "Thin" "replication": false "request_arguments": { "cbSnapshotEnabled": "on" "dailysnaps": "0" "dedup": " on" "exportNFS": "off" "hourlysnaps": "0" "opcode": "createvolume" "pool": "quorum" "provisioning": "thin" "schedule_name": 
"Default" "shareCIFS": "off" "sync": "always" "vol_name": "quorum" "vol_type": "blockdevice" "weeklysnaps": "0" } "reserve_space": 71000534057616997 "reserve_units": "G" "schedule_name": "Default" "status": "ONLINE" "sync": "always" "tier": false "tier_disabled": null "tier_name": null "tier_order": null "tier_uuid": null "time_updated": "Oct 16 2017 15:52:59" Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 16 When you create the iSCSI LUNs the associated iSCSI targets are also created The initial iSCSI target is set up with open connectivity However you can update the configuration for each iSCSI target with the IQN for each iSCSI initiator as well as a user nam e and password that can be used for CHAP authentication between the iSCSI initiators and targets "total_numeric": 76236242943999996 "total_space": "71G" "used_numeric": 5340576171875e 05 "used_space": "00G" "usedbydataset": "56K" "usedbysnapshots": "0B" "vol_name": "LUN_quorum" "vol_path": " " "vol_type": "blockdevice" "weeklysnaps": 0 } "success": true "total": 40 } "session_id": "8756" "success": true } Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 17 You can’t create the iSCSI targets or add IQN and CHAP details using the SoftNAS CLI Active Directory Membership Before you can join the SoftNAS Cloud NAS instances to the Active Directory domain you need to update the hostname of each instance (that is the hostname used by the SoftNAS management interface not the hostname of the EC2 instance) The default hostnam e is based on the IP address of the EC2 instance Depending on the IP address the hostname might contain too many characters to be a valid NETBIOS name which is required for you to add it to Active Directory Update the hostname as appropriate in the SoftNAS web management console to a NETBIOS compliant name For more information see the Naming conventions in Active Directory for computers domains sites and OUs article on the Microsoft website at https://supportmicrosoftcom/en us/help/909264/naming conventions inactive directory forcomputers domains sites and Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 18 You attach th e SoftNAS instance to Active Directory by navigating to the volume configuration page and selecting Active Directory from the top level menu After you select the interface you are prompted for the Active Directory domain name enter a domain user name an d password with appropriate domain join permissions to join it to the domain If the NETBIOS hostname is too long a prompt appears and explains what actions you need to take to correct the error before proceeding Optionally you can use the SoftNAS CLI to attach the SoftNAS Cloud NAS instance to Active Directory Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 19 SoftNAS Snap Replication At this point you’ve finished configuring the primary SoftNAS Cloud NAS instance Now you need to configure the secondary failover instance so that you can configure SNAP Replicate and SNAP HA For th e first step follow the instructions in the previous section to set up the secondary node but stop before you create any volumes because these are created during the replication process After you have configured both the primary and secondary SoftNAS Cloud NAS instances connect to the SoftNAS administration console of the 
primary instance and navigate to the SnapReplicate / Snap HA menu First you set up replication between the primary and secondary SoftNAS Cloud NAS instances You need to do this from the primary instance You need to use the IP address administrative user name and password of the secondary instance as input After you complete the setup wizard SnapReplicate begins r eplicating each iSCSI LUN from the primary instance to the secondary After the replication process finishes the SnapReplicate replication control plan indicates that Current State for each LUN is SNAPREPLICATED COMPLETE and the secondary instance now ha s the replicated LUNs created and visible within the Volume and LUNs dashboard ec2user @ip100133229:~$ # kinit p adminuser@EXAMPLECOM ec2user @ip100133229:~$ # cd /var/www/softnas/scripts ec2user @ip100133229:~$ # /ad_connectsh c examplecom e EXAMPLE f Adminuser g yourpassword Note The secondary instance should only be configured to include disk partitioning and storage pool creation The replication setup process creates all appropriate volumes CIFS shares and iSCSI targets as a mirror of the source instance Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 20 Optionally you can set up SoftNAS SnapReplicate using the SoftNAS CLI SoftNAS SNAP HA After SnapReplicate replication has been established you can set up Snap HA to enable high availability and failover capability for the SoftNAS Cloud NAS In the ec2user @ip100133229:~$ # softnas cmd snaprepcommand initsnapreplicate remotenode=”REMOTENODEIP” userid=softnas password=”PASSWORD” type=target t Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 21 SnapReplicate / Snap HA control panel choose Add Snap HA to begin the setup process During the setup process select the Virtual IP mode You need to use a virtual IP address outside of the VPC CIDR block to set up Snap HA communication on the secondary network interface When requested enter an IP address that is not addressable within your VPC CIDR range For instance if the VPC CIDR block is 1019500/16 select any other address that doesn’t start with 10195 can work as the virtual IP address required to set up Snap HA It’s important to ensure that the IP address you choose doesn’ t belong to another VPC or CIDR range that’s routed to from this VPC After you provide a virtual IP address you need to enter an AWS Access Key ID and Secret Key These options are greyed out if the SoftNAS_HA_IAM IAM role was attached to each instance Choose Next to confirm that the appropriate permissions are associated with the attached IAM role If the permissions aren’t correct an error appears and the setup process fails If the permissions are correct Choose Start Install to begin the Snap HA installation and configuration Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 22 After preparation and configuration are complete choose Next The Snap HA process completes the installation and then places the SoftNAS Cloud NAS instances in high availability mode After the SnapHA setup is complete choose Finish Optionally you can use the SoftNAS CLI to set up SoftNAS SnapHA Conclusion The solu tion is complete and configured as follows: • The primary and secondary SoftNAS Cloud NAS instances are configured • The primary instance replicates to the secondary instance • Both instances are configured in an active passive high 
availability failover cluster • SoftNAS Cloud NAS storage is ready to be used by Microsoft SharePoint and SQL Server ec2user @ip100133229:~$ # softnas cmd hacommand add YOUR_AWS_ACCESS_KEY YOUR_AWS_SECRET_KEY VIP 1111 pretty_print Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 23 • Connectivity from client iSCSI initiators and CIFS clients is established using the cluster virtual IP address AWS has a powerful set of tools that you can use to build your next solution In addition to AWS services you can use the software available in AWS Marketplace to build and extend solutions using familiar products from reputable software vendors Contributors The following indiv iduals and organizations contributed to this document: • Israel Lawson Solutions Architect AWS • Kevin Brown Solutions Architect SoftNAS • Ross Ethridge Technical Support Manager SoftNAS Further Reading SoftNAS Resources • AWS Getting Started Guide at https://docssoftnascom/pages/viewpageaction?pageId=3604488 • AWS Design and Configuration Guide at https://wwwsoftnascom/wp/support/aws cloud nasdesign configuration guide/ • AWS Instance Size Guide at https://wwwsoftnascom/wp/produc ts/instance sizerecommendations/#aws • AWS Backend Storage Selection Guide at https://ww wsoftnascom/wp/support/aws storage guide/ • High Availability: Amazon Web Services at https://docssoftnascom/display/SD/High+Availability%3A+Amazon+We b+Se rvices • Cloud Formation Template at https://wwwsoftnascom/docs/softnas/v3/api/Softnas AWSCloudTempl ateHVMjson Archived Amazon Web Services – Leveraging AWS Marketplace Partner Storage Solutions for Microsoft SharePoint Page 24 Microsoft SharePoint SQL Server Resources • Overview of SQL Server in a SharePoint Server 2016 environment at https://docsmicrosoftcom/en us/sharepoint/administration/overview ofsqlserver insharepoint server 2016 and2019 environments • Storage and SQL Server capacity planning and configuration (SharePoint Server) at https://technetmicrosoftcom/en us/library/cc298801(v=office16)aspx • SharePoint Server 2016 Databases – Quick Reference at https://technetmicrosoftcom/en us/library /cc298801(v=office16)aspx#section1a • Database Types and Descriptions in SharePoint Server at https://technetmicrosoftcom/en us/library/cc678868(v=office16)aspx AWS Resources • AWS SoftNAS Whitepaper at https://d0awsstaticcom/whitepapers/softnas architecture onawspdf • AWS Bastion Host Blog Post at https://awsamazoncom/blogs/security/controlling network access toec2 instances using abastion server/ Document Revisions Date Description January 2018 First publication
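The IAM setup described above can be scripted as well. The following Python (boto3) sketch is one possible way to create the SoftNAS_HA_IAM role and an instance profile of the same name; the trust policy for EC2 and the inline policy name are assumptions, and the permissions shown are an abbreviated subset of the full policy listed earlier, which you should use in practice.

import json

import boto3

iam = boto3.client("iam")

ROLE_NAME = "SoftNAS_HA_IAM"   # this exact name is required by SoftNAS Snap HA

# Standard EC2 trust policy (assumption, not text from the paper).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Abbreviated permissions; substitute the full action list from the paper.
ha_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:DescribeInstances", "ec2:CreateVolume", "ec2:AttachVolume",
            "ec2:CreateRoute", "ec2:ReplaceRoute", "ec2:AssociateAddress",
            "aws-marketplace:MeterUsage",
            "s3:CreateBucket", "s3:Get*", "s3:List*", "s3:Put*", "s3:Delete*",
        ],
        "Resource": "*",
    }],
}

iam.create_role(RoleName=ROLE_NAME,
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(RoleName=ROLE_NAME,
                    PolicyName="SoftNAS_HA_Policy",
                    PolicyDocument=json.dumps(ha_policy))

# EC2 instances consume roles through an instance profile.
iam.create_instance_profile(InstanceProfileName=ROLE_NAME)
iam.add_role_to_instance_profile(InstanceProfileName=ROLE_NAME,
                                 RoleName=ROLE_NAME)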
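The limited-access security groups can be created the same way. In this sketch the VPC ID and the source security group IDs for the jump servers and SQL Server instances are placeholders; only the SoftnasAdmin and SoftnasISCSI groups from the table are shown, and the CIFS and cluster groups would follow the same pattern.

import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"          # placeholder VPC
ADMIN_SOURCE_SG = "sg-0aaaaaaaaaaaaaaaa"  # jump/RDGW servers (placeholder)
SQL_SOURCE_SG = "sg-0bbbbbbbbbbbbbbbb"    # SQL Server instances (placeholder)

admin_sg = ec2.create_security_group(
    GroupName="SoftnasAdmin",
    Description="SSH/HTTPS administration from jump servers only",
    VpcId=VPC_ID,
)["GroupId"]

# SSH (22) and HTTPS (443) only from the jump-server security group.
ec2.authorize_security_group_ingress(
    GroupId=admin_sg,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
         "UserIdGroupPairs": [{"GroupId": ADMIN_SOURCE_SG}]}
        for port in (22, 443)
    ],
)

iscsi_sg = ec2.create_security_group(
    GroupName="SoftnasISCSI",
    Description="iSCSI from SQL Server instances only",
    VpcId=VPC_ID,
)["GroupId"]

# iSCSI (3260) only from the SQL Server security group.
ec2.authorize_security_group_ingress(
    GroupId=iscsi_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3260, "ToPort": 3260,
        "UserIdGroupPairs": [{"GroupId": SQL_SOURCE_SG}],
    }],
)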
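Because Snap HA depends on each instance having a second network interface, you may want to script that step too. This sketch creates and attaches a secondary ENI; the subnet, security group, and instance IDs are placeholders. Disabling source/destination checking is a common step when an interface must accept traffic for a virtual IP it does not own, but it is an assumption here rather than guidance taken from the SoftNAS documentation.

import boto3

ec2 = boto3.client("ec2")

eni = ec2.create_network_interface(
    SubnetId="subnet-0123456789abcdef0",           # placeholder subnet
    Groups=["sg-0ccccccccccccccccc"],              # SoftNAS cluster security group
    Description="SoftNAS Snap HA secondary interface",
)["NetworkInterface"]

# Attach as eth1 (device index 1) on the SoftNAS instance.
ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterfaceId"],
    InstanceId="i-0123456789abcdef0",              # placeholder instance
    DeviceIndex=1,
)

# Allow the interface to receive traffic addressed to the virtual IP.
ec2.modify_network_interface_attribute(
    NetworkInterfaceId=eni["NetworkInterfaceId"],
    SourceDestCheck={"Value": False},
)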
|
General
|
consultant
|
Best Practices
|
Machine_Learning_Foundations_Evolution_of_Machine_Learning_and_Artificial_Intelligence
|
ArchivedMachine Learning Foundations Evolution of Machine Learning and Artificial Intelligence February 2019 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: awsamazoncom/whitepapersArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents AWS’s current product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS’s prod ucts or services are provided “as is” without warranties representations or conditions of any kind whether express or implied AWS’s responsibilities and liabilities to its customers are controlled by AWS agreements and this document is not part of no r does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Evolution of Artificial Intelligence 1 Symbolic Artificial Intelligence 1 Rise of Machine Learning 5 AI has a New Foundation 6 AWS and Machine Learning 9 AWS Machine Learning Services for Builders 9 AWS Machine Learning Services for Custom ML Models 12 Aspiring Developers Framework 13 ML Engines and Frameworks 13 ML Model Training and Deployment Support 14 Conclusions 15 Contributors 15 Further Reading 16 Document Revisions 16 ArchivedAbstract Artificial Intelligence (AI) and Machine Learning (ML) are terms of interest to business people technicians and researchers around the world Most descriptions of the terms oversimplify their true relationship This paper provides a foundation for understanding artificial intelligence describes how AI is now based on a foundation o f machine learning and provides an overview of AWS machine learning services ArchivedAmazon Web Services Machine Learning Foundations Page 1 Introduction Most articles that discuss the relationship between artificial intelligence (AI) and machine learning (ML) focus on the fact that ML is a do main or area of study within AI Although that is true historically an even stronger relationship exists —that successful artificial intelligence applications are almost all implemented using a foundation of ML techniques Instead of a component machine l earning has become the basis of modern AI To support this theory we review how AI systems and applications worked in the first three decades versus how they work today We begin with an overview of AI’s original structure and approach describe the rise of machine learning as its own discipline show how ML provides the foundation for modern AI review how AWS supports customers using machine learning We conclude with observations about why AI and ML are not as easily distinguished as they might first ap pear Evolution of Artificial Intelligence Symbolic Artificial Intelligence Artificial Intelligence as a branch of computer science began in the 1950s Its two main goals were to 1) study human intelligence by modeling and simulating it on a computer and 2) make computers more useful by solving complex problems like humans do From its inception through the 1980s most AI systems were programmed by hand usually in functional declarative or other high level languages such as LISP or Prolog Several custom languages were creat ed for specific areas (eg STRIPS for planning ) Symbols within the languages represented concepts in the real world or abstract ideas and formed the 
basis of most knowledge representations Although AI practitioners used standard computer science techniques such as search algorithms graph data structures and grammars a significant amount of AI programming was heuristic —using rules of thumb —rather than algorithmic due to the complexity of the probl ems Part of the difficulty of producing AI solutions then was that to make a system successful all of the conditionals rules scenarios and exceptions needed to be added programmatically to the code ArchivedAmazon Web Services Machine Learning Foundations Page 2 Artificial Intelligence Domains Researchers were inte rested in general AI or creating machines that could function as a system in a way indistinguishable from humans but due to the complexity of it most focused on solving problems in one specific domain such as perception reasoning memory speech moti on and so on Major AI domains at this time are listed in the following table Table 1: Domains in Symbolic AI (1950s to 1980s) Domain Description Problem Solving Broad general domain for solving problems making decisions sati sfying constraints and other types of reasoning Subdomains included expert or knowledge based systems planning automatic programming game playing and automated deduction Problem solving was arguably the most successful domain of symbolic AI Machine Learning Automatically generating new facts concepts or truths by rote from experience or by taking advice Natural Language Understanding and generating written human languages (eg English or Japanese) by parsing sentences and converting them into a knowledge representation such as a semantic network and then returning results as properly constructed sentences easily understood by people Speech Recognition Converting sound waves into phonemes words and ultimately sentences t o pass off to Natural Language Understanding systems and also speech synthesis to convert text responses into natural sounding speech for the user Vision Converting pixels in an image into edges regions textures and geometrical objects in order to mak e sense of a scene and ultimately recognize what exists in the field of vision Robotics Planning and controlling actuators to move or manipulate objects in the physical world Artificial Intelligence Illustrated In the following diagram lower levels depict layers that provide the tools and foundation used to build solutions in each domain For example below the Primary Domains are a sampling of the many Inferencing Mechanisms and Knowledge Representations that were commonly used at the time ArchivedAmazon Web Services Machine Learning Foundations Page 3 Figure 1: Overview of Symbolic Artificial Intelligence The Sample K nowledge Representations stored knowledge and information to be reasoned on by the system Common categories of knowledge represent ations included structured (eg frames which can be compared to objects and semantic networks which are like knowledge graphs) and logic based (eg propositional and predicate logic modal logic and grammars) The advantage of these symbolic knowledge representations over other types of models is that they are transparent explainable composable and modifiable They support many types of inferencing or reasoning mechanisms which manipulate the knowledge representations to solve problems understand sentences and provide solutions in each domain The AI Language Styles and Infrastructure layers show some types of languages and infrastructure used to develop AI systems at this time Both tended to be specialized and not 
easily integrated with external data or enterprise systems A Question of Chess and Telephones A question asked at the time was “which is a harder problem to solve: answering the telephone or playing chess at a master level?” The answer is counter intuitive to most people Although even children can answer a telephone properly very few people play chess at a master level However for traditional AI chess is the perfect problem It is ArchivedAmazon Web Services Machine Learning Foundations Page 4 bounded has limited well understood moves and can be solved using heuristic search of the ga me’s state space Answering a telephone on the other hand is quite difficult Doing it properly requires multiple complex skills that are difficult for symbolic AI including speech recognition and synthesis natural language processing problem solving i ntelligent information retrieval planning and potentially taking complex actions Successes of Symbolic AI Generally considered to have disappointing results at least in light of the high expectations that were set symbolic AI did have several successes as well Most of the software deemed useful was turned into algorithms and data structures used in software development today Business rule engines that are in common use were derived from AI’s expert system inference engines and shells Other common com puting concepts credited to or developed in AI labs include timesharing rapid iterative development the mouse and Graphical User Interfaces (GUIs) The list below describes some of the strengths and limitations of this approach to artificial intelligence Table 2: Strengths and Limitations of Symbolic AI Strength Limitation Simulates high level human reasoning for many problems Systems tended not to learn or acquire new knowledge or capabilities autonomously depending instead on regular developer maintenance Problem Solving domain had several successes in areas such as expert systems planning and constrain propagation Most domains including machine learning natural language speech and vision did not produce signi ficant general results Can capture and work from heuristic knowledge rather than step bystep instructions Problem Solving domain specifically expert or knowledge based systems require articulated human expertise extracted and refined using knowledge engineering techniques Encodes specific known logic easily eg enforces compliance rules Systems tended to be brittle and unpredictable at the boundaries of their scope they didn’t know what they didn’t know Straightforward to review internal data structures heuristics and algorithms Built on isolated infrastructure with little integration to external data or systems Provides explanations for answers when requested Requires more context and common sense information to resolve many real world situations ArchivedAmazon Web Services Machine Learning Foundations Page 5 Strength Limitation Does not require significant amounts of data to create Many approaches were not distributed or easily scalable though there were hardware networking and software constraints to distribution as well Requires less compute resources to develop Difficult to create and maintain systems Many tools and algorithms were incorporated into mainstream system development As research money associated with symbolic AI disappe ared many researchers and practitioners turned their attention to different and pragmatic forms of information search and retrieval data mining and diverse forms of machine learning Rise of Machine Learning From the late 1980s to 
the 2000s several div erse approaches to machine learning were studied including neural networks biological and evolutionary techniques and mathematical modeling The most successful results early in that period were achieved by the statistical approach to machine learning Algorithms such as linear and logistic regression classification decision trees and kernel based methods (ie Support Vector Machines ) gained popularity Later deep learning proved to be a powerful way to structure and train neural networks to solve complex problems The basic approach to training them remained similar but there were several improvements driving deep learning’s success including: • Much larger networks with many more layers • Huge data sets with thousands to millions of training exampl es • Algorithmic improvements to neural network performance generalization capability and ability to distribute training across servers • Faster hardware (such as GPUs and Tensor Cores) to handle orders of magnitude more computations which are required to train the complex network structures using large data sets Deep learning is key to solving the complex problems that symbolic AI could not One factor in the success of deep learning is its ability to formulate identify and use features discovered on its own Instead of people trying to determine what it should look for the deep learning algorithms identified the most salient features automa tically ArchivedAmazon Web Services Machine Learning Foundations Page 6 Problems that were intractable for symbolic AI —such as vision natural language understanding speech recognition and complex motion and manipulation —are now being solved often with accuracy rates nearing or surpassing human capability Today the answer to the question of which is harder for machines —answering the telephone or playing chess at a master level —is becoming harder to answer Although there is important work yet to be done machine learning has made significant progress in enabling ma chines to function more like people in many areas including directed conversations with humans Machine learning has become a branch of computer science in its own right It is key to solving specific practical artificial intelligence problems AI has a New Foundation Artificial intelligence today no longer relies primarily on symbolic knowledge representations and programmed inferencing mechanisms Instead modern AI is built on a new foundation machine learning Whether it is the models or decision tr ees of conventional mathematics based machine learning or the neural network architectures of deep learning most artificial intelligence applications today across the AI domains are based on machine learning technology This new structure for artificial intelligence is depicted in the following diagram The structure of this diagram parallels the diagram of symbolic AI in order to show how the foundation and the nature of artificial intelligence systems have changed Although some of the domains in the to p layer of the diagram remain the same —Natural Language Speech Recognition and Vision —the others have changed Instead of the broad Problem Solving category seen in Figure 1 for symbolic AI there are two more focused categories for predictions and recomm endation systems which are the dominant forms of problem solving systems developed today And in addition to more traditional robotics the domain now includes autonomous vehicles to highlight recent projects in self driving cars and drones Finally since it is now the foundation of the 
AI domains machine learning is no longer included in the top level domains ArchivedAmazon Web Services Machine Learning Foundations Page 7 Figure 2: Machine Learning as a foundation for Artificial Intelligence There are still many questions and challenges for machine learning The following list provides some of the strengths and limitations of artificial intelligence based on a machine learning foundation Table 3: Strengths and Limitations of ML Based AI Strength Limitation Easy to train new solutions given data and tools Experiencing hype and researchers and practitioners need to properly set expectations Large number of diverse algorithms to solve many types of problems Requires large amounts of clean potentially labeled data Solves problems in all AI domains often approaching or exceeding human level of capability Problems in data such as staleness incompleteness or adversarial injection of bad data can skew results No human expertise or complex knowledge engineering required solutions are derived from examples Some especially statistically based ML algorithms rely on manual feature engineering ArchivedAmazon Web Services Machine Learning Foundations Page 8 Strength Limitation Deep learning extracts features automatically which enables complex perception and understanding s olutions System logic is not programmed and must be learned This can lead to more subjective results such as competing levels of activation where precise answers are needed (eg specific true or false answers for compliance or verification problems) Trained ML models can be replicated and reused in ensembles or components of other solutions Selecting the best algorithm network architecture and hyperparameters is more art than science and requires iteration though tools for hyperparameter optimiza tion are now available Making predictions or producing results is often faster than traditional inferencing or algorithmic approaches Training on complex problems with large data sets requires significant time and compute resources Algorithms for trainin g ML models can be engineered to be distributed and one pass improving scalability and reducing training time It is often difficult to explain how the model derived the results by looking at its structure and results of its training Can be trained and deployed on scalable highperformance infrastructure Most algorithms solve problems in one step so no chains of reasoning or partial results are available though outputs can reflect numeric “confidence” Deployed using common mechanisms like microservices / APIs for ease of integrations with other systems An important take away from Table 2 and Table 3 is that they are somewhat complementary MLbased AI can benefit from the strengths of symbolic AI Some ML approaches inclu ding automatically learning decision trees already merge the two approaches effectively Active research continues into other means of combining the strengths of both approaches as well as many open questions Given that today’s AI is built on the new fo undation of machine learning that has long been the realm of researchers and data scientists how can we best enable people from different backgrounds in diverse organizations to leverage it? 
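As a concrete illustration of that last point, consider how a learned decision tree straddles both worlds: it is trained from examples like any other ML model, yet its internals read as symbolic if/then rules. The short scikit-learn sketch below is generic (the Iris dataset and a depth-two tree are arbitrary choices, not content from this paper):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# Learn a small tree purely from labeled examples.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# The learned structure can be printed as readable decision rules,
# much like a hand-built symbolic rule base.
print(export_text(tree, feature_names=list(data.feature_names)))

The printed rules can be reviewed and explained, which is exactly the kind of transparency listed earlier as a strength of symbolic AI.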
ArchivedAmazon Web Services Machine Learning Foundations Page 9 AWS and Machine Learning AWS is committed to democratizing machi ne learning Our goal is to make machine learning widely accessible to customers with different levels of training and experience and to organizations across the board AWS innovates rapidly creating services and features for customers prioritized by the ir needs Machine Learning services are no exception In the diagram below you can see how the current AWS Machine Learning services map to the other AI diagrams Figure 3: AWS Machine Learning Services AWS Machine Learning Services for Builders The first layer shows AI Services which are intended for builders creating specific solutions that require prediction recommendation natural language speech vision or other capabilities These intelligent services are created using machine learning and especially deep learning models but do not require the developer to have any knowledge of machine learning to use them Instead these capabilities come pre ArchivedAmazon Web Services Machine Learning Foundations Page 10 trained are accessible via API call and provide customers the ability to add intelligence to their applications Amazon Forecast Amazon Forecast is a fully managed service that delivers highly accurate forecasts and is based on the same technology used at Amazoncom You provide historical data plus any additional data that you believe impacts your forecasts Amazon Forecast examines the data id entifies what is meaningful and produces a forecasting model Amazon Personalize Amazon Personalize makes it easy for developers to create individualized product and content recommendations for customers u sing their applications You provide an activity stream from your application inventory of items you want to recommend and potential demographic information from your users Amazon Personalize processes and examines the data identifies what is meaningful selects the right algorithms and trains and optimizes a personalization model Amazon Lex Amazon Lex is a service for building conversational interfaces into any application using voice and text Amazon Lex pr ovides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text and natural language understanding (NLU) to recognize the intent of the text to enable you to build applications with highly engaging us er experiences and lifelike conversational interactions With Amazon Lex the same deep learning technologies that power Amazon Alexa are now available to any developer enabling you to quickly and easily build sophisticated natural language conversation al bots (“ chatbots ”) Amazon Comprehend Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to f ind insights and relationships in text Amazon Comprehend identifies the language of the text; extracts key phrases places people brands or events; understands how positive or negative the text is and automatically organizes a collection of text files by topic ArchivedAmazon Web Services Machine Learning Foundations Page 11 Amazon Comprehend Medical Amazon Comprehend Medical is a natural language processing service that extracts relevant medical information from unstructured text using advanced machine learni ng models You can use the extracted medical information and their relationships to build or enhance applications Amazon Translate Amazon Translate is a neural machine translation service that delivers fast high quality and affordable 
language translation. Neural machine translation is a form of language translation automation that uses deep learning models to deliver more accurate and more natural-sounding translation than traditional statistical and rule-based translation algorithms. Amazon Translate allows you to localize content, such as websites and applications, for international users, and to easily translate large volumes of text efficiently.

Amazon Polly

Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk and to build entirely new categories of speech-enabled products. Amazon Polly is a text-to-speech service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice.

Amazon Transcribe

Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.

Amazon Rekognition

Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Rekognition API, and the service can identify objects, people, text, scenes, and activities, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial recognition. You can detect, analyze, and compare faces for a wide variety of user verification, cataloging, people counting, and public safety use cases.

Amazon Textract

Amazon Textract automatically extracts text and data from scanned documents and forms, going beyond simple optical character recognition to identify the contents of fields in forms and information stored in tables.

AWS Machine Learning Services for Custom ML Models

The ML Services layer in Figure 3 provides more access to managed services and resources used by developers, data scientists, researchers, and other customers to create their own custom ML models. Custom ML models address tasks such as inferencing and prediction, recommender systems, and guiding autonomous vehicles.

Amazon SageMaker

Amazon SageMaker is a fully managed machine learning (ML) service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker Ground Truth helps build training data sets quickly and accurately, using an active learning model that combines machine learning and human interaction to label data and make the model progressively better. SageMaker provides fully managed, pre-built Jupyter notebooks to address common use cases. The service comes with multiple built-in, high-performance algorithms, and the AWS Marketplace for Machine Learning contains more than 100 additional pre-trained ML models and algorithms. You can also bring your own algorithms and frameworks that are built into a Docker container.

Amazon SageMaker includes built-in, fully managed reinforcement learning (RL) algorithms. RL is ideal for situations where there is no pre-labeled historical data, but there is an optimal outcome. RL trains using rewards and penalties, which direct the model toward the desired behavior. SageMaker supports RL in multiple frameworks, including TensorFlow and MXNet, as well as custom-developed frameworks. SageMaker sets up and manages environments for training, and provides hyperparameter optimization with Automatic Model
Tuning to make the model as accurate as possible Sagemaker Neo allows you to deploy the same trained model to multiple platforms Using machine l earning Neo optimizes the performance and size of the model and deploys to edge devices containing the Neo runtime AWS has released the code as the open source Neo AI project on GitHub under the Apache Software License SageMaker deployments run models s pread across availability zones to deliver high performance and high availability ArchivedAmazon Web Services Machine Learning Foundations Page 13 Amazon EMR /EC2 with Spark/Spark ML Amazon EMR provides a managed Hadoop framework that makes it easy fast and costeffective t o process vast amounts of data across dynamically scalable Amazon EC2 instances You can also run other popular distributed frameworks such as Apache Spark including the Spark ML machine learning library HBase Presto and Flink in Amazon EMR and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB Spark and Spark ML can also be run on Amazon EC2 instances to pre process data engineer features or run machine learning models Aspiring Developers Framework In parallel w ith ML Services is the Aspiring Developers Framework layer With a focus on teaching ML technology and techniques to users this layer is not intended for production use at scale Currently the aspiring developers framework consists of two service offeri ngs AWS DeepLens AWS DeepLens helps put deep learning in the hands of developers with a fully programmable video camera tutorials code and pre trained models designed to expand deep learning skills DeepLens offers developers the opportunity to use neural networks to learn and mak e predictions through computer vision projects tutorials and real world hands on exploration with a physical device AWS DeepRacer AWS DeepRacer is a 1/18th scale race car that provides a way to get star ted with reinforcement learning (RL) AWS DeepRacer provides a means to experiment with and learn about RL by building models in Amazon SageMaker testing in the simulator and deploying an RL model into the car ML Engines and Frameworks Below the ML Platform layer is the ML Engines and Frameworks layer This layer provides direct hands on access to the most popular machine learning tools In this layer are the AWS Deep Learning AMIs that equip you with the infrastructure and tools to accelerate deep lear ning in the cloud The AMIs package together several important tools and frameworks and are pre installed with Apache MXNet TensorFlow PyTorch the Microsoft Cognitive Toolkit (CNTK) Caffe Caffe2 Theano Torch Gluon Chainer and Keras to train sophi sticated custom AI models The Deep Learning AMIs let you ArchivedAmazon Web Services Machine Learning Foundations Page 14 create managed auto scaling clusters of GPUs for large scale training or run inference on trained models with compute optimized or general purpose CPU instances ML Model Training and Deployment Support The Infrastructure & Serverless Environments layer provides the tools that support the training and deployment of machine learning models Machine learning requires a broad set of powerful compute options ranging from GPUs for compute intensive de ep learning to FPGAs for specialized hardware acceleration to high memory instances for running inference Amazon Elastic Compute Cloud (Amazon EC2) Amazon EC2 provides a wide selection of instance types optimized to fit machine learning use cases Instance types comprise varying combinations of CPU memory storage and 
networking capacity and give you the flexibility to choose the appropriate mix of resources whether you are training models or running inference on trained models Amazon Elastic Inference Amazon Elastic Inference allows you to attach low cost GPU powered acceleration to Amazon EC2 and Amazon Sage Maker instances for making predictions with your model Rather than attaching a full GPU which is more than required for most models Elastic Inference can provide savings of up to 75% by allowing separate configuration of the right amount of acceleration for the specific model Amazon Elastic Container Service (Amazon ECS) Amazon ECS supports running and scaling containerized applications including trained machine learning models from Amazon SageMaker and containerized Spark ML Serverless Options Serverless options remove the burden of managing specific infrastructure and allow customers to focus on deploying the ML models and other logic necessary to run their systems Some of the serverless ML deployment options provided by AWS include Amazon SageMaker model deployment AWS Fargate for containers and AWS Lambda for serverless code deployment ArchivedAmazon Web Services Machine Learning Foundations Page 15 ML at the Ed ge AWS also provides an option for pushing ML models to the edge to run locally on connected devices using Amazon Sage Maker Neo and AWS IoT Greengra ss ML Inference This allows customers to use ML models that are built and trained in the cloud and deploy and run ML inference locally on connected devices Conclusions Many people use the terms AI and ML interchangeably On the surface this seems incorrect because historically machine learning is just a domain inside of AI and AI covers a much broader set of systems Today the algorithms and models of machine learning replace traditional symbolic inferencing knowledge representations and languages Training on large data sets has replaced hand coded algorithms and heuristic approaches Problems that seemed intractable using symbolic AI methods are modeled consistently with remarkable results using this approach Machine learning has i n fact become the foundation of most modern AI systems Therefore it actually makes more sense today than ever for the terms AI and ML to be used interchangeably AWS provides several machine learning offerings ranging from pre trained ready to use servi ces to the most popular tools and frameworks for creating custom ML models Customers across industries and with varying levels of experience can add ML capabilities to improve existing systems as well as create leading edge applications in areas that we re not previously accessible Contributors Contributors to this document include : • David Bailey Cloud Infrastructure Architect Amazon Web Services • Mark Roy Solutions Architect Amazon Web Services • Denis Batalov Tech Leader ML & AI Amazon Web Services ArchivedAmazon Web Services Machine Learning Foundations Page 16 Further Reading For additional information see: • AWS Whitepapers page • AWS Machine Learning page • AWS Machine Learning Training • AWS Documentation Document Revisions Date Description February 201 9 First publication
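As a hands-on footnote to the services described above: both the pre-trained AI Services and custom models hosted by Amazon SageMaker are consumed through ordinary API calls, which is what makes the microservices-style integration mentioned earlier straightforward. The sketch below is illustrative only and is not part of the original whitepaper; it uses the AWS SDK for Python (boto3), and the region, endpoint name, and request payload are placeholder assumptions, with valid AWS credentials and an existing SageMaker endpoint presumed.

```python
# Illustrative sketch only: calling a pre-trained AI service (Amazon Comprehend)
# and a custom model hosted on an Amazon SageMaker endpoint via plain API calls.
# "my-demo-endpoint" is a hypothetical endpoint name, not one from the whitepaper.
import json

import boto3

REGION = "us-east-1"  # assumption: replace with your own region

# 1) Pre-trained AI Service: no ML knowledge or model management required.
comprehend = boto3.client("comprehend", region_name=REGION)
sentiment = comprehend.detect_sentiment(
    Text="The new checkout flow is fast and easy to use.",
    LanguageCode="en",
)
print("Sentiment:", sentiment["Sentiment"])

# 2) Custom model deployed with SageMaker: same pattern, one HTTPS call.
runtime = boto3.client("sagemaker-runtime", region_name=REGION)
response = runtime.invoke_endpoint(
    EndpointName="my-demo-endpoint",  # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps({"features": [1.5, 0.3, 4.2]}).encode("utf-8"),  # model-specific payload
)
print("Prediction:", response["Body"].read().decode("utf-8"))
```

Because the model sits behind an endpoint, the calling application needs no knowledge of the framework, instance type, or training process behind it.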
|
General
|
consultant
|
Best Practices
|
Managing_User_Logins_for_Amazon_EC2_Linux_Instances
|
ArchivedManaging User Logins for Amazon EC 2 Linux Instances September 2018 This paper has been archived For the latest technical content about the AWS Cloud go to the AWS Whitepapers & Guides page on the AWS website: https://awsamazoncom/whitepapersArchivedArchived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or servic es each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensor s The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Use Key Pairs for Amazon EC2 Linux Logins 1 The Challenge 2 The Solution 3 An Expect Example 5 Grantin g Login Access: Steps and Commands 6 Automation: The Process 12 Script Development: Linux Commands and Code Samples 14 Confirm Authorization and Network Access 14 Create User Generate Key Pair Install Public Key 14 Key Distribution and Testing 16 Two Sample Scripts 16 Architecture for EC2 Linux Login Access Management 18 Database Tier 18 Application Tier 19 Web Tier 19 Automation Improvements 19 Use Cases 20 Ec2User (Default User) Key Rotation 20 Cross Environment Access 21 Authorization and Permissions for Non Employees 21 Conclusion 21 Contributors 21 Further Reading 22 Archived Abstract Public key and private key pairs are used to l og in to Amazon Elastic Compute C loud ( Amazon EC2) Linux instances and provide robust security The process to manage user logins can be manually intensive if you have many EC2 Linux instances and many users Simplified management of user logins is natively available for EC2 Windows insta nces but not yet for EC2 Linux instances This white paper describes a method to automate the process to grant and revoke login access to users across multiple EC2 Linux instances The description is based on Amazon EC2 Linux but can applied with minor modification s to other types of Amazon EC2 Linux instances The required steps and commands are described in this whitepaper and can be captured in a script or program You can then use the script or program as a tool to automate and simplify login manage ment on other Amazon EC2 Linux instances The target audience for this whitepaper includes Solutions Architects Technical Account Managers Product Engineers and System Administrators All references in this white paper to EC2 instances refer to Amazon EC 2 Linux instances unless otherwise stated ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 1 Introduction Amazon Web Services ( AWS ) generates a public key and private key (key pair) for logging in to each Amazon Elastic Compute Cloud ( Amazon EC2) Linux instance which is an extremely robust security des ign The key pair is used for the Secure Sockets Layer ( SSL) handshake It enables a user to log in to an Amazon EC2 Linux host with an SSH client without having to enter a password Use Key Pairs for Amazon EC2 Linux Logins For Amazon EC2 Linux 
instance s the default user name is ec2user The public key is store d on the target instance (the instance that the user is requesting access to) in the ec2user home directory ( ~ec2user/ss h/authorized_keys ) The private key i s stored locally on the client devi ce from which the user logs in for example : a PC desktop computer tablet Linux host or Unix host Typically the private key for a n Amazon EC2 Linux instance is downloaded by the users who are authorized to log in to that host For login access to a new EC2 Linux instance you can either generate a new key pair or use an existing key pair Key pai rs can either be generated on the AWS console or created locally The public key of a locally generated key pair can be given a unique name and uploaded to AWS from the AWS Command Line Interface (CLI) Thereafter that key pair can be u sed to l og in to new ly created EC2 instances but only as ec2user ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 2 The Challenge Although using key pairs to log in to EC2 instance s is ve ry robust efficiently managing ac cess to multiple instances for many users with key pairs can be manually in tensive and difficult to automate To simplify the process to manage access to your Amazon EC2 Windows instances you can integrate your EC2 Windows instance with Active Directory You can grant or remove the login access of one user or a group of users to a Windows instance or a group of Windows instances Currently however AWS does not support integration of EC2 Linux instance login s with Active Directory or a Security Assertion Markup Language ( SAML ) compliant authen tication repository such as LDAP Imagine a scenario where ten users have access to one Linux instance and each user logs in to the server as ec2user with the same private key In a situation where one user ’s access to this instance must be removed you would typically have to complete these steps : 1 Generate a new k ey pair 2 Log into the instance and replace t he old public key with the new public key 3 Distribute the new private key to the remaining nine users This process is manual and must be repeated each time you must remove access to the instance for any of the ten users This can be tedious if there are many EC2 Linux instances if you need to temporarily grant user access or quickly revoke user access with out impacting other users for example : in a production environment Next imagine a scenario in which there are ten EC2 Linux instances that share a single key pair In this case a user who has access to one instance automatica lly has access to all ten instances One method to provide more granular login access control is to create ten different key pairs —one for each instance —so that a user only get s the private key s to the specific instances to which that user needs access Although this provides gran ularity it makes private key management difficult For example if the user needs to log in to a hundred different EC2 instances he will need 100 different private keys Furthermore even with a unique key pair for every instance i f a user’s access to a n instance must be removed you still face the problem of recreating reinstallin g and redistributing new key pairs to all the users that have login access to that instance To remove user access to a large fleet for example 100 EC2 Linux instances each with a unique key pair you must create and distribute 100 new key pairs to each user that already has login access to those instances For medium to large 
environments login access management key distribution and tracking can become complicated and tim e consuming In addition every user authorized to log in to a L inux instance does so with the ec2user account which is root by default That means that ec2user can run any command with sudo This might not always be desi rable You might want to grant a user root login access to EC2 Linux instance s in development but limit the commands that the user can run on production ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 3 instance s For example y ou might want to prevent a user from performing mount or unmount operations on Amazon Elastic File System ( EFS) in production but permit it in development (Although mount privileges for Amazon EFS can be limited through root squashing it is preferable to have finer contro l by attaching sudo privileges to the user especially when granting access to a producti on environment ) The S olution The solution to the preceding challenges is to give each user a unique login name and a single key pair with which to log in to every EC2 Linux instance to which that user is granted access The user gets a unique home direct ory on every EC2 instance to which that user has login access This directory has the same name as the user’s login name This design greatly simplifies login management: Granting a user login access to an EC2 Linux instance simpl y requires creating a hom e directory for that user on that EC2 instance and placing the user’s public key in th at home directory No other users with login access to that instance are affected Conversely when you have to remove a user’s login access to a specific EC2 instance you can simply delete that user’s public key from that user’s home directory on that EC2 Linux instance Again n o other users with login access to that instance are affected If you want to temporarily grant login access to a user you can generate a new key pair place the public key in the user’s home directory and securely send the private key to the user This significantly reduces the overhead associated with key distribution to many users To purge a user from a n instance delete that user’s home d irectory (which should be backed up if it contains files scripts or data) which also removes the user’s login access Lastly each user does not need sudo root privileges on every instance but can have different sudo privilege levels on different insta nces For example sudo can be set to default to root (unlimited permissions) but can be modified to allow only a limited set of commands on a specific instance or group of instances This is controlled entirely through the sudo configuration file on the EC2 instance which is typically /etc/sudoers or /etc/sudoers d/cloud init for Amazon Linux This file can be modified by a root user to set sudo privileges for any user To give a user access to an EC2 instance complete these steps: 1 Login to the target EC2 instance as root ( ec2user ) 2 Creat e the user’s login and home directory 3 Generate a key pair and place the public key in the user’s home directory If the user already has a key pair c opy the public key of that key pair to the user’s home directory on the target EC2 instance 4 Modify the configuration file /etc/ssh/ssh_config to disable password login and allow only ssh login by key pair ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 4 5 Modify the /etc/sudoers d/cloud init file to grant the required sudo permission s to the user 
6 Securely send the pr ivate key to the user The user will then be able to log in to the instance with a unique login name and private key To simplify the process so you can easily repeat it for each user you can complete these steps manually and capture the steps in a Bash or Python script: 1 Log in to the target EC2 instance and run the commands to create the user account 2 Set sudo permissions for the user account 3 Grant login access with a key pair The script takes a user ’s login name as input so when you run the sc ript on any target EC2 instance it grant s login access to that user for that specific instance The process to login to the target instance and run the script is usually manual and interactive ( over SSH) but can be automated with a wrapper script written in Expect When you run the Expect wrapper script you don’t have to manually log in to each target EC2 instance to run the commands that crea te the user and enable the user to log in By automating th is process an administrator can grant or revoke acce ss to any number of users for any number of instances Expect is public domain software that enables you to automate control of interactive applications such as SSH SFTP SCP FTP passwd etc These applications interactively prompt and expect a user to enter keystrokes in response This is the case with SSH (secure socket shell) the protocol typically used to securely log in to EC2 Linux instances When you use Expect you can write simple scripts to automate SSH interactions This makes it ideal for automating interactive logins to Linux which does not have a login API Several languages either have ports of Expect for example: Perl (expectpm) Python (pexpect) and Java (expect4j) or have projects implementing Expect like functionality All Linux versions come installed with Expect which is also available on Windows ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 5 An Expect Example One example of how you can use Expect to automate your actions with scripts is a connection to an anonymous FTP server To make a manual FTP connection to an anon ymous FTP server you would typically follow these steps: 1 Open a connection to the FTP server 2 At the name prompt type anonymous 3 At the p assword prompt type your email address to get to the FTP prompt 4 From the FTP prompt select whether to download upload or list files But instead of manually making the connection to the FTP server you can run the following script to automate this interaction with Expect The script connects you to the FTP server and then runs the interact command which gives control to the user The variable in the second line of the script $argv takes the name or IP address of the FTP server as command li ne input #!/usr/local/bin/expect spawn FTP $argv expect "Name" send "anonymous \r" expect "Password:" send chiji@amazoncom \r interact + Expect is based on a subset of TCL which can be used to write large and complicated programs However all the commands that give the user login access are included in the Bash script whic h runs on the target EC2 instance The Expect wrapper script simply automate s the connection to each instance and runs the Bash script that is on each instance This uses simple Expect syntax and is relatively simple to write Additional program features such as setting timeouts and checking command line inputs which need to be included in the wrapper script for robustness might require more advance d syntax ArchivedAmazon Web Services – Managing User Logins for 
Amazon EC2 Linux Instances Page 6 Granting Login Access: Steps and Commands Scalable management of multiple user logins to a ta rget Amazon EC2 Linux instance requires user account creation key pair generation or public key retrieval public key installation sudo configuration password login disablement and user private key distribution To grant a user John Smith with the login name johnsmith login access to a target EC2 instance with the IP address 101010100 you must complete the following steps Login to the Target EC2 Instance Log in to the target EC2 Linux instance as ec2user with an SSH client such as PuTTY from a Windows host or the default SSH client from a Linux or Mac host You must have the private key for the target EC2 instance on the device from which you log in If you log in from a Linux host the private key has a pem extension but PuTTY requires a p pk extension Use the PuTTYgen client to convert the pem private key to a file with a ppk extension From a Linux or Mac host the command to log in to the EC2 target instance is: $ ssh –i /pathtoprivatekey ec2user@101010100 ECDSA key fingerprint is d3:f2:70:3c:2b:cf:2b:c3:94:e4:94:74:dc:5c:97:4f Are you sure you want to continue connecting (yes/no)? Yes [ec2user@ip101010100] $ Create the User Account After you log in to the EC2 Linux instance you create the new user account To create this account you run these commands: 1 Log in at the root level $ sudo –i 2 Change the user creation configuration so that the ~/ssh directory is created for each new user $ mkdir /etc/skel/ssh ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 7 3 Set the default permissions for the ssh directory of each new user $ chmod 700 /etc/skel/ssh 4 Create the user johnsmith add him to the rootusers group (for sudo root access) and include a summary of John Smith’s role $ useradd c “John Smith Engineer” d /home –k /etc/skel – m g rootusers \ [G other_group] Johnsmith 5 Su to the new user johnsmith log in to his ~/ssh directory generate a key pair and install the public key on the host $ su johnsmith $ cd /ssh $ sshkeygen –t rsa –b 2048 –f /johnsmith N “” John Smith can then use this key pair to log in to every EC2 instance to which he is granted access To revoke access you can simply delete the public key from his home directory on the target host To grant access you must create an account on the target instance (if one does not exist) and install his public key in his home directory 6 Return to the root level $ exit Next you move the private key to the ec2user home directory and rename it This is a security measure The priva te key will be securely sent to the user from the ec2user home directory or written to a keys database for later distribution to the user 1 Move the private key to the ec2user home directory $ mv ~johnsmith/ssh/johnsmith ~ec2 user/privk _johnsmith ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 8 2 Rename the public key to authorized_keys and set the correct permissions (600) to enable the SSH connection $ mv johnsmithpub authorized_keys $ chmod 600 authorized_keys 3 If the user already has a n existing key pair you can copy the user’s public key either from an EC2 instance (to which the user already has login access) from a keys host or from a keys database to the target host Install that public key in the user’s home directory and change the permissions for the key $ scp –i /pathtoprivatekey johnsmith @hostwherehehas \ login/ssh/authorized_keys johnsmithpub $ mv 
johnsmithpub authorized_keys $ chmod 600 authorized_keys User account creation with the login name is complete Th e login name should be the same as the user’s Windows desktop or Corporate Account Login user name After the account is created and sudo permissions are set the user can log in to the instanc e with the new login name and private key New Ke y Installation and Rotation If the private key is lost perform these steps to remove and replace the lost key : 1 Delete the user’s public key from the home directory on each EC2 instance that the user has access to 2 Generate a new key pair for the user 3 Install the public key of this new key pair in the user’s home directory on the requisite EC2 instances to reinstate login access for that user 4 Securely send the new private key to the user As a mandatory security procedure you should configure your enviro nment to automatically rotate key pairs at frequent intervals For more information see Key Rotation ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 9 Configure Sudo Permissions For Amazon EC2 Linux user privileges are defined in the /etc/sudoers/cloud init file The file usually contains only the entry for the ec2user ec2user ALL=(ALL) NOPASSWD:ALL Add the login name johnsmith to this file to give him root sudo privileges $ sudo –i $ echo “johnsmith ALL=(ALL) NOPASSWD:ALL” >> \ /etc/sudoersd/cloud init If you ha ve a large number of new users to add to the sudo configuration file or if you do not want to manually add each user you can create a file with a list of named user groups and upload that file to each target host The file must include the associated su do privileges for the group When the file is uploaded to the target host the content is appended to the cloud init file You can then assign a new user to one of the groups named in the file and the user inherit s the sudo permissions of that group For e xample you can create a file named grp_permissions which specifies different permissions for different groups $ cat grp_permissions %rootusers ALL=(ALL) NOPASSWD:ALL %dbausers ALL=(ALL) ALL !/usr/bin/passwd root %sysadmin ALL=(root) /bin/mount /bin/ umount %operator ALL=(root) /bin/mount !/bin/umount /efs The rootusers group has full root privileges so any user added to that group during account creation has full sudo privileges The operator group also has root privileges but cannot unmount an EF S file system ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 10 Upload the grp_permissions file to each target EC2 instance and then use the sed command to append the entries in the file to the existing cloudinit file Make sure to verify that the entries in the file do not already exist in the cloudinit file $ sudo –i $ cd /etc/sudoersd $ /bin/sed –i –e ‘$r grp_permissions’ /cloud init Because the sudo syntax is complicated and a syntax error might make it impossible to log in or run sudo on the instance make sure that you are logged in as roo t through a separate terminal when you modify the cloudinit file (/etc/sudoers for other Linux versi ons Errors can be corrected if you are already logged in as root ; you can either correct the incorrect entry or overwrite the file with the backup file ve rsion You should always create a backup copy before you modify the cloudinit file All sudo entries or edits should be made with visudo (not vim) which reviews new entries for syntax errors If it finds an error it gives you the choice to fix the error to exit and not save 
the changes to the file or to save the changes and exit The last choice is not recommended so visudo marks it with (DANGER!) Because you can break sudo when you update /etc/sudoersd/cloud init from a script new sudo configurati on changes additions and customizations should first be tested on a nonproduction host Sudo configuration files that have been tested and work correctly should then be checked in to a version control repository such as AWS CodeCommit from which all production deployments should be sourced ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 11 Sudo with LDAP Sudo permissions can also be defined in LDAP which can synchronize the sudoers configuration file in a large distributed environment To define sudo permissions in LDAP: 1 Rebuild sudo with LDAP support on each EC2 instance You can choose to rebuild one EC2 instance and then create an Amazon Machine Image (AMI) the Golden Image which you can use to spin up other EC2 instances 2 Update the LDAP sc hema 3 Import the /etc/sudoersd/cloud init file into LDAP 4 Configure the sudoers service in nsswitchconf with this command: sudoers: files ldap For more information see the sudoersLDAP manual page The sudo with LDAP method has many benefits: Because there are only two or three queries per invocation it is very fast Data loaded into LDAP always conforms to the schema So unlike sudo which exits if there is a typo sudo with LDAP continues to run Because syntax is verified when data is inserted in LDAP locking is not necessary with LDAP This means that visudo which provides locking and syntax verification when the /etc/sudoersd/cloud init file is modified is no longer needed For information about how to use a keys host see Step 5 in Script Development: Linux Commands and Code Samples ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 12 Automation : The Process Normally to complete the process to gran t a user login access to a specific EC2 instance an administrator or root user must manually log in t o each target instance and run all the commands described in the preceding sections The Expect wrapper script automat ically logs in to each target EC2 instance uploads the Bash script and then runs it to create accounts and the sudo permissions that ena ble login on that target for the specified users This eliminat es the need to manually log in to each target instance to run the script Information to include in the Expect wrapper script can be provided as a csv flat input file with the format in Figure 1 below Instance IP Address User Login Name User Full Name User Role User Groups Action 1111 johnsmith John Smith SA rootusers users add 1111 heidismith Heidi Smith Supermodel users add 1111 abejohn Abraham John Senator dbausers remove 2222 alberteinstein Albert Einstein Scientiest rootusers add 2222 galoisevariste Galois Evariste Math Whiz mlusers add 3333 genghiskhan Genghis Khan Mongol rootusers remove Figure 1 When the Expect script is run against this input file it takes the information for each user (login name full name user group and user role ) logs in to each instance with the admin’s private key and runs the Bash script This create s a login account (if one does not exist) and adds or revoke s access for each user to that instance The Expect wrapper script and Bash script jointly constitute the base tool for managing EC2 Linux login access However they must be integrated into the authorization and security processes of the organi zation to be used 
robustly for i nstance management Figure 2 below is a flow chart that shows the typical steps to grant a user login access to a specific EC2 instance It starts at user account creation continues through setting sudo permissions and finishes with login account testing The same steps are performed —either serially or in parallel —to grant or revoke user login access to multiple instances ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 13 Figure 2 : Steps to create and grant login access to a n EC2 Linux instance This document does not include information about the speci fic internal processes that determine which EC2 instances a user can access and which commands the user is authorized to run because they are included in the company’s security and access policy Additional steps might be added as necessary to conform to any custom security or management requirements for the environment Companies and individuals are advised to review AWS Security Best Practices and make sure they understand h ow to properly secure their environments ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 14 Script Development : Linux Commands and Code Samples To successfully automate user logins to make sure that the process is both robust and secure the automation scripts must be created correctly and the procedures must be correct To complete this process the administrator who runs the scripts must perform all the procedures that follow Confirm Authorization and Network Access 1 Confirm that login access for the user to the target L inux instances has been authorized through the requisite internal processes 2 Confirm that t he remote Linux host that runs the Expect wrapper scrip t is able to connect to each target Linux instance to which the user is to be granted access This is important if the target instance might not have a public IP address In that case the host from which the wrapper script is run must be on the same network as the target instance 3 Confirm that after the administrator logs in to each target instance as ec2user the administrator can su to root because the user creation scri pt must be run as root Create User Generat e Key Pair Install Public Key 1 Log in to the target server as ec2user and run the Bash script This create s the user account a home directory with an ssh directory and sets corre ct directory permissions ( 700) a Add the user to the group from which to in herit sudo permissions b Generate or retrieve a key pair and install the public k ey in the user’s home directory c Set the correct permissions on both keys and move t he private key to a secure repository ~ec2user/privk temporarily hold s the private key for download before it is deleted For the specific login name johnsmith these commands cr eate the johnsmith user account generate a key pair and install a copy of the public key in johnsmith’s home directory Both keys are then moved to a subdirectory on the target host ( ~ec2 user/privk ) This is the local key directory and is defined in the variable LOCAL_KEYS_REPO in the Bash script Newly created keys are downloaded from this directory on the target host The private key is then securely forwarded to the user and copies of the private key and public key are moved to the database or repository ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 15 path specified in the variable KEYS_DATABASE A user or application with proper IAM permissions can retrieve this public 
key from the repository or database and install it on any target EC2 instance to which login access for the user is required The following variables refer to addresses or paths to the locations of users’ SSH keys and must be set in the Bash script $KEYS_DATABASE =”pathtopubickeyslocation forallusers” $LOCAL_KEYS_REPO =”pathtorepository ontargetinstance that temporarily holdsusers’newlycreatedkeypair” $RECEIVED_KEYS_REPO =”Directory ontargetinstance towhich users’publickeysarecopiedoruploaded” 2 If john smith already has a key pair retrieve his public key either from a host to which he already has access or from a keys repository Upload t he public key and the user creation script to the target ins tance The RECEIVED_KEYS_REPO variable specifies a directory on the target instance to which a user’s existing public key should be uploaded o For automation when the user creation script runs on the target Linux instance it first verif ies if a public key for the user is present in the RECEIVED_KEYS_REPO directory If it is not the script generates a new key pair installs the public key prompts the admin on the remote host to download both keys from LOCAL_KEYS_REPO and then deletes b oth keys o If a public key with the user’s login name is present in the local RECEIVED_KEYS_REPO directory then t hat public key is moved to the ssh directory of that user on that instance to grant login access to the target instance o The keys database address is included in the shell variable KEYS_DATABASE and keeps the login data for each user (full name login name public key private key authorized hosts sudo permissions and other user metadata ) The KEYS_DATABASE could refer to a csv file in S3 o r an AWS managed relational database such as Amazon Relational Database Service (Amazon RDS ) which provides six familiar database engines to choose from (Amazon Aurora PostgreSQL MySQL MariaDB Oracle and Microsoft SQL Server ) Amazon Aurora is a MySQL compatible relational database engine that combines the speed and availability of high end commercial databases with the simplicity and cost effectivene ss of open source databases Amazon Aurora provides up to five times better performance than MySQL with the security availability and reliability of a commercial database at one tenth the cost For more information about the infrastructure design on AWS that provides robust login access management see Architecture for EC2 Linux Login Access Management ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 16 The Bash script can read the keys database to find the existing public key and other metadata for a user It can also write a new key pair to the keys database For greater security you can create separate keys database s for user public and private keys Key Distribution and Testing 1 Download the private key for the user to the key s database To test sudo for this user update the /etc/sudoersd/cloud init file change to the user and run a sudo command For example to test a login for john smith who has full root access run the following commands as ec2user on the target instance : Source John Smith’s full env ironment $ sudo su – john smith Test sudo to root For a user with limited access the explicit sudo commands permitted should be tested $ sudo –i 2 Securely send the private ke y to the user for logins to all EC2 Linux instances to which the user has been granted access The user ’s public key will thereafter be used to grant login access to EC2 Linux instances To remov e the user ’s login 
access remov e the authorized_key s file from the ssh directory in the user’s home directory Two Sample Scripts The commands to automate user login access that are described in this document are included in a zip file with two working scripts : the user creation Bash script and the auto instance connect Expect wrapper script You can download both scripts from my S3 buck et unzip them and test them on a n EC2 Linux host The Bash user creation script runs the actual commands required to create the user and grant login access on a target instance These are the commands captured when you log in to the target instance and manually perform these operations The script takes a user login the IP address of a key s host and the action to perform (add or remove login access of a user to the instance) as input It then creates the user account generates a key pair or retrieve s the user’s existing public key installs the public key in the user’s home sets the user’s sudo privileges and tests the user ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 17 account To revoke user access it simply remove s the user’s public key from the user’s h ome directory on the target instance It must be runs as root A forloop in the Bash script can be used to revoke or grant login access to multiple users for the same EC2 instance The Expect auto instance connect script automates conn ection to each target instance It is designed to be run by an admin istrator from a remote Linux host and connects to each target Linux instance (the instances to which the user is to be granted login access) to configure user login access It uploads the user creation Bash script to the instance (if it is not already installed) and then runs it with the required command line inputs —user’s login name keys host IP address add or remove access —to grant or remove login access to the instance for the specified user Because the Expect script simulates the in terac tive commands required to make an SSH connection to an EC2 instance and run commands it requires the path to the private key of the ec2user user who runs it A Forloop in the Expect wrapper script can be used to connect to multiple instances to grant or revoke user login access to one or more users This Expect wrapper script is typically run from an EC2 Linux instance but can also be run from a Windows host The latter requires installation of Expect and SSH packages for Windows For more information see Further Reading After you run both scripts the user is granted login access with a unique login name to the specified EC2 instance s The login name can be the same as the user’s single signon (SSO) login name and can be given root or other limited sudo privileges Both scripts illustrate automation of the process to grant or revoke login access They must be modified before they can be used in production For example you could add logging robust failure recovery etc Administrators and developers might also need to modify the script s for use on other EC2 Linux versions or for custom management and security needs Instead of uploading the user creation Bash script to each target Linux instance at r untime you can preinstall the script on each instance (include it in an Amazon Machine Image) install it as an RPM package on the EC2 Linux instance install it from ec2user data on initial boot or install it from a configuration server such as a Chef server After it is installed on the EC2 Linux instance the Expect wrapper script does not have to load the 
script on to each target instance instead it connect s to the target and run s the installed scripts ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 18 Architecture for EC2 Linux Login Access Management Database Tier To perform automated user logins in production environments the user login data and associated metadata should be stored in a database Because there is no requirement for millisecond latencies and the amount of login data for a ll users is unlikely to be very large Amazon RDS is an ideal keys database Amazon RDS is a managed database service available in a choice of engines It also provides significant security benefits For more inf ormation s ee the Overview of AWS Security Database S ervices whitepaper The RDS database instance hold s the user data required to provide login acce ss to a ny EC2 Linux host A database schema for user logins should include the following tables and fields : User table – UserID (primary key) user login name first name last name email mobile phone number user role public key private key key pair creation date admin creator of key Linux h ost table – Hostnames IP FQDN EC2 type host function ( database web) environment ( production QA test) Access options table – Group or user (root user poweruser operator and custom sudo configurations etc) sudo permissions authorization date You can choose not to store the user’s private key in the database This means that if the user ’s private key is lost a new key pair must be generated for that user Public keys that are rotated a re irretrievably lost and s o old public keys should not be retained in the database Only administrators should have read access to the keys database The database should also be replicated across Availability Zones (MultiAZ) for high availability otherwise it might not be possible to grant access to users if the database is down or unreachable Because significant security problems could arise if the users ’ login data and metadata are compromised the data in the Amazon RDS datab ase instance should be encrypted at rest Amazon RDS also supports encrypting an Oracle or SQL Server DB instance with Transparent Data Encryption (TDE) TDE can be used in conjunction with encryption at rest although using TDE and encryption at rest simultaneously might cause a slight decrease in database performance To manage the keys used to encrypt and decrypt your Amazon RDS resources you can use the AWS Key Management Service (KMS) AWS KMS combines secure highly available hardware and softwa re to provide a key management system that is scaled for the cloud With AWS KMS you can create encryption keys and define the policies that control how these keys can be used AWS KMS supports CloudTrail so you can audit key usage to verify that keys are used appropriately ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 19 You can use SSL from your application to encrypt a connection to a DB instance that runs MySQL MariaDB Amazon Aurora SQL Server Oracle or PostgreSQL This provides end toend encryption of data in transit and at rest Application Tier The infrastructure for login access management in a production environment can be configured as a standard three tier architecture with the database behind a Security Group that is only reachable from the application server The application server can be a small T2 instance The user creation script does not query the database directly but goes through th e application server The user creation script 
and the Expect wrapper script can be configured to run only from the application server You can then limit logins to the application server to specific administrators Web Tier The web tier provides the inte rface through which user s can request login access to specific servers and enables user s to securely download their private key s The web server can also be a T2 instance and should only allow connections over HTTPS Connections from the web server to the application server can also be encrypted Automation Improvements To simplify administration of user login accounts you can add a graphical user interface ( GUI) in front of the backend From this interface you can click a user name and select the group of instances to which you want to grant or remove access for that user The backend processing is still performed by the user creation script and the auto instance connect script After a production version of the automation script is built with the threetier architectur e discussed in the preceding section integration with A ctive Directory (AD) or any SAML 20 compliant system is feasible The target Linux instances as well as the privileges the user should have on the instance s (root or non root) can be r ead from your AD server and mapped to the Linux user group that has equivalent sudo permissions When the user account is created it automatically inherits the sudo privileges of that Linux group However to implement this solution you must specif y a call to the LD AP/ADSI API of your AD server to retrieve the hosts a nd privileges authorized for each EC2 instance for that user The script receives that input and creates the user accounts adds or revokes their access and raises or removes permissio n by updating the group the user belongs to on the target instances ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 20 Use Cases There are several production use cases for which automated login access management are required Ec2User (Default User) Key Rotation Key r otation for the ec2user account should be frequent but is rarely done in production environments simply because of the amount of manual effort required to create and reinstall keys in a moderately large environment When you use the automated login access scripts the effort required is significantly reduced so key pair generation and rotation can be performed more frequent ly which significantly improves security A key pair can be created named and imported to the AWS account with the AWS CLI The import command is: $ aws ec2 import keypair keyname mykey publickey material \ MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuhrGNglwb2Zz/Qcz1zV+ l12fJOnWmJxC2GMwQOjAX/L7p01o9vcLRoHXxOtcHBx0TmwMo+i85HWMUE7aJtYc lVWPMOeepFmDqR1AxFhaIc9jDe88iLA07VK96wY4oNpp8+lICtgCFkuXyunsk4+K huasN6kOpk7B2 w5cUWveooVrhmJprR90FOHQB2Uhe9MkRkFjnbsA/hvZ/Ay0Cflc 2CRZm/NG00lbLrV4l/SQnZmP63DJx194T6pI3vAev2+6UMWSwptNmtRZPMNADjmo 50KiG2c3uiUIltiQtqdbSBMh9ztL/98AHtn88JG0s8u2uSRTNEHjG55tyuMbLD40 QEXAMPLE Output: { "KeyName": "my key" "KeyFingerprint": "1f:51:ae: 28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca" } With this command the public key is imported to AWS When you spin up Linux instances that key pair is available for selection from the drop down list of existing key pair names on the Console Any new instanc e that selects this key pair will have the public key installed on the instance Admin istrators with the private key can log in to new instances they did not create as ec2user For ec2user key rotatio n a new key pair is created 
named and installed on all instances for ec2user with the user creation script The public key of the new key pair is then imported into AWS with either the console or CLI the old key pair is deleted and the new private key is securely distributed to authorized users ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 21 Cross Environment Access Regardless of the risk associated with granting login access to production systems to developers or third party consultants it is sometimes necessary To provide root access in those circumstances is dangerous The automated login acce ss management tools described in this document can give admin istrators control over the commands that a user can run on a production EC2 instance An administrator could create a group with limited sudo privileges and add user s to this group when accounts are created The administrator could also remove the user’s public key to revoke login access to the instance after the specific task for which access is needed is completed Authorization and Permission s for Non Employees The capacity to grant or revoke login access to target EC2 instances and provide granular control over the actions that can be perform ed by a user on the target instance offers great flexibility It is particularly useful when you need to give login access to temporary employees partn ers consultants software vendors or applications whose actions on target hosts must be limited In addition every action performed on the target host by the user can be monitored and captured by a shell such as sudosh or rootsh which logs all key str okes A tracking shell can be specified when the account is created for a specific user on the target instance Conclusion The concepts explained in this document for automating login access a process which is ordinarily interactive for all types of Linu x and is therefore manually intensive can be used to develop an application or script that has great benefits and broad utility across the enterprise Automated login access management will eventually become a native feature of Amazon EC2 Linux instances For now developing an d using an automation tool will be invaluable to administrators engineers architects system administrators and account managers for managing user access Contributors The following individuals and organizations contributed to th is document: Chiji Uzo AWS Solutions Architect ArchivedAmazon Web Services – Managing User Logins for Amazon EC2 Linux Instances Page 22 Further Reading For more information see these resources : user creation script : https://s3 uswest 2amazonawscom/samplescri pts/user creation Overview of AWS Security – Database Services (whitepaper) : https://d0awsstaticcom/whitepapers/Security/Security_Database_Services_Wh itepaperpdf Expect for Windows : http://docsactivestatecom/activetcl/84/expect4win/ex_usagehtml#cross_platform Open SSH for Windows: http://sshwindowssourceforgenet/
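As an illustrative addendum to the Ec2User (Default User) Key Rotation use case above, the following sketch shows the account-level import-and-replace steps using the AWS SDK for Python (boto3) rather than the AWS CLI. It is not part of the original whitepaper: the key pair names, file path, and region are hypothetical placeholders, the new key pair is assumed to have already been generated locally (for example, with ssh-keygen), and installing the new public key on each instance is left to the user creation script described earlier.

```python
# Hypothetical sketch of rotating the ec2-user key pair at the AWS account level:
# import the new public key under a new name, then remove the old key pair name.
# Distribution of the new public key to running instances is handled separately
# (for example, by the user creation script described in this whitepaper).
import boto3

OLD_KEY_NAME = "ec2user-2018-q2"  # hypothetical existing key pair name
NEW_KEY_NAME = "ec2user-2018-q3"  # hypothetical replacement name
NEW_PUBLIC_KEY_PATH = "/secure/keys/ec2user-2018-q3.pub"  # hypothetical path

ec2 = boto3.client("ec2", region_name="us-west-2")  # assumption: adjust region

with open(NEW_PUBLIC_KEY_PATH, "rb") as f:
    public_key_material = f.read()

# Import the new public key so future instance launches can select it.
ec2.import_key_pair(KeyName=NEW_KEY_NAME, PublicKeyMaterial=public_key_material)

# Remove the old key pair name once the new public key has been installed on
# all existing instances and the new private key has been distributed.
ec2.delete_key_pair(KeyName=OLD_KEY_NAME)

print(f"Imported {NEW_KEY_NAME} and deleted {OLD_KEY_NAME}")
```

Deleting the key pair name in AWS does not remove the old public key from instances that were launched with it, so the per-instance replacement and the secure distribution of the new private key still have to be performed as described in the walkthrough.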
|
General
|
consultant
|
Best Practices
|
Managing_Your_AWS_Infrastructure_at_Scale
|
ArchivedManaging Your AWS Infrastructure at Scale Shaun Pearce Steven Bryen February 2015 This paper has been archived For the latest technical guidance on AWS Infrastructure see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 2 of 32 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations con tractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreem ent between AWS and its customers ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 3 of 32 Contents Abstract 4 Introduction 4 Provisioning New EC2 Instances 6 Creating Your Own AMI 7 Managing AMI Builds 9 Dynamic Configuration 12 Scripting Your Own Solution 12 Using Configuration Management Tools 16 Using AWS Services to Help Manage Your Environments 22 AWS Elastic Beanstalk 22 AWS OpsWorks 23 AWS CloudFormation 24 User Data 24 cfninit 25 Using the Services Together 26 Managing Application and Instance State 27 Structured Application Data 28 Amazon RDS 28 Amazon DynamoDB 28 Unstructured Application Data 29 User Session Data 29 Amazon ElastiCache 29 System Metrics 30 Amazon CloudWatch 30 Log Management 31 Amazon CloudWatch Logs 31 Conclusion 32 Further Reading 32 ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 4 of 32 Abstract Amazon Web Services (AWS) enables organizations to deploy large scale application infrastructures across multiple geographic locations When deploying these large cloud based applications it’s important to ensure that the cost and complexity of operating such systems does not increase in direct proportion to their size This whitepaper is intended for existing and potential customers —especially architects developers and sysops administrators —who want to deploy and manage their infrastructure in a scalable and predictable way on AWS In this whitepaper we describe tools and techniques to provision new instances configur e the instances to meet your requirements and deploy your application code We also introduce strategies to ensure that your instances remain stateless resulting in an architecture that is more scalable and fault tolerant The techniques we describe allow you to scale your service from a single instance to thousand s of instances while maintaining a consistent set of processes and tool s to manage them For the purposes of this whitepaper w e assume that you have knowledge of basic scripting and core services such as Amazon Elastic Compute Cloud (Amazon EC2) Introductio n When designing and implementing large cloud based applications it’s important to consider how your infrastructure will be managed to ensure the cost and complexity of running such systems is minimiz ed When you first begin using Amazon EC2 it is easy 
to manage your EC2 instances just like regular virtualized servers running in your data center You can create an instance log in configure the operating system install any additional packages and install your applic ation code You can main tain the instance by installing security patches rolling out new deployments of your code and modifying the configuration as needed Despite the operational overhead you can continue to manage your instances in this way for a long time However your in stances will inevitably begin to diverge from their original specification which can lead to inconsistencies with other instances in the same environment This divergence from a known baseline can become a huge challenge when managing large fleets of instances across multiple environments Ultimately it will lead to service issues because your environments will become less predictable and more difficult to maintain The AWS platform provides you with a set of tools to address this challenge with a different approach By using Amazon EC2 and associated services you can specify and manage the desired end state of your infrastructure independently of the EC2 instances and other running components ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 5 of 32 For example with a traditional approach you would alter the configuration of an Apache server running across your web servers by logging in to each server in turn and manually mak ing the change By using the AWS platform you can take a different approach by chang ing the underlying specification of your web servers and launch ing new EC2 instances to replace the old ones This ensures that each instance remains identical; it also reduces the effort to implement the change and reduces the likelihood of errors being introduced When you start to think of yo ur infrastructure as being defined independently of the running EC2 instances and other components in your environments you can take greater advantage of the benefits of dynamic cloud environment s: • Software defined infrastructure – By defining your infrastructure using a set of software art ifacts you can leverage many of the tools and techniques that are used when developing software components This includes managing the evolution of your infrastructure in a version control system as well as using continuous integration (CI) processes to continually test and validate infrastructure changes befo re deploying them to production • Auto Scaling and selfhealing – If you automatically provision your new instances from a consistent specification you can use Auto Scaling groups to manage the number of instances in an EC2 fleet For example you can set a condition to add new EC2 instances in increments to the Auto Scaling group when the average utilization of your EC2 fleet is high You can also use Auto Scaling to detect impaired EC2 instances and unhealthy applications and replace the instances without your intervention • Fast environment provisioning – You can quickly and easily provision c onsistent environments which opens up new ways of working within your teams For example you can provision a new environment to allow testers to validate a new version of your application in their own personal test environment s that are isolated from other changes • Reduce costs – Now that you can provision environments quickly you also have the option to remove them when they are no longer needed This reduce s costs because you pay only for the resources that you use • Blue green deployments – 
You can deploy new versions of your application by provisioning new instances (containing a new version of the code) beside your existing infrastructure Y ou can then switch traffic between environments in an approach known as bluegreen deployments This has many benefits over traditional deployment strategies including the ability to quickly and easily roll back a deployment in the event of an issue To leverage these advantages your infrastructure must have the following capabilities: ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 6 of 32 1 New infrastructure components are automatically provisioned from a known version controlled baseline in a repeatable and predictable manner 2 All instances are stateless so that they can be removed and destroyed at any time without the risk of losing applicat ion state or system data The following figure shows the overall process: Figure 1: Instance Lifecycle and State M anagement The following sections outline tools and techniques that you can use to build a system with these capabilities By moving to an architecture where your instances can be easily provisioned and destroyed with no loss of data you can fundamentally change the way you m anage your infrastructure Ultimately you can scale your infrastructure over time without significantly increasing the operational overhead associated with it Provisioning New EC2 Instances A number of external events will require you to provision new inst ances into your environment s: • Creating new instances or replicating existing environments • Replacing a failed instance in an existing environment • Responding to a “sca le up” event to add additional instances to an Auto Scaling group • Deploying a new version of your software stack (by using bluegreen deployments ) Some of these events are difficult or even impossible to predict so it’s important that the process to create new instances into your environment is fully automated repeatable and consistent The process of automatically provisioning new instances and bringing them into service is known as bootstrapping There are multiple approaches to bootstrap ping your Amazon EC2 instances The two most popular approaches are to either create your own EC2 Instance Version Control System1 Durable Storage 2ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 7 of 32 Amazon Machine Ima ge (AMI) or to use dynamic configuration We explain both approaches in the following sections Creating Your Own AMI An Amazon Machine Image (AMI) is a template that provides all of the information required to launch an Amazon EC2 instance At a minimum it contains the base operating system but it may also include additional configuration and software You can launch multiple instances of an AMI and you can also launch different types of instances from a single AMI You have several options when launch ing a new EC2 instance : • Select an AMI provided by AWS • Select an AMI provided by the community • Select an AMI containing preconfigured software from the AWS Marketplace1 • Create a custom AMI If launch ing an instance from a base AMI containing only the operating system you can further customiz e the instance with additional configuration and software afte r it has been launched I f you create a custom AMI you can launch an instance that already contains your complete software stack thereby removing the need for a ny runtime configuration However b efore you decide whether to create a custom AMI you should 
understand the advantages and disadvantages Advantages of custom AMIs • Increases s peed – All configuration is packaged into the AMI itself which significantly increases the speed in which new instances can be launched This is particularly useful during Auto Scaling events • Reduce s external dependencies – Packaging everything into an AMI mean s that there is n o dependenc y on the availability of external services when launching new instances ( for example package or code repositories) • Remove s the reliance on complex configuration scripts at launch time – By preconfiguring your AMI scaling events and instance replacement s no longer rely on the successful completion of configuration scripts at launch time This reduces the likelihood of operational issues caused by erroneous scripts Disadvantages of custom AMIs 1 https://awsamazoncom/marketplace ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 8 of 32 • Loss of agility – Packaging everything into an AMI means that even simple code changes and defect fixes will require you to produce a new AMI This increase s the time it takes to develop test and release enhancements and fixes to your application • Complexity – Managing the A MI build process can be complex You need a process that enables the creation of consistent repeatable AMIs where the changes between revisions are identifiable and auditable • Runtime configuration requirements – You might need to make additional customizations to your AMIs based on runtime information that cannot be known at the time the AMI is created For example the database connection string required by your application might change depending on where the AM I is used Given the se advantages and disadvantages we recommend a hybrid approach : build static components of your stack into AMIs and configure dynamic aspects that change regularly (such as application code) at run time Consider the following factors to help you decide what configuration to include within a custom AMI and what to include in dynamic run time scripts: • Frequency of deployments – How often are you likely to deploy enhancements to your system and at what level in your stack will you make the deployments? For example you might deploy changes to your application on a daily basis but you might upgrade your JVM version far less frequently • Reduction on external dependencies – If the configuration of your system depends on other external syst ems you might decide to carry out these configuration steps as part of an AMI build rather than at the time of launching an instance • Requirements to scale quickly – Will your application use Auto Scaling groups to adjust to changes in load? If so how quickly will the load on the application increase? 
This will dictate the speed at which you need to provision new instances into your EC2 fleet.

Once you have assessed your application stack based on the preceding criteria, you can decide which elements of your stack to include in a custom AMI and which will be configured dynamically at the time of launch. The following figure shows a typical Java web application stack and how it could be managed across AMIs and dynamic scripts.

Figure 2: Base, Foundational, and Full AMI Models (each model contains the same stack of OS, OS users and groups, JVM, Tomcat, Apache, application frameworks, and application code; the models differ in how much of that stack is baked into the AMI and how much is applied by bootstrapping code and application configuration at launch)

In the base AMI model, only the OS image is maintained as an AMI. The AMI can be an AWS-managed image or an AMI that you manage that contains your own OS image. In the foundational AMI model, elements of a stack that change infrequently (for example, components such as the JVM and application server) are built into the AMI. In the full stack AMI model, all elements of the stack are built into the AMI. This model is useful if your application changes infrequently or if your application has rapid Auto Scaling requirements (which means that dynamically installing the application isn't feasible). However, even if you build your application into the AMI, it still might be advantageous to dynamically configure the application at run time because it increases the flexibility of the AMI. For example, it enables you to use your AMIs across multiple environments.

Managing AMI Builds

Many people start by manually configuring their AMIs using a process similar to the following:
1. Launch the latest version of the AMI.
2. Log in to the instance and manually reconfigure it (for example, by making package updates or installing new applications).
3. Create a new AMI based on the running instance.

Although this manual process is sufficient for simple applications, it is difficult to manage in more complex environments where AMI updates are needed regularly. It's essential to have a consistent, repeatable process to create your AMIs. It's also important to be able to audit what has changed between one version of your AMI and another. One way to achieve this is to manage the customization of a base AMI by using automated scripts. You can develop your own scripts, or you can use a configuration management tool. For more information about configuration management tools, see the Using Configuration Management Tools section in this whitepaper.

Using automated scripts has a number of advantages over the manual method. Automation significantly speeds up the AMI creation process. In addition, you can use version control for your scripts and configuration files, which results in a repeatable process where the change between AMI versions is transparent and auditable. This automated process is similar to the manual process:
1. Launch the latest version of the AMI.
2. Execute the automated configuration using your tool of choice.
3. Create a new AMI image based on the running instance.

You can use a third-party tool such as Packer (https://packer.io) to help automate the process.
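To make that idea concrete, here is a minimal sketch of a Packer template that follows the same three steps: launch the latest base AMI, run a provisioning script, and register the result as a new AMI. The region, source AMI ID, and configure.sh script are placeholders for this sketch, not values taken from this paper.

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-12345678",
    "instance_type": "t2.micro",
    "ssh_username": "ec2-user",
    "ami_name": "webapp-baked-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "configure.sh"
  }]
}

Running packer build against a template like this launches a temporary instance from the source AMI, runs the provisioning script on it, creates the new AMI, and then terminates the builder instance.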
However, many find that this approach is still too time consuming for an environment with multiple, frequent AMI builds across multiple environments. If you use the Linux operating system, you can reduce the time it takes to create a new AMI by customizing an Amazon Elastic Block Store (Amazon EBS) volume rather than a running instance. An Amazon EBS volume is a durable, block-level storage device that you can attach to a single Amazon EC2 instance. It is possible to create an Amazon EBS volume from a base AMI snapshot and customize this volume before storing it as a new AMI. This replaces the time taken to initialize an EC2 instance with the far shorter time needed to create and attach an EBS volume.

In addition, this approach makes use of the incremental nature of Amazon EBS snapshots. An EBS snapshot is a point-in-time backup of an EBS volume that is stored in Amazon S3. Snapshots are incremental backups, meaning that only the blocks on the device that have changed after your most recent snapshot are saved. For example, if a configuration update changes only 100 MB of the blocks on an 8 GB EBS volume, only 100 MB will be stored to Amazon S3.

To achieve this, you need a long-running EC2 instance that is responsible for attaching a new EBS volume based on the latest AMI build, executing the scripts needed to customize the volume, creating a snapshot of the volume, and registering the snapshot as a new version of your AMI. For example, Netflix uses this technique in their open source tool called aminator (https://github.com/Netflix/aminator). The following figure shows this process; a CLI sketch of the same flow follows the steps.

Figure 3: Using EBS Snapshots to Speed Up Deployments
1. Create the volume from the latest AMI snapshot.
2. Attach the volume to the instance responsible for building new AMIs.
3. Run automated provisioning scripts to update the AMI configuration.
4. Snapshot the volume.
5. Register the snapshot as a new version of the AMI.
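The shell sketch below shows roughly what those five steps might look like with the AWS CLI when run from the long-running builder instance. The snapshot, instance, and device identifiers and the configure.sh script are placeholders; a production tool such as aminator also handles waiting for attachments, chroot setup, and error handling, all of which are omitted here for brevity.

#!/bin/bash
# 1. Create a working volume from the latest AMI's root snapshot (IDs are placeholders)
VOLUME_ID=$(aws ec2 create-volume --snapshot-id snap-11111111 \
  --availability-zone us-east-1a --query VolumeId --output text)

# 2. Attach the volume to this builder instance
aws ec2 attach-volume --volume-id $VOLUME_ID \
  --instance-id i-22222222 --device /dev/sdf

# 3. Mount the volume and run the provisioning script against it
mount /dev/xvdf /mnt/build
cp configure.sh /mnt/build/tmp/
chroot /mnt/build /tmp/configure.sh
umount /mnt/build
aws ec2 detach-volume --volume-id $VOLUME_ID

# 4. Snapshot the customized volume
SNAPSHOT_ID=$(aws ec2 create-snapshot --volume-id $VOLUME_ID \
  --description "webapp build" --query SnapshotId --output text)

# 5. Register the snapshot as a new AMI version
aws ec2 register-image --name "webapp-$(date +%Y%m%d%H%M)" \
  --architecture x86_64 --virtualization-type hvm \
  --root-device-name /dev/xvda \
  --block-device-mappings "[{\"DeviceName\":\"/dev/xvda\",\"Ebs\":{\"SnapshotId\":\"$SNAPSHOT_ID\"}}]"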
Dynamic Configuration

Now that you have decided what to include into your AMI and what should be dynamically configured at run time, you need to decide how to complete the dynamic configuration and bootstrapping process. There are many tools and techniques that you can use to configure your instances, ranging from simple scripts to complex centralized configuration management tools.

Scripting Your Own Solution

Depending on how much preconfiguration has been included into your AMI, you might need only a single script, or set of scripts, as a simple and elegant way to configure the final elements of your application stack.

User Data and cloud-init

When you launch a new EC2 instance by using either the AWS Management Console or the API, you have the option of passing user data to the instance. You can retrieve the user data from the instance through the EC2 metadata service and use it to perform automated tasks to configure instances as they are first launched. When a Linux instance is launched, the initialization instructions passed into the instance by means of the user data are executed by using a technology called cloud-init. The cloud-init package is an open source application built by Canonical. It's included in many base Linux AMIs (to find out if your distribution supports cloud-init, see the distribution-specific documentation). Amazon Linux, a Linux distribution created and maintained by AWS, contains a customized version of cloud-init. You can pass two types of user data to cloud-init running on your EC2 instance: shell scripts or cloud-init directives.

For example, the following shell script can be passed to an instance to update all installed packages and to configure the instance as a PHP web server:

#!/bin/sh
yum update -y
yum -y install httpd php php-mysql
chkconfig httpd on
/etc/init.d/httpd start

The following user data achieves the same result but uses a set of cloud-init directives:

#cloud-config
repo_update: true
repo_upgrade: all

packages:
 - httpd
 - php
 - php-mysql

runcmd:
 - service httpd start
 - chkconfig httpd on

AWS Windows AMIs contain an additional service, EC2Config, that is installed by AWS. The EC2Config service performs tasks on the instance such as activating Windows, setting the Administrator password, writing to the AWS console, and performing one-click sysprep from within the application. If launching a Windows instance, the EC2Config service can also execute scripts passed to the instance by means of the user data. The data can be in the form of commands that you run at the cmd.exe prompt or Windows PowerShell prompt.

This approach works well for simple use cases. However, as the number of instance roles (web, database, and so on) grows, along with the number of environments that you need to manage, your scripts might become large and difficult to maintain. Additionally, user data is limited to 16 KB, so if you have a large number of configuration tasks and associated logic, we recommend that you use the user data to download additional scripts from Amazon S3 that can then be executed.

Leveraging EC2 Metadata

When you configure a new instance, you typically need to understand the context in which the instance is being launched. For example, you might need to know the hostname of the instance, or which region or Availability Zone the instance has been launched into. The EC2 metadata service can be queried to provide such contextual information about an instance, as well as retrieving the user data. To access the instance metadata from within a running instance, you can make a standard HTTP GET request using tools such as cURL or the GET command. For example, to retrieve the hostname of the instance, you can make an HTTP GET request to the following URL: http://169.254.169.254/latest/meta-data/hostname

Resource Tagging

To help you manage your EC2 resources, you can assign your own metadata to each instance in addition to the EC2 metadata that is used to define hostnames, Availability Zones, and other resources. You do this with tags. Each tag consists of a key and a value, both of which you define when the instance is launched. You can use EC2 tags to give further context to the instance being launched. For example, you can tag your instances for different environments and roles, as shown in the following figure.

Figure 4: Example of EC2 Tag Usage (for example, instance i-1bbb2637 tagged with environment = production and role = web, and instance i-f2871ade tagged with environment = dev and role = app)

As long as your EC2 instance has access to the Internet, these tags can be retrieved by using the AWS Command Line Interface (CLI) within your bootstrapping scripts to configure your instances based on their role and the environment in which they are being launched.
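As a brief illustration of the idea, the snippet below shows how a bootstrapping script might discover its own instance ID from the metadata service and then read the role tag with the CLI. The tag key and the region handling are assumptions made for this sketch rather than details taken from the paper.

#!/bin/bash
# Ask the metadata service which instance this is and where it is running
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
REGION=${AZ%?}   # strip the trailing zone letter to get the region

# Look up the value of this instance's 'role' tag
ROLE=$(aws ec2 describe-tags --region $REGION \
  --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=role" \
  --query 'Tags[0].Value' --output text)

echo "Configuring instance $INSTANCE_ID as a $ROLE server"

A bootstrap script can branch on the retrieved role (and on an equivalent environment tag) to decide which additional configuration to apply.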
Putting It All Together

The following figure shows a typical bootstrapping process using user data and a set of configuration scripts hosted on Amazon S3.

Figure 5: Example of an End-to-End Workflow (instance launch request → user data received and exposed through the metadata service → base configuration downloaded from Amazon S3 and executed → server role retrieved via describe-tags and the matching role overlay script downloaded and executed → server environment retrieved via describe-tags and the matching environment overlay script downloaded and executed → bootstrap complete)

This example uses the user data as a lightweight mechanism to download a base configuration script from Amazon S3. The script is responsible for configuring the system to a baseline across all instances, regardless of role and environment (for example, the script might install monitoring agents and ensure that the OS is patched). This base configuration script uses the CLI to retrieve the instance's tags. Based on the value of the "role" tag, the script downloads an additional overlay script responsible for the additional configuration required for the instance to perform its specific role (for example, installing Apache on a web server). Finally, the script uses the instance's "environment" tag to download an appropriate environment overlay script to carry out the final configuration for the environment the instance resides in (for example, setting log levels to DEBUG in the development environment). To protect sensitive information that might be contained in your scripts, you should restrict access to these assets by using IAM roles (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html).

Using Configuration Management Tools

Although scripting your own solution works, it can quickly become complex when managing large environments. It also can become difficult to govern and audit your environment, such as identifying changes or troubleshooting configuration issues. You can address some of these issues by using a configuration management tool to manage instance configurations. Configuration management tools allow you to define your environment's configuration in code, typically by using a domain-specific language. These domain-specific languages use a declarative approach to code, where the code describes the end state and is not a script that can be executed. Because the environment is defined using code, you can track changes to the configuration and apply version control. Many configuration management tools also offer additional features such as compliance auditing and search.

Push vs. Pull Models

Configuration management tools typically leverage one of two models: push or pull. The model used by a tool is defined by how a node (a target EC2 instance in AWS) interacts with the master configuration management server. In a push model, a master configuration management server is aware of the nodes that it needs to manage and pushes the configuration to them remotely. These nodes need to be preregistered on the master server. Some push tools are agentless and execute configuration remotely using existing protocols such as SSH. Others push a package, which is then executed locally using an agent. The push model typically has some constraints when working with dynamic and scalable AWS resources:
• The master server needs to have information about the nodes that it needs to manage. When you use tools such as Auto Scaling, where nodes might come and go, this can be a challenge.
• Push systems that do remote execution do not scale as well as systems where configuration changes are offloaded and executed locally on a node. In large environments, the master server might get overloaded when configuring multiple systems in parallel.
• Connecting to nodes remotely requires specific ports to be open inbound to your nodes. For some remote execution tools, this includes remote SSH.

The second model is the pull model. Configuration management tools that use a pull system use an agent that is installed on a node. The agent asks the master server for configuration. A node can pull its configuration at boot time, or agents can be daemonized to poll the master periodically for configuration changes. Pull systems are especially useful for managing dynamic and scalable AWS environments. Following are the main benefits of the pull model:
• Nodes can scale up and down easily, because the master does not need to know they exist before they can be configured. Nodes can simply register themselves with the server.
• Configuration management masters require less scaling when using a pull system, because all processing is offloaded and executed locally on the remote node.
• No specific ports need to be opened inbound to the nodes. Most tools allow the agent to communicate with the master server by using typical outbound ports such as HTTPS.

Chef Example

Many configuration management tools work with AWS. Some of the most popular are Chef, Puppet, Ansible, and SaltStack. For our example in this section, we use Chef to demonstrate bootstrapping with a configuration management tool. You can use other tools and apply the same principles. Chef is an open source configuration management tool that uses an agent (chef-client) to pull configuration from a master server (Chef server). Our example shows how to bootstrap nodes by pulling configuration from a Chef server at boot time. The example is based on the following assumptions:
• You have configured a Chef server.
• You have an AMI that has the AWS command line tools installed and configured.
• You have the chef-client installed and included into your AMI.

First, let's look at what we are going to configure within Chef. We'll create a simple Chef cookbook that installs an Apache web server and deploys a 'Hello World' site. A Chef cookbook is a collection of recipes; a recipe is a definition of resources that should be configured on a node. This can include files, packages, permissions, and more. The default recipe for this Apache cookbook might look something like this:

#
# Cookbook Name:: apache
# Recipe:: default
#
# Copyright 2014, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#
package "httpd"

# Allow Apache to start on boot
service "httpd" do
  action [:enable, :start]
end

# Add HTML template into the web root
template "/var/www/html/index.html" do
  source "index.html.erb"
  mode "0644"
end

In this recipe we install, enable, and start the HTTPD (HTTP daemon) service. Next, we render a template for index.html and place it into the /var/www/html directory. The index.html.erb template in this case is a very simple HTML page:

<h1>Hello World</h1>

Next, the cookbook is uploaded to the Chef server. Chef offers the ability to group cookbooks into roles. Roles are useful in large-scale environments where servers within your environment might have many different roles and cookbooks might have overlapping roles. In our example, we add this cookbook to a role called 'webserver'.
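Assuming a standard Chef workstation setup, the role might be defined in a file like the following. This role file is an illustrative sketch rather than something taken from the original walkthrough.

{
  "name": "webserver",
  "chef_type": "role",
  "json_class": "Chef::Role",
  "run_list": [ "recipe[apache]" ]
}

The cookbook and role can then be uploaded from the workstation with knife:

knife cookbook upload apache
knife role from file webserver.json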
Now, when we launch EC2 instances (nodes), we can provide EC2 user data to bootstrap them by using Chef. To make this as dynamic as possible, we can use an EC2 tag to define which Chef role to apply to our node. This allows us to use the same user data script for all nodes, whichever role is intended for them. For example, a web server and a database server can use the same user data if you assign different values to the 'role' tag in EC2.

We also need to consider how our new instance will authenticate with the Chef server. We can store our private key in an encrypted Amazon S3 bucket by using Amazon S3 server-side encryption (http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html), and we can restrict access to this bucket by using IAM roles. The key can then be used to authenticate with the Chef server. The chef-client uses a validator.pem file to authenticate to the Chef server when registering new nodes. We also need to know which Chef server to pull our configuration from. We can store a prepopulated client.rb file in Amazon S3 and copy this within our user data script. You might want to dynamically populate this client.rb file depending on environment, but for our example we assume that we have only one Chef server and that a prepopulated client.rb file is sufficient. You could also include these two files into your custom AMI build. The user data would look like this:

#!/bin/bash
cd /etc/chef

# Copy Chef server private key from S3 bucket
aws s3 cp s3://s3-bucket/orgname-validator.pem orgname-validator.pem

# Copy Chef client configuration file from S3 bucket
aws s3 cp s3://s3-bucket/client.rb client.rb

# Change permissions on Chef server private key
chmod 400 /etc/chef/orgname-validator.pem

# Get EC2 instance ID from the metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Get tag with key of 'role' for this EC2 instance
ROLE_TAG=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=role" --output text)

# Get value of tag with key of 'role' as a string
ROLE_TAG_VALUE=$(echo $ROLE_TAG | awk 'NF>1{print $NF}')

# Create first_boot.json, dynamically adding the tag value as the Chef role in the run list
echo "{\"run_list\":[\"role[$ROLE_TAG_VALUE]\"]}" > first_boot.json

# Execute the chef-client using the first_boot.json config
chef-client -j first_boot.json

# Daemonize the chef-client to run every 5 minutes
chef-client -d -i 300 -s 30

As shown in the preceding user data example, we copy our client configuration files from a private S3 bucket. We then use the EC2 metadata service to get some information about the instance (in this example, the instance ID). Next, we query the Amazon EC2 API for any tags with the key of 'role' and dynamically configure a Chef run list with a Chef role of this value. Finally, we execute the first chef-client run by providing the first_boot.json options, which include our new run list. We then execute chef-client once more; however, this time we execute it in a daemonized setup to pull configuration every 5 minutes. We now have some reusable EC2 user data that we can apply to any new EC2 instances. As long as a 'role' tag is provided with a value that matches a role on the target Chef server, the instance will be configured using the corresponding Chef cookbooks.
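For completeness, launching a node that uses this user data might look like the following from the CLI. The AMI ID, key pair, security group, instance profile, and script file name are placeholders for this sketch; the instance profile is assumed to allow ec2:DescribeTags and read access to the S3 bucket used above.

# Launch the instance with the Chef bootstrap script as user data
INSTANCE_ID=$(aws ec2 run-instances --image-id ami-12345678 \
  --instance-type t2.micro --key-name MyKey \
  --security-groups MySecurityGroup \
  --iam-instance-profile Name=ChefNodeProfile \
  --user-data file://chef_bootstrap.sh \
  --query 'Instances[0].InstanceId' --output text)

# Tag the instance so the script can resolve its Chef role and environment
aws ec2 create-tags --resources $INSTANCE_ID \
  --tags Key=role,Value=webserver Key=environment,Value=dev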
Putting It All Together

The following figure shows a typical workflow from instance launch to a fully configured instance that is ready to serve traffic.

Figure 6: Example of an End-to-End Workflow (instance launch request → user data received and exposed through the metadata service → private key and client.rb downloaded from the S3 bucket → server role retrieved via describe-tags and first_boot.json configured to use the Chef role matching the tag value → configuration pulled from the Chef server and applied → bootstrap complete)

Using AWS Services to Help Manage Your Environments

In the preceding sections, we discussed tools and techniques that systems administrators and developers can use to provision EC2 instances in an automated, predictable, and repeatable manner. AWS also provides a range of application management services that help make this process simpler and more productive. The following figure shows how to select the right service for your application based on the level of control that you require.

Figure 7: AWS Deployment and Management Services

In addition to provisioning EC2 instances, these services can also help you to provision any other associated AWS components that you need in your systems, such as Auto Scaling groups, load balancers, and networking components. We provide more information about how to use these services in the following sections.

AWS Elastic Beanstalk

AWS Elastic Beanstalk allows web developers to easily upload code without worrying about managing or implementing any underlying infrastructure components. Elastic Beanstalk takes care of deployment, capacity provisioning, load balancing, auto scaling, and application health monitoring. It is worth noting that Elastic Beanstalk is not a black-box service: you have full visibility and control of the underlying AWS resources that are deployed, such as EC2 instances and load balancers. Elastic Beanstalk supports deployment of Java, .NET, Ruby, PHP, Python, Node.js, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

Elastic Beanstalk provides a default configuration, but you can extend the configuration as needed. For example, you might want to install additional packages from a yum repository, or copy configuration files that your application depends on, such as a replacement for httpd.conf to override specific settings. You can write the configuration files in YAML or JSON format and create the files with a .config file extension. You then place the files in a folder in the application root named .ebextensions. You can use configuration files to manage packages and services, work with files, and execute commands. For more information about using and extending Elastic Beanstalk, see the AWS Elastic Beanstalk Documentation (http://aws.amazon.com/documentation/elastic-beanstalk/).
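A minimal sketch of such a configuration file, saved as something like .ebextensions/01-webserver.config, is shown below. It assumes an Apache-based platform; the package, file path, and command are illustrative choices, not examples taken from this paper.

packages:
  yum:
    htop: []

files:
  "/etc/httpd/conf.d/keepalive.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      # Extra Apache settings layered on top of the default configuration
      KeepAlive On

commands:
  01_record_deploy_time:
    command: "date > /tmp/last_eb_deploy"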
AWS OpsWorks

AWS OpsWorks is an application management service that makes it easy to deploy and manage any application and its required AWS resources. With AWS OpsWorks, you build application stacks that consist of one or many layers. You configure a layer by using an AWS OpsWorks configuration, a custom configuration, or a mix of both. AWS OpsWorks uses Chef, the open source configuration management tool, to configure AWS resources. This gives you the ability to provide your own custom or community Chef recipes. AWS OpsWorks features a set of lifecycle events (Setup, Configure, Deploy, Undeploy, and Shutdown) that automatically run the appropriate recipes at the appropriate time on each instance.

AWS OpsWorks provides some AWS-managed layers for typical application stacks. These layers are open and customizable, which means that you can add additional custom recipes to the layers provided by AWS OpsWorks, or create custom layers from scratch using your existing recipes. It is important to ensure that the correct recipes are associated with the correct lifecycle events. Lifecycle events run during the following times:
• Setup – Occurs on a new instance after it successfully boots.
• Configure – Occurs on all of the stack's instances when an instance enters or leaves the online state.
• Deploy – Occurs when you deploy an app.
• Undeploy – Occurs when you delete an app.
• Shutdown – Occurs when you stop an instance.

For example, the Configure event is useful when building distributed systems, or for any system that needs to be aware of when new instances are added or removed from the stack. You could use this event to update a load balancer when new web servers are added to the stack. In addition to typical server configuration, AWS OpsWorks manages application deployment and integrates with your application's code repository. This allows you to track application versions and roll back deployments if needed. For more information about AWS OpsWorks, see the AWS OpsWorks Documentation (http://aws.amazon.com/documentation/opsworks/).

AWS CloudFormation

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. Compared to Elastic Beanstalk and AWS OpsWorks, AWS CloudFormation gives you the most control and flexibility when provisioning resources. AWS CloudFormation allows you to manage a broad set of AWS resources. For the purposes of this whitepaper, we focus on the features that you can use to bootstrap your EC2 instances.

User Data

Earlier in this whitepaper, we described the process of using user data to configure and customize your EC2 instances (see Scripting Your Own Solution). You also can include user data in an AWS CloudFormation template, which is executed on the instance once it is created. You can include user data when specifying a single EC2 instance as well as when specifying a launch configuration. The following example shows a launch configuration that provisions instances configured to be PHP web servers:

"MyLaunchConfig" : {
  "Type" : "AWS::AutoScaling::LaunchConfiguration",
  "Properties" : {
    "ImageId" : "ami-123456",
    "SecurityGroups" : [ "MySecurityGroup" ],
    "InstanceType" : "m3.medium",
    "KeyName" : "MyKey",
    "UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
      "#!/bin/bash\n",
      "yum update -y\n",
      "yum -y install httpd php php-mysql\n",
      "chkconfig httpd on\n",
      "/etc/init.d/httpd start\n"
    ] ] } }
  }
}
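Assuming the launch configuration above is embedded in a complete template (together with the Auto Scaling group, security group, and other resources it references), the stack could be created and later updated from the CLI as follows; the stack and file names here are placeholders.

# Create the stack from a local template file
aws cloudformation create-stack --stack-name web-stack \
  --template-body file://web-stack.template

# Later, roll out template changes in place
aws cloudformation update-stack --stack-name web-stack \
  --template-body file://web-stack.template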
cfn-init

The cfn-init script is an AWS CloudFormation helper script that you can use to specify the end state of an EC2 instance in a more declarative manner. The cfn-init script is installed by default on Amazon Linux and AWS-supplied Windows AMIs. Administrators can also install cfn-init on other Linux distributions and then include this into their own AMI if needed. The cfn-init script parses metadata from the AWS CloudFormation template and uses the metadata to customize the instance accordingly. The cfn-init script can do the following:
• Install packages from package repositories (such as yum and apt-get)
• Download and unpack archives such as zip and tar files
• Write files to disk
• Execute arbitrary commands
• Create users and groups
• Enable/disable and start/stop services

In an AWS CloudFormation template, the cfn-init helper script is called from the user data. Once it is called, it will inspect the metadata associated with the resource passed into the request and then act accordingly. For example, you can use the following launch configuration metadata to instruct cfn-init to configure an EC2 instance to become a PHP web server (similar to the preceding user data example):

"MyLaunchConfig" : {
  "Type" : "AWS::AutoScaling::LaunchConfiguration",
  "Metadata" : {
    "AWS::CloudFormation::Init" : {
      "config" : {
        "packages" : {
          "yum" : {
            "httpd" : [],
            "php" : [],
            "php-mysql" : []
          }
        },
        "services" : {
          "sysvinit" : {
            "httpd" : {
              "enabled" : "true",
              "ensureRunning" : "true"
            }
          }
        }
      }
    }
  },
  "Properties" : {
    "ImageId" : "ami-123456",
    "SecurityGroups" : [ "MySecurityGroup" ],
    "InstanceType" : "m3.medium",
    "KeyName" : "MyKey",
    "UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
      "#!/bin/bash\n",
      "yum update -y aws-cfn-bootstrap\n",
      "/opt/aws/bin/cfn-init --stack ", { "Ref" : "AWS::StackId" },
      " --resource MyLaunchConfig",
      " --region ", { "Ref" : "AWS::Region" }, "\n"
    ] ] } }
  }
}

For a detailed walkthrough of bootstrapping EC2 instances by using AWS CloudFormation and its related helper scripts, see the Bootstrapping Applications via AWS CloudFormation whitepaper (https://s3.amazonaws.com/cloudformation-examples/BoostrappingApplicationsWithAWSCloudFormation.pdf).

Using the Services Together

You can use the services separately to help you provision new infrastructure components, but you also can combine them to create a single solution. This approach has clear advantages. For example, you can model an entire architecture, including networking and database configurations, directly into an AWS CloudFormation template, and then deploy and manage your application by using AWS Elastic Beanstalk or AWS OpsWorks. This approach unifies resource and application management, making it easier to apply version control to your entire architecture.

Managing Application and Instance State

After you implement a suitable process to automatically provision new infrastructure components, your system will have the capability to create new EC2 instances, and even entire new environments, in a quick, repeatable, and predictable manner. However, in a dynamic cloud environment you will also need to consider how to remove EC2 instances from your environments and what impact this might have on the service that you provide to your users. There are a number of reasons why an instance might be removed from your system:
• The instance is terminated as a result of a hardware or software failure.
• The instance is terminated as a response to a "scale down" event to remove instances from an Auto Scaling group.
• The instance is terminated because you've deployed a
new version of your software stack by using bluegreen deployments (instances running the older version of the application are terminated after the deployment) To handle the removal of instance s without impacting your service you need to ensure that your application instances are stateless This means that all system and application state is stored and managed outside of the instances themselves There are many forms of system and application state that you need to consider when designing your system as shown in the following table State Examples Structured application data Customer orders Unstructured application data Images and documents User session data Position in the app; contents of a shopping cart Application and system logs Access logs; security audit logs Application and system metrics CPU load; network utilization Running stateless application instances means that no instance in a fleet is any different from its counterparts This offers a number of advantages: • Providing a robust service – Instances can serve any request from any user at any time I f an instance fails subsequent requests can be routed to alternative instance s while the failed instance is replaced This can be achieved with no interruption to service for any of you r users • Quicker less complicated bootstrapping – Because your instances don’t contain any dynamic state your bootstrapping process needs to concern itself only with provision ing your system up to the application layer There is no need to try to ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 28 of 32 recover state and data which is often large and therefore can significantly increase bootstrapping times • EC2 instances as a unit of deployment – Because all state is maintained off of the EC2 instances themselves you can replace the instance s while orchestrating application deployments This can simplify your deployment processes and allow new deployment techniques such as bluegreen deployments The following section describes each form of application and instance state and outlines some of the tools and techniques that you can use to ensure it is store d separately and independently from the application instances themselves Structured Application Data Most applications produce structured textual data such as customer orders in an order management system or a list of web pages in a CMS In most cases this kind of content is best stored in a database Depending on the structure of th e data and the requirements for acce ss speed and concurrency you m ight decide to use a relational databas e management system or a NoSQL data s tore In either case it is important to store this content in a durable highly available system away from the instances running your application This will ensure that the service you provide your users will not be interrupted or their data lost even in the event of an instance failure AWS offers both relational and NoSQL managed databases that you can use as a persistence layer for your applications We discuss these database options in the following sections Amazon RDS Amazon Relational Database Service (Amazon RDS) is a web service that makes it easy to set up operate and scale a relational database in the cloud It allows you to continue to work with the relational database engines you’re familiar with including MySQL Oracle Microsoft SQL Server or PostgreSQL This means that the code applications and operational tools that you are already using can be used with Amazon RDS Amazon RDS also 
handles time consuming database man agement tasks such as data backups recover y and patch management which frees your database administrators to pursue higher value application development or database refinements In addition Amazon RDS Multi AZ deployments increase your database availability and protect your da tabase against unplanned outages This give s your service an additional level of resiliency Amazon DynamoDB Amazon Dynamo DB is a fully managed NoSQL database service offering both document (JSON) and key value data models DynamoDB has been designed to provide consistent single digit m illisecond latency at any scale making it ideal for high ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 29 of 32 traffic applications with a requirement for low latency data access DynamoDB manage s the scaling and partitioning of infrastructure on your behalf When you creat e a table you specify how much request capacity you require If your throughput requirements change you can update this capacity as needed with no impact on service Unstructured Application Data In addition to the structured data created by most appli cations some systems also have a requirement to receive and store unstructured resources such as documents images and other binary data For example t his might be the case in a CMS where an editor upload s images and PDFs to be hosted on a website In most cases a database is not a suitable storage mechanism for this type of content Instead you can use Amazon Simple Storage Service (Amazon S3) Amazon S3 provides a highly available and durable object st ore that is well suited to storing this kind of data Once your data is stored in Amazon S3 you have the option of serving these files directly from Amazon S3 to your end users over HTTP(S) bypassing the need for these requests to go to your application instances User Session Data Many applications produce information associated with a user ’s current position within an application For example as user s browse an e commerce site they m ight start to add various items into their shopping basket This information is known as session state It would be frustrating to users if the items in their baskets disappeared without notice so it’s important to store th e session state away from the application instances themselves This ensure s that baskets remain populated even if users ’ requests are directed to an alternative instance behind your load balancer or if t he current instance is removed from service for any reason The AWS platform offers a number of services that you can use to provide a highly available session store Amazon ElastiCache Amazon ElastiCache makes it easy to deploy operate and scale an in memory data store in AWS Inmemory data store s are ideal for storing transient session data due to the low latency these technologies offer ElastiCache supports two open source in memory caching engines: • Memcached – A widely adopted memory object caching system ElastiCache is protocol compliant with Memcached which is already supported by many open source applications as an in memory sessio n storage platform ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 30 of 32 • Redis – A popular open source inmemory key value store that supports data structures such as sorted sets and lists ElastiCache supports master/ slave replication and Multi AZ which you can use to achieve cross AZ redundancy In addition to the in memory data stores offered by Memcached and 
Redis on ElastiCache some applications require a more durable storage platform for their session data For these applications Amazon DynamoDB offers a low latency highly scalable and durable solution DynamoDB replicates data across three facilities in an AWS region to provide fault tolerance in the event of a server failure or Availability Zone outage To help customers easily integrate DynamoDB as a session store within their applications AWS provides pre built DynamoDB session handlers for both Tomcat based Java applications9 and PHP applications 10 System Metrics To properly support a production system operational teams need access to system metrics that indicate the overall health of the system and the relative load under which it’s currently operating In a traditional environment this information is often obtained by logging into one of the instances and looking at OS level metrics such as system load or CPU utilization However in an environment where you have multiple instances running and these instances can appear and disappear at any moment this approach soon becomes ineffective and difficult to manage Instead you should push this data to an external monitoring system for collection and analysis Amazon CloudWatch Amazon CloudWatch is a fully managed monitoring service for AWS resources and the applications that you run on top of them You can use Amazon CloudWatch to collect and store metrics on a durable platform that is separate and independent from your own infrastructure This means that the metrics will be available to your operational teams even when the instances themselves have been terminated In addition to tracking metrics you can use Amazon CloudWatch to trigger alarms on the metrics when they pass certain thresholds You can use the alarms to notify your teams and to initiat e further automated actions to deal with issues and bring your system back within its normal operating tolerances For example an automated action could initiate an Auto Scaling policy to increase or decrease the number of instances in an Auto Scaling group 9 http://docsawsamazoncom/AWSSdkDocsJava/latest/DeveloperGuide/java dgtomcat session managerhtml 10 http://docsawsamazoncom/aws sdkphp/guide/latest/feature dynamodb session handlerhtml ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 31 of 32 By default Amazon CloudWatch can monitor a broad range of metrics across your AWS resources That said it is also important to remember that AWS doesn’t have access to the OS or applicatio ns running on your EC2 instances Because of this Amazon CloudWatch cannot automatically monit or metrics that are accessible only within the OS such as memory and disk v olume utilization If you w ant to monitor OS and application metrics by using Amazon CloudWatch you can publish your own metrics to CloudWatch through a simple API request With t his approach you can manage these metrics in the same way that you manage other native metrics including configuring alarms and associated actions You can use the EC2Config service11 to push additional OS level operating metrics into CloudWatch without the need to manually code against the CloudWatch APIs If you are running L inux AMIs you can use the set of sample Perl scripts12 provided by AWS that demonstrate how to produce and consume Amazon CloudWatch custom metrics In addition to CloudWatch you can use third party monitoring solutions in AWS to extend your monitoring capabilities Log Management Log data is used by your operational team to 
better understand how the system is performing and to diagnose any issues that might arise Log data can be produced by the application itself but also by system components lower down in your stack This might include anything from access logs produced by your w eb server to security audit logs produced by the operating system itself Your operations team need s reliable and timely access to these logs at all times regardless of whether the instance that originally produced the log is still in existence For this reason it’s important to move log data from the instance to a mor e durable storage platform as close to real time as possible Amazon CloudWatch Logs Amazon CloudWatch Logs is a service that allows you to quickly and easily move your system and applicati on logs from the EC2 instances them selves to a centrali zed durable storage platform ( Amazon S3) This ensures that this data is available even when the instance itself has been terminated You also have control over the log retention policy to ensure that all logs are retained for a specified period of time The CloudWat ch Logs service provides a log management agent that you can install onto your EC2 instances to manage the ingestion of your logs into the log management service 11 http://docsawsamazoncom/AWSEC2/latest/WindowsGuide/UsingConfig_Wi nAMIhtml 12 http://docsawsamazoncom/AmazonCloudWatch/latest/DeveloperGuide/mon scripts perlhtml ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 32 of 32 In addition to moving your logs to durable storage the CloudWatch Logs service also allows you to monitor your logs in near real time for specific phrases values or patterns (metrics) You can use t hese metrics in the same way as any other CloudWatch metric s For example you can create a CloudWatch alarm on the number of errors being thrown by your application or when certain suspect actions are detected in your security audit logs Conclusion This whitepaper showed you how to accomplish the following: • Quickly provision new infrastructure components in an automated repeatable and predictable manner • Ensure that no EC2 instance in your environment is unique and that all instances are stateless and therefore easily replaced Having these capabilities in place allows you to think differently about how you provision and manage infrastructure components when compared to traditional environments Instead of manually building each instance and maintaining consistency through a set of operational checks and balances you can treat your infrastructure as if it w ere software By specifying the desired end state of your infrastructure through the software based tools and process es described in this whitepaper you can fundamentally change the way your infrastructure is managed and you can take full advantage of the dynamic elastic and automated nature of the AWS cloud Further Reading • AWS Elastic Beanstalk Documentation • AWS OpsWorks Documentation • Bootstrapping Applications via AWS CloudFormation whitepaper • Using Chef with AWS CloudFormation • Integrating AWS CloudFormation with Puppet
|
General
|
consultant
|
Best Practices
|
Maximizing_Value_with_AWS
|
ArchivedMaximizing Value with AWS Achieve Total Cost of Operation Benefits Using Cloud Computing February 2017 This paper has been archived For the latest technical content about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 Create a Culture of Cost Management 2 Driving Cost Optimization 2 Total Cost of Operation 4 Start with an Understanding of Current Costs 4 Total Cost of Migration 5 Select the Right Plan for Specific Workloads 6 Employ Best Practices 7 Determine TopLine Business Metrics 8 Stay on Top of Instance Utilization 8 Distribute Daily Spending Updates 8 Every Engineer Can Be a Cost Engineer 9 Build Automation into Services 9 Implement a Reservation Process 10 Conclusion 10 Contributors 10 Archived Abstract Amazon Web Services (AWS) provides rapid access to flexible and low cost IT resources With cloud computing public sector organizations no longer need to make large upfront investments in hardware or spend time and money on managing infrastructure The goal of this whitepaper is to help you gain insight into some of the financial considerations of operating a cloud IT environment and learn how to maximize the overall value of your decision to adopt AWS ArchivedAmazon Web Services – Maximizing Value with AWS Page 1 Introduction A core reason organizations adopt a cloud IT infrastructure is to save money The traditional approach of analyzing Total Cost of Ownership no longer applies when you move to the cloud Cloud services provide the opportunity for you to use only what you need and pay only for what you use We refer to this new paradigm as the Total Cost of Operation You can use Total Cost of Operation (TCO) analysis methodologies to compare the costs of owning a traditional data center with the costs of operating your environment using AWS Cloud services Eliminate Upfront Sunk Costs Organizations considering a transition to the cloud are often driven by their need to become more agile and innovative The traditional capital expenditure ( CapEx ) funding model makes it difficult to quickly test new ideas The AWS Cloud model gives you the agility to quickly spin up new instances on AWS and the ability to try out new services without investing in large upfront sunk costs (costs that have already been incurred and can’t be recovered) If you are using the cloud you can return CapEx to the general fund and invest in activities that better serve your constituents AWS helps lower customer costs through its “pay only for what you use” pricing model To get started it is critical to understand how to measure value improve the economics of a migration project 
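If you want to try this from the command line, a hedged example of launching a single free-tier-eligible instance might look like the following; the AMI ID, key pair name, and instance ID are placeholders, and the exact free-tier terms should be checked against the current offer.

# Launch one t2.micro instance (free-tier eligible at the time of writing)
aws ec2 run-instances --image-id ami-12345678 \
  --instance-type t2.micro --count 1 --key-name MyKey

# Stop it when you are done experimenting to avoid unnecessary usage
aws ec2 stop-instances --instance-ids i-1234567890abcdef0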
manage migration costs and expectations through largescale IT transformations and optimize the cost of operation Launch an Amazon EC2 Instanc e for Free The AWS Free Tier lets you gain free hands on experience with AWS products and services AWS Free Tier includes 750 hours of Linux and Windows t2micro instances each month for one year To stay within the Free Tier use only EC2 Micro instance s View AWS Free Tier Details » ArchivedAmazon Web Services – Maximizing Value with AWS Page 2 Create a Culture of Cost Management All teams can help manage costs and cost optimization should be everyone’s responsibility There are many variables that affect cost with different levers that can be pulled to drive operational excellence By using resources like the AWS Trusted Advisor dashboard and the AWS Billing Cost Explorer tool you can get realtime feedback on costs and usage that puts you on the road to operational excellence Put data in the hands of everyone – This reduces the feedback loop between the information/data and the action that is required to correct usage and sizing issues Enact policies and evangelize – Define and implement best practices to drive operational excellence Spend time training – Educate staff on the items that affect cost and the steps they can take to eliminate waste Create incentives for good behavior – Have friendly competitions across teams to encoura ge cost efficiencies throughout the organization To achieve true success cost optimization must be come a cultural norm in your organization Get everyone involved Encourage everyone to track their cost optimization daily so they can establish a habit of efficiency and see the daily impact over time of their cost savings Although everyone shares the ownership of cost optimization someone should be tasked with cost optimization as a primary responsibility Typically this is someone from either t he finance or IT department who is responsible for ensuring that cost controls are monitored so that business goals can be met The “cost optimization engineer” makes sure that the organization is positioned to derive optimal value out of the decision to adopt AWS Driving Cost Optimization By moving to the consumptionbased model of the cloud you can increase innovation with in the organization However one of the biggest challenges of the consumptionbased model is the lack of predictability ArchivedAmazon Web Services – Maximizing Value with AWS Page 3 You need to balance agility and innovation against cost As multiple teams spin up instances to test new ideas it is important to control and optimize AWS spending as cloud usage increases Don’t target cost savings as the end goal Instead optimize spending by focus ing on business growth opportunities that can result from innovative ideas The following table contrasts the traditional funding model against the cloud funding model Funding Model Characteristics Traditional Data Center A few big purchase decisions are made b y a few people every few years Typically o verprovision ed as a result of planning up front for spikes in usage Cloud Decentrali zed spending power Small decisions made by a lot of people Resources are spun up and down as new services are designed and then decommissioned Cost ramifications felt by the organization as a whole are closely monitored and tracked Give stakeholders access to your spending fundamentals The data is there Share it By using dashboards you can quickly highlight spending habits across your teams Actively manage workloads Turn services on and off as 
needed rather than runn ing them 24/ 7 Eliminate surprises Provide visibility into costs by making dashboard review a daily habit Make cost optimization a joint effort Have “spenders” (those spinning up resources) work closely with “watchers” (finance and leadership who can track to business goals) Allocate charges (or show departmental usage) to organizations actually using services This provides insight into each group’s impact on business goals Savings Know who uses services and how they use services To select the best rate evaluate pricing options that best meet the workload Tie spending to business metrics Determine what gets measured track usage and identify areas for improvement ArchivedAmazon Web Services – Maximizing Value with AWS Page 4 Use innovative approaches to optimize spend Consider policies such as “default off” for test and dev environments as opposed to 24/7 or even “on during business hours” Total Cost of Operation A pay asyougo model reduces investments in large capital expenditures In addition you can reduce the operating expense (OpEx) costs involved with the management and maintenance of data This frees up budget allowing you to quickly act on innovative initiatives that can’t be easily pursued when managing CapEx A clear understanding of your current costs is an important first step of a cloud migration journey This provides a baseline for defining the migration model that delivers optimal cost efficiency Our online total cost of ownership calculators allow you to estimate cost savings when using AWS These calculators provide a detailed set of reports that you can use in executive presentations The calculators also give you the option to modify assumptions so you can best meet your business needs Ready to find out how much you could be saving in the AWS Cloud? Take a look at the AWS Total Cost of Ownership Calculator Start with an Understanding of Current Costs Evaluate the following when calculating your onpremises computing costs: Labor How much do you spend on maintaining your environment? Network How much bandwidth do you need? What is your bandwidth peak to average ratio? What are you assuming for network gear? What if you need to scale beyond a single rack? Capacity How do you plan for capacity? What is the cost of over provisioning for peak capacity? What if you need less capacity? Anticipating next year? ArchivedAmazon Web Services – Maximizing Value with AWS Page 5 Availability/Power Do you have a disaster recovery (DR) facility? What was your power utility bill for your data centers last year? Have you budgeted for both average and peak power requirements? Do you have separate costs for cooling/ HVAC? Are you accounting for 2N (parallel redundancy) power? If not what happens when you have a power issue to your rack? Servers What is your average server utilization? How much do you overprovision for peak load? What is the cost of overprovisioning? Space Will you run out of data center space? When is your lease up? 
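The questions above feed a simple baseline model. The following sketch, with entirely illustrative numbers and category names that are assumptions rather than figures from this whitepaper, shows one way to annualize those on-premises cost categories so they can be compared against an AWS estimate from the TCO calculator.

```python
# Illustrative only: plug in your own figures for each cost category.
onprem_annual_costs = {
    "labor": 250_000,            # admin and maintenance staff
    "network": 40_000,           # bandwidth and network gear amortization
    "servers": 120_000,          # hardware amortized over a refresh cycle
    "power_cooling": 30_000,     # utility bill plus HVAC, including 2N redundancy
    "facility_dr": 60_000,       # data center space, lease, and DR site
}

overprovision_factor = 0.35      # share of capacity bought for peak but idle on average

baseline = sum(onprem_annual_costs.values())
idle_capacity_cost = onprem_annual_costs["servers"] * overprovision_factor

print(f"Annual on-premises baseline: ${baseline:,.0f}")
print(f"Of which roughly ${idle_capacity_cost:,.0f} pays for idle peak capacity")
```

A baseline like this is only a starting point; the AWS Total Cost of Ownership Calculator lets you refine the assumptions and produce reports suitable for executive presentations.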
Total Cost of Migration To achieve the maximum benefits of the AWS Cloud it is important to understand and plan for the financial costs associated with migrating workloads to AWS While there isn’t yet a simple calculation for the total cost of migration (TCM) it is possible to estimate the cost and duration of the migration phase based on the experiences of others Some of the inputs for TCM include the following : IT staff will need to acquire new skills New business processes will need to be defined Existing business processes will need to be modified Cost of discovery and migration tooling needs to be calculated Duplicate environments will need to run until one is decommissioned Penalties could be incurred for breaking data center colocation or licensing agreements AWS uses the term migration bubble to describe the time and cost of moving applications and infrastructure from onpremises data centers to the AWS Cloud Altho ugh the cloud can provide significant savings certain costs may increase as you move into the migration bubble It is important to understand the costs associated with migration so that you can work to shrink the size of the migration bubble and accomplish the migration in a quick and sustainable manner ArchivedAmazon Web Services – Maximizing Value with AWS Page 6 Figure 1: Migration bubble To realize cost savings it is important to plan your migration to coincide with hardware retirement license and maintenance expiration and other opportunities to be frugal with your resources In addition the savings and cost avoidance associated with a full allin migration to AWS can help you fund the migration bubble You can even shorten the duration of the migration by applying more resources when appropriate For more information read the AWS Cloud Adoption Framework whitepaper Select the Right Plan for Specific Workloads Depending on your needs you can choose among three different ways to pay for Amazon Elastic Compute Cloud (EC2) instances: OnDemand Reserved Instances and Spot Instances You can also pay for Dedicated Hosts that provide you with EC2 instance capacity on physical servers dedicated for your use ArchivedAmazon Web Services – Maximizing Value with AWS Page 7 Purchasing Options Description Recommended for OnDemand Instances Pay for compute capacity by the hour with no long term commitments or upfront payment s Increase or decrease compute capacity depending on the demands of your application Only pay the specified hourly rate for the instances you use Users that want the low cost and flexibility of Amazon EC2 without any upfront payment or long term commitment Applications with short term spiky or unpredictable workloads that cannot be interrupted Applications being developed on AWS the first time Reserved Instances Can provide significant savings compared to using On Demand instances Sunk cost but the longer term commitment delivers a lower hourly rate Applications that have been in use for years and that you plan to continue to use Applications with steady state or predictable usage Applications that require reserved capacity Users who want to make upfront payments to further reduce their total computing costs Spot Instances Provide the ability to purchase compute capacity with no upfront commitment and lower hourly rates Allow you to specify the maximum hourly price that yo u are willing to pay to run a particular instance type Applications that have flexible start and end times Applications that are only feasible at very low compute prices Users with urgent 
computing needs for large amounts of additional capacity Dedi cated Hosts Physical EC2 server s with instance capacity fully dedicated for your use Help reduce costs by using existing server bound software licenses Can provide up to a 70% discount compared to the On Demand price Users who want to save money by using their own per socket or per core software in Amazon EC2 Users who deploy instances using configurations that help address corporate compliance and regulatory requirements Learn more about Amazon EC2 Instance Purchasing Options Employ Best Practices As your organization transitions to the cloud and you pilot new cloud initiatives be careful to avoid common pitfalls The best practices presented below can help you ArchivedAmazon Web Services – Maximizing Value with AWS Page 8 Determine TopLine Business Metrics To fully benefit from the cloud it is important to map business goals to specific metrics so that you can evaluate where changes need to be made Define the metrics that provide the most us eful data to track your service such as user subscriber customer access API calls and page views Dashboards are an excellent source of information and provide instant feedback on how services are delivering against specific goal s Stay on Top of Instance Utilization Oversight is an excellent practice to make sure that you are not overspending Monitoring tools provide visibility control and optimization Post DevOps use dashboards to monitor how services are used as well as your current spending profile If your monthly bill goes up make sure it is for the right reason (business growth) and not the wrong reason (waste) Choose a cadence and regularly measure results for services that have moved to the cloud Use tools that track performance and usage to reduce cost overruns It only takes five minutes to resize – up or down – to ensure that the service is providing the desired performance level Keep track of running instances Optimize the size of servers and adjust as needed rather than overprovisioning from the start If an instance is underutilized determine if you still need the instance if it can be shut down or if it needs to be resized As AWS introduces new technology find and then upgrade your legacy instances so that you can lower costs This can provide substantial savings over time Distribute Daily Spending Updates Make usage reviews a daily habit for all team members Provide weekly reporting to elevate visibility and drive accountability across large complex organization s Have teams review bills associated with their projects to identify ways to optimize for costs during dev/test as well as production And to create an ArchivedAmazon Web Services – Maximizing Value with AWS Page 9 atmosphere of friendly competition create a leaderboard that highlights teams with the best cost efficiencies Every Engineer Can Be a Cost Engineer Engineers should design code so that instances only spin up when needed and spin down when not in use There is no need to have AWS services running 24/ 7 if they are only used during standard work hours Turn off underutilized instances that you discover using dashboards and reports Innovate Spin up instances to test new ideas If the ideas work keep the instance for further refinement If not spin it down Build sizing into architecture Use tagging to help with cost allocation Tagging allows you to track the users of particular instances optimize usage and bill back or show charges by department or user Schedule dev/test Eliminate waste of resources not in use 
Eliminate waste – "Default = off" is a good best practice.

Build Automation into Services
Automation can accelerate the migration process.
Automate processes so that they turn off when not in use, to eliminate waste.
Automate alerts to show when thresholds have been exceeded.
Configuration management – With automation, every machine defined in code spins up or down as needed to drive performance and cost optimization. Set alerts on old snapshots, oversized resources, and unattached volumes, and then automate and rebalance for optimal sizing.
Eliminate troubleshooting – If an instance goes down, spin up a new one. Stop wasting time on unproductive activities.

Implement a Reservation Process
Appoint someone to own the reservation process (typically a finance person). Buy on a regular schedule, but continually track usage and modify reservations as needed. This can result in big savings over time. See How to Purchase Reserved Instances for more information.

Conclusion
Moving business applications to the AWS Cloud helps organizations simplify infrastructure management, deploy new services faster, provide greater availability, and lower costs. Having a clear understanding of your existing infrastructure and migration costs, and then projecting your savings, will help you calculate payback time, project ROI, and maximize the value your organization gains from migrating to AWS. AWS delivers a mature set of services specifically designed for the unique security, compliance, privacy, and governance requirements of large organizations. With a technology platform that is both broad and deep, professional services and support organizations, robust training programs, and an ecosystem that is tens of thousands of partners strong, AWS can help you move faster and do more.

Contributors
The following individuals and organizations contributed to this document:
Blake Chism, Practice Manager, AWS Public Sector SalesVar
Carina Veksler, Public Sector Solutions, AWS Public Sector SalesVar
|
General
|
consultant
|
Best Practices
|
Microservices_on_AWS
|
ArchivedImplementing Microservice s on AWS First Published December 1 2016 Updated Novembe r 9 2021 This version has been archived For the latest version of this document refer to https://docsawsamazoncom/whitepapers/latest/ microservicesonaws/microservicesonawspdfArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not pa rt of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 5 Microservices architecture on AWS 6 User interface 6 Microservices 7 Data store 9 Reducing operational complexity 10 API implementation 11 Serverless microservices 12 Disaster recovery 14 Deploying Lambda based applications 15 Distributed systems components 16 Service discovery 16 Distributed data management 18 Config uration management 21 Asynchronous communication and lightweight messaging 21 Distributed monitoring 26 Chattiness 33 Auditing 34 Resources 37 Conclusion 38 Document Revisions 39 Contributors 39 ArchivedAbstract Microservices are an architectural and organizational approach to software development created to speed up deployment cycles foster innovation and ownership improve maintainability and scalability of software applications and scale organizations deliver ing software and services by using an agile approach that helps teams work independently With a microservices approach software is composed of small services that communicate over well defined application programming interface s (APIs ) that can be deploye d independently These services are owned by small autonomous teams This agile approach is key to successfully scale your organization Three common patterns have been observe d when AWS customers build microservices: API driven event driven and data str eaming This whitepaper introduce s all three approaches and summarize s the common characteristics of microservices discuss es the main challenges of building microservices and describe s how product teams can use Amazon Web Services (AWS) to overcome these challenges Due to the rather involved nature of various topics discussed in this whitepaper including data store asynchronous communication and service discovery the reader is encouraged to consider specific requirements and use cases of their applications in addition to the provided guidance prior to making architectural choices ArchivedAmazon Web Services Implementing Microservices on AWS 5 Introduction Microservices architectures are not a completely new approach to software engineering but rather a combination of various successful and proven concepts such as: • Agile software development • Service oriented architectures • APIfirst design • Continuous integration/ continuous delivery (CI/CD) In many cases design patterns of the Twelve Factor App are used for microservices This whitepaper first describe s different aspects of a highly scalable fault tolerant microservices architecture (user interface 
microservices implementation and data store) and how to build it on AWS using container technologies It then recommend s the AWS services for implementing a typical serverless microservices architecture to reduce operational complexity Serverless is defined as an operational model by the following tenets: • No infrastructure to provision or manage • Automatically scaling by unit of consumption • Pay for value billing model • Builtin availability and fault tolerance Finally th is whitepaper covers the overall system and discusses the cross service aspects of a microservices architecture such as distributed monitoring and auditing data consistency and asynchronous communication This whitepaper only focus es on workloads running in the AWS Cloud It doesn’t cover hybrid scenarios or migration strategies For more information about migration refer to the Container Migrat ion Methodology whitepaper ArchivedAmazon Web Services Implementing Microservices on AWS 6 Microservices architecture on AWS Typical monolithic applications are built using different layers —a user interface (UI) layer a business layer and a persistence layer A central idea of a microservices architecture is to split functionalities into cohesive verticals —not by technological layers but by implementing a specifi c domain The following f igure depicts a referen ce architecture for a typical microservices application on AWS Typical microservices application on AWS User interface Modern web applications often use JavaScript frameworks to implement a single page application that communicates with a representational state transfer (REST) or RESTful ArchivedAmazon Web Services Implementing Microservices on AWS 7 API Static web content can be served using Amazon Simple Storage Service (Amazon S3) and Amazon CloudFront Because clients of a microservice are served from the closest edge location and get responses either from a cache or a proxy server with optimized connections to the origin latencies can be significantly reduced However microservices running close to each other don’t benefit from a content delivery network In some cases this approach might actually add additional latency A best practice is to implement other caching mechanisms to reduce chattiness and minimize latencies For more information refer to the Chattiness topic Microservices APIs are the front door of microservices which means that APIs serve as the entry point for applications logic behind a set of programmatic interfaces typically a REST ful web services API This API accepts and proces ses calls from clients and might implement functionality such as traffic management request filtering routing caching authentication and authorization Microservices implementation AWS has integrated building blocks that support the development of microservices Two popular approaches are using AWS Lambda and Docker containers with AWS Fargate With AWS Lambda you upload your code and let Lambda take care of everything required to run and scale the implementatio n to meet your actual demand curve with high availability No administration of infrastructure is needed Lambda supports several programming languages and can be invok ed from other AWS services or be called directly from any web or mobile application One of the biggest advantages of AWS Lambda is that you can move quickly: you can focus on your business logic because security and scaling are managed by AWS Lambda’s opinionated approach drives the scalable platform A common approach to reduce operational efforts for 
deployment is container based deployment Container technologies like Docker have increased in popularity in the last few years due to benefits like portability productivity and efficiency The learning curve with containers can be steep and you have to think about security fixes for your Docker images and monitoring Amazon Elastic Container Service (Amazon ECS ) and Amazon ArchivedAmazon Web Services Implementing Microservices on AWS 8 Elastic Kubernetes Service (Amazon EKS ) eliminate the need to install operate and scale your own cluster management infrastructure With API calls you can launch and stop Docker enabled applications query the complete state of your cluster and access many familiar features like security groups Load Balancing Amazon Elastic Block Store (Amazon EBS) volumes and AWS Identity and Access Management (IAM) roles AWS Fargate is a serverless compute engine for containers that works with both Amazon ECS and Amazon EKS With Fargate you no longer have to worry about provisioning enough compute resources for your container applications Fargate can launch tens of thousands of containers and easily scale to run your most mission critical applications Amazon ECS supports container placement strategies and constraints to customize how Amazon ECS places and ends tasks A task placement constraint is a rule that is considered during task placement You can associate attributes which are essentially keyvalue pairs to your container instances and then use a constraint to pl ace tasks based on these attributes For example you can use constraints to place certain microservices based on instance type or instance capability such as GPU powered instances Amazon EKS runs up todate versions of the open source Kubernetes softwar e so you can use all the existing plugins and tooling from the Kubernetes community Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment whether running in on premises data centers or public clouds Amazon EKS integrates IAM with Kubernetes enabling you to register IAM entities with the native authentication system in Kubernetes There is no need to manually set up credentials for authenticating with the Kubernetes control plane The IAM integration enable s you to use IAM to directly authenticate with the control plane itself a nd provide fine granular access to the public endpoint of your Kubernetes control plane Docker images used in Amazon ECS and Amazon EKS can be stored in Amazon Elastic Container Registry (Amazon ECR ) Amazon ECR eliminates the need to operate and scale the infrastructure required to power your container registry Continuous integration and continuous delivery (CI/C D) are best practice s and a vital part of a DevOps initiative that enables rapid software changes while maintaining system stability and security However this is out of scope for this whitepaper For m ore ArchivedAmazon Web Services Implementing Microservices on AWS 9 information refer to the Practicin g Continuous Integration and Continuous Delivery on AWS whitepaper Private links AWS PrivateLink is a highly available scalable technology that enables you to privately connect your virtual private cloud (VPC) to supported AWS services services hosted by other AWS accounts (VPC endpoi nt services) and supported AWS Marketplace partner services You do not require an internet gateway network address translation device public IP address AWS Direct Connect connection or VPN connection to communicate with the service 
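As a minimal sketch of the consumer side of this pattern, the following Python (boto3) example creates an interface VPC endpoint for a hypothetical PrivateLink endpoint service; the service name and all resource IDs are placeholders rather than values from this whitepaper.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface endpoint in the consumer VPC that points at the
# provider's endpoint service (typically fronted by a Network Load Balancer).
# All identifiers below are placeholders for illustration only.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)

print(response["VpcEndpoint"]["VpcEndpointId"])
```

Once the endpoint is available, the consuming microservice reaches the provider through the endpoint's private addresses instead of the public internet.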
Traffic between your VPC and the service does not leave the Amazon network. Private links are a great way to increase the isolation and security of a microservices architecture. For example, a microservice could be deployed in a totally separate VPC, fronted by a load balancer, and exposed to other microservices through an AWS PrivateLink endpoint. With this setup, the network traffic to and from the microservice never traverses the public internet. One use case for such isolation is regulatory compliance for services handling sensitive data, such as PCI, HIPAA, and EU/US Privacy Shield workloads. Additionally, AWS PrivateLink allows connecting microservices across different accounts and Amazon VPCs with no need for firewall rules, path definitions, or route tables, simplifying network management. Utilizing PrivateLink, software as a service (SaaS) providers and ISVs can offer their microservices-based solutions with complete operational isolation and secure access as well.

Data store
The data store is used to persist data needed by the microservices. Popular stores for session data are in-memory caches such as Memcached or Redis. AWS offers both technologies as part of the managed Amazon ElastiCache service. Putting a cache between application servers and a database is a common mechanism for reducing the read load on the database, which in turn may free up resources to support more writes. Caches can also improve latency.
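To make the caching pattern concrete, here is a small, illustrative sketch that stores session data in a Redis-compatible ElastiCache cluster using the open-source redis-py client. The client choice, endpoint hostname, and TTL are assumptions of this sketch, not prescriptions from the whitepaper.

```python
import json
from typing import Optional

import redis  # open-source redis-py client; an assumption of this sketch

# Placeholder ElastiCache for Redis endpoint; replace with your cluster address.
cache = redis.Redis(host="sessions.example.use1.cache.amazonaws.com", port=6379)

def put_session(session_id: str, data: dict, ttl_seconds: int = 1800) -> None:
    # Store the session as JSON and let it expire after 30 minutes of inactivity.
    cache.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def get_session(session_id: str) -> Optional[dict]:
    raw = cache.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```

Reads served from the cache never reach the database, which is what reduces read load and latency in this pattern.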
architecture previously described in this whitepaper is already using managed services but Amazon Elastic Compute Cloud (Amazon EC2 ) instances still need to be managed The operational efforts needed to run maintain and monitor microservices can be further reduced by using a fully serverless architecture ArchivedAmazon Web Services Implementing Microservices on AWS 11 API implementation Architecting deploying monitoring continuously improving and maintaining an API can be a time consuming task Sometimes different versions of APIs need to be run to assure backward compatibility for all clients The different stages of the development cycle ( for example development testing and production) further multiply operational efforts Authorization is a critical feature for all APIs but it is us ually complex to build and involves repetitive work When an API is published and becomes successful the next challenge is to manage monitor and monetize the ecosystem of thirdparty developers utilizing the APIs Other important features and challenges include throttling requests to protect the backend services caching API responses handling request and response transformation and generating API definitions and documentation with tools such as Swagger Amazon API Gateway addresses those challenges and reduces the operational complexity of creating and maintaining RESTful APIs API Gateway allows you to create your APIs programmatically by importing Swagger definitions using either the AWS API or the AWS Management Console API Gateway serves as a front door to any web application running on Amazon EC2 Amazon ECS AWS Lambda or in any on premises environment Basically API Gateway allows you to run APIs without having to manage servers The following f igure illustrates how API Gateway handles API calls and interacts with other components Requests from mobile devices websites or other backend services are routed to the closest CloudFront Point of Presence to minimize latency and provide optimum user experience ArchivedAmazon Web Services Implementing Microservices on AWS 12 API Gateway call flow Serverless microservices “No server is easier to manage than no server ” — AWS re:Invent Getting rid of servers is a great way to eliminate operational complexity Lambda is tightly integrated with API Gateway The ability to make synchronous calls from API Gateway to Lambda enables the creation of fully serverless applications and is described in detail in the Amazon API Gateway Developer Guide The following figure shows the architecture of a serverless microservice with AWS Lambda where the complete service is built out of managed services which eliminates the architectural burden to design for scale and high availability and eliminates the operational efforts of running and monitoring the microservice’s underlying infrastructure ArchivedAmazon Web Services Implementing Microservices on AWS 13 Serverless microservice using AWS Lambda A similar implementation that is also based on serverless services is shown in the following figure In this architecture Docker containers are used with Fargate so it’s not necessary to care about the underlying infrastruc ture In addition to DynamoDB Amazon Aurora Serverless is used which is an ondemand autoscaling configuration for Aurora (MySQL compatible edition) where the database will automatically start up shut down and scale capacity up or down based on your application's needs ArchivedAmazon Web Services Implementing Microservices on AWS 14 Serverless microservice using Fargate Disaster 
recovery As previously mentioned in the introduction of this whitepaper typical microservices applications are implemented using the Twelve Factor Application patterns The Processes section states that “Twelve factor processes are stateless and share nothing Any data that needs to persist must be sto red in a stateful backing service typically a database” For a typical microservices architecture this means that the main focus for disaster recovery should be on the downstream services that maintain the state of the application For example t hese can be file systems databases or queues for example When creating a disaster recovery strategy organizations most commonly plan for the recovery time objective and recovery point objective Recovery time objective is the maximum acceptable delay between the interruption of service and restoration of service This objective determines what is considered an acceptable time window when service is unavailable and is defined by the organization ArchivedAmazon Web Services Implementing Microservices on AWS 15 Recovery point objective is the maximum acceptable amount of time since the last data recovery point This objective determines what is considered an acceptable loss of data between the last recovery point and the interruption of service and is defined by the organization For more information refer to the Disaster Recovery of Workloads on AWS: Recovery in the Cloud whitepaper High availability This section take s a closer l ook at high availability for different compute options Amazon EKS runs Kubernetes control and data plane instances across multiple Availability Zones to ensure high availability Amazon EKS automatically detects and replaces unhealthy control plane instan ces and it provides automated version upgrades and patching for them This control plane consists of at least two API server nodes and three etcd nodes that run across three Availability Zones within a region Amazon EKS uses the architecture of AWS Regio ns to maintain high availability Amazon ECR hosts images in a highly available and high performance architecture enabling you to reliably deploy images for container applications across Availability Zones Amazon ECR works with Amazon EKS Amazon ECS and AWS Lambda simplifying development to production workflow Amazon ECS is a regional service that simplifies running containers in a highly available manner across multiple Availability Zones within a n AWS Region Amazon ECS includes multiple scheduling strategies that place containers across your clusters based on your resource needs (for example CPU or RAM) and availability requirements AWS Lambda runs your function in multiple Availability Zones to ensure that it is available to process events in cas e of a service interruption in a single zone If you configure your function to connect to a virtual private cloud ( VPC) in your account specify subnets in multiple Availability Zones to ensure high availability Deploying Lambda based applications You can use AWS CloudFormation to define deploy and configure serverless applications ArchivedAmazon Web Services Implementing Microservices on AWS 16 The AWS Serverless Application M odel (AWS SAM ) is a convenient way to define serverless applications AWS SAM is natively supported by CloudFormation and defines a simplified syntax for expressing serverless resources To deploy your application specify the resources you need as part of your application along with their associated permissions policies in a CloudFormation template package your 
deployment artifacts and deploy the template Based on AWS SAM SAM Local is an AWS Command Line Interface tool that provides an environm ent for you to develop test and analyze your serverless applications locally before uploading them to the Lambda runtime You can use SAM Local to create a local testing environment that simulates the AWS runtime environment Distributed systems componen ts After looking at how AWS can solve challenges related to individual microservices the focus moves to on cross service challenges such as service discovery data consistency asynchronous communication and distributed monitoring and auditing Service discovery One of the primary challenges with microservice architecture s is enabl ing services to discover and interact with each other The distributed characteristics of microservice architectures not only make it harder for services to communicate but also presents other challenges such as checking the health of those systems and announcing when new applications become available You also must decide how and where to store meta information such as configuration data that can be used by applicat ions In this section several techniques for performing service discovery on AWS for microservices based architectures are explored DNS based service discovery Amazon ECS now includes integrated service discovery that enables your containerized services to discover and connect with each other Previously to ensure that services were able to discover and connect with each other you had to configure and run your own service discovery system based on Amazon Route 53 AWS Lambda and ECS event stream s or connect every service to a load balancer ArchivedAmazon Web Services Implementing Microservices on AWS 17 Amazon ECS creates and manages a registry of service names using the Route 53 Auto Naming API Names are automatically mapped to a set of DNS records so that you can refer to a service by name in your code and write DNS queries to have the name resolve to the service’s endpoint at runtime You can specify health check conditions in a service's task definition and Amazon ECS ensures that only healthy service endpoints are returned by a service lookup In addition you can also use unified service discovery for services managed by Kubernetes To enable this integration A WS contributed to the External DNS project a Kubernetes incubator project Another option is to use the capabilities of AWS Cloud Map AWS Cloud Map extends the capabilities of the Auto Naming APIs by providing a service registry for resources such as Internet Protocols ( IPs) Uniform Resource Locators ( URLs ) and Amazon Resource Names ( ARNs ) and offering an APIbased service discovery mechanism with a faster change propagation and the ability to use attributes to narrow down the set of discovered resources Existing Route 53 Auto Naming resources are upgraded automatically to AWS Cloud Map Third party software A different approach to implementing service discovery is using third party software such as HashiCorp Consul etcd or Netflix Eureka All three examples are distributed reliable keyvalue stores For HashiCorp Consul there is an AWS Quick Start that sets up a flexible scalable AWS Cloud environment an d launches HashiCorp Consul automatically into a configuration of your choice Service meshes In an advanced microservices architecture the actual application can be composed of hundreds or even thousands of services Often the most complex part of the application is not the actual services themselves but the 
communication between those services Service meshes are an additional layer for handling interservice communication which is responsible for monit oring and controlling traffic in microservice s architectures This enables tasks like service discovery to be completely handled by this layer Typically a service mesh is split into a data plane and a control plane The data plane consists of a set of intelligent proxies that are deployed with the application code as a ArchivedAmazon Web Services Implementing Microservices on AWS 18 special sidecar proxy that intercepts all network communication between microservices The control plane is responsible for communicating with the proxies Service meshes are transpare nt which means that application developers don’t have to be aware of this additional layer and don’t have to make changes to existing application code AWS App Mesh is a service mesh that provides applicati onlevel networking to enable your services to communicate with each other across multiple types of compute infrastructure App Mesh standardizes how your services communicate giving you complete visibility and ensuring high availability for your applicat ions You can use App Mesh with existing or new microservices running on Amazon EC2 Fargate Amazon ECS Amazon EKS and self managed Kubernetes on AWS App Mesh can monitor and control communications for microservices running across clusters orchestration systems or VPCs as a single application without any code changes Distributed data management Monolithic applications are typically backed by a large relational database which defines a single data model common to all application components In a microservices approach such a central database would prevent the goal of building decentralized and independent components Each microservice component should have its own data persistence layer Distributed data management however rais es new challenges As a consequence of the CAP theorem distributed microservice architectures inherently trade off consistency for performance and need to embrace eventual consistency In a distributed system business transactions can span multiple microservices Because they cannot use a single ACID transaction you can end up with partial executions In this case we wou ld need some control logic to redo the already processed transactions For this purpose t he distributed Saga pattern is commonly used In the case of a failed business transaction Saga orchestrates a series of compensating transactions that undo the changes that were made by the preceding transactions AWS Step Functions make it easy to implement a Saga execution coordinator as shown in the following figure ArchivedAmazon Web Services Implementing Microservices on AWS 19 Saga execution coordinator Building a centralized store of critical reference data that is curated by core data management tools and procedures provides a means for microservices to synchronize their critical data and possibly roll back state Using AWS Lambda with scheduled Amazo n CloudWatch Events you can build a simple cleanup and deduplication mechanism It’s very common for state changes to affect more than a single microservice In such cases event sourcing has proven to be a useful pattern The core idea behind event sourcing is to represent and persist every application change as an event record Instead of persisting applicatio n state data is stored as a stream of events Database transaction logging and version control systems are two well known examples for event sourcing Event 
sourcing has a couple of benefits: state can be determined and reconstructed for any point in time It naturally produces a persistent audit trail and also facilitates debugging In the context of microservices architectures event sourcing enables decoupling different parts of an application by using a publish and subscribe pattern and it feeds the s ame event data into different data models for separate microservices Event sourcing is frequently used in conjunction with the Command Query Responsibility Segregation (CQRS) pattern to decouple read from write workloads and optimize both for performance scalability and security In traditional data management systems commands and queries are run against the same data repository The following figure shows how the event sourcing patter n can be implemented on AWS Amazon Kinesis Data Streams serves as the main component of the central event store which captures application changes as events and persists them on ArchivedAmazon Web Services Implementing Microservices on AWS 20 Amazon S3 The figure depicts three different microservices composed of API Gateway AWS Lambda and DynamoDB The arrows indicate the flow of the events: when Microservice 1 experiences an event state change it publishes an event by writing a message into Kinesis Data Streams All microservices run their own Kinesis Data Streams application in AWS Lambda which reads a copy of the message filters it based on relevancy for the microservice and possibly forwards it for further processing If your function re turns an error Lambda retries the batch until processing succeeds or the data expires To avoid stalled shards you can configure the event source mapping to retry with a smaller batch size limit the number of retries or discard records that are too old To retain discarded events you can configure the event source mapping to send details about failed batches to an Amazon Simple Queue Service (SQS ) queue or Amazon Simple Notification Service (SNS) topic Event sourcing pattern on AWS Amazon S3 durably stores all events across all microservices and is the single source of truth when it comes to debugging recovering application state or auditing application changes There are two primary reasons why records may be delivered more than one time to your Kinesis Data Streams application: producer retries and consumer retries Your application must anticipate and appropriately handle processing individual records multiple times ArchivedAmazon Web Services Implementing Microservices on AWS 21 Configuration management In a typical microservices architecture with dozens of different services each service needs access to several downstream services and infrastructure components that expose data to the service Examples could be message queues databases and other micros ervices One of the key challenges is to configure each service in a consistent way to provide information about the connection to downstream services and infrastructure In addition the configuration should also contain information about the environment in which the service is operating and restarting the application to use new configuration data shouldn’t be necessary The third principle of the Twelve Factor App patterns covers this topic: “ The twelve factor app stores config in environment variables (often shortened to env vars or env)” For Amazon ECS environment variables can be passed to the container by using the environment container definition parameter which maps to the env option to docker run Environment variables can be 
passed to your containers in bulk by using the environme ntFiles container definition parameter to list one or more files containing the environment variables The file must be hosted in Amazon S3 In AWS Lambda the runtime makes environment variables available to your code and sets additional environment varia bles that contain information about the function and invocation request For Amazon EKS you can define environment variables in the env field of the configuration manifest of the corresponding pod A different way to use env variables is to use a ConfigMa p Asynchronous communication and lightweight messaging Communication in traditional monolithic applications is straightforward —one part of the application uses method calls or an internal event distribution mechanism to communicate with the other parts If the same application is implemented using decoupled microservices the communication between different parts of the application must be implemented using network communication REST based communication The HTTP/S protocol is the most popular way to implement synchronous communication between microservices In most cases RESTful APIs use HTTP as a ArchivedAmazon Web Services Implementing Microservices on AWS 22 transport layer The REST architectural style relies on stateless communication uniform interfaces and standard methods With API Gateway you can create an API that acts as a “front door” for applications to access data business logic or functionality from your backend services API developers can create APIs that access AWS or other web services as well as data stored in the AWS Cloud An API object defined with the API Gateway service is a group of resources and methods A resource is a typed object within the domain of an API and may have associated a data model or relationships to other resources Each resource can be configured to respond to one or more methods that is standard HTTP verbs such as GET POST or PUT REST APIs can be deployed to different stages and versioned as well as cloned to new versions API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls including traffic management authorization and access control monitoring and API version management Asynchronous messaging and event passing Message passing is a n additional pattern used to implement communication between microservices Services communicate by exchanging messages by a queue One major benefit of this communication style is that it’s not necessary to have a service discovery and services are loosely couple d Synchronous systems are tightly coupled which means a problem in a synchronous downstream dependency has immediate impact on the upstream callers Retries from upstream callers can quickly fan out and amplify problems Depending on specific requirements like protocols AWS offers different services which help to implement this pattern One possible implementation uses a combination of Amazon Simple Queue Service (Amazon SQS ) and Amazon Simple Notification Service (Amazon SNS) Both services work closely together Amazon SNS enable s applications to send messages to multiple subscribers through a push mechanism By using Amazon SNS and Amazon SQS together one message can be delivered to multiple consumers The following figure demonstrates the integration of Amazon SNS and Amazon SQS ArchivedAmazon Web Services Implementing Microservices on AWS 23 Message bus pattern on AWS When you sub scribe an SQS queue to an SNS topic you can publish a message to 
the topic and Amazon SNS sends a message to the subscribed SQS queue The message contains subject and message published to the topic along with metadata information in JSON format Another option for building event driven architectures with event sources spanning internal applications third party SaaS applications and AWS services at scale is Amazon EventBridge A fully managed event bus service EventBridge receives events from disparate sources identifies a target based on a routing rule and delivers near realtime data to that target including AWS Lambda Amazon SNS and Amazon Kinesis Streams among others An inbound event can also be customized by input transformer prior to delivery To develop event driven applications sig nificantly faster EventBridge schema registries collect and organize schemas including schemas for all events generated by AWS services Customers can also d efine custom schemas or use an infer schema option to discover schemas automatically In balance however a potential trade off for all th ese features is a relatively higher latency value for EventBridge delivery Also the default throughput and quotas for EventBridge may require an increase through a support request based on use case A different implementation strategy is based on Amazon MQ which can be used if existing software is using open standard APIs and protocols for messaging including JMS NMS AMQP STOMP MQTT and WebSocket Amazon SQS exposes a custom ArchivedAmazon Web Services Implementing Microservices on AWS 24 API which means if you have an existing application that you want to migrate from—for example an onpremises environment to AWS —code changes are necessary With Amazon MQ t his is not necessary in many cases Amazon MQ manages the administration and maintenance of ActiveMQ a popular open source message broker The underlying infrastructure is automatically provisioned for high availability and message durability to support the reliability of your applications Orchestration and state management The distributed character of microservices makes it challenging to orchestrate workflows when multiple microservices are involved Developers might be tempted to add orchestra tion code into their services directly This should be avoided because it introduces tighter coupling and makes it harder to quickly replace individual services You can use AWS Step Functions to build applications from individual components that each perform a discrete function Step Fu nctions provides a state machine that hides the complexities of service orchestration such as error handling serialization and parallelization This lets you scale and change applications quickly while avoiding additional coordination code inside servic es Step Functions is a reliable way to coordinate components and step through the functions of your application Step Functions provides a graphical console to arrange and visualize the components of your application as a series of steps This makes it easier to build and run distributed services Step Functions automatically starts and tracks each step and retries when there are errors so your application executes in order and as expected Step Functions logs the state of each step so when something goes wrong you can diagnose and debug problems quickly You can change and add steps without even writing code to evolve your application and innovate faster Step Functions is part of the AWS serverless platform and supports orchestration of Lambda functions as well as applications based on compute resources such as Amazon 
EC2 Amazon EKS and Amazon ECS and additional services like Amazon SageMaker and AWS Glue Step Functions manages the operations and underlying infrastructure for you to help ensure that your application is available at any scale ArchivedAmazon Web Services Implementing Microservices on AWS 25 To build workflows Step Functions uses the Amazon States Language Workflows can contain sequential or parallel steps as well as branching steps The following figure shows an example workflow for a microservices architecture combining sequential and parallel steps Invoking such a workflow can be done either through the Step Functions API or with API Gateway An example of a microservices workflow invoked by Step Functions ArchivedAmazon Web Services Implementing Microservices on AWS 26 Distributed monitoring A microservices architecture consists of many different distributed parts that have to be monitored You can use Amazon CloudWatch to collect and track metrics centralize and monitor log files set alarms and automatically react to changes in your AWS environment CloudWatch can monitor AWS resources such as Amazon EC2 instances DynamoDB tables and Amazon RDS DB instances as well as custom metrics generated by your applications and services and any log files your applications generate Moni toring You can use CloudWatch to gain system wide visibility into resource utilization application performance and operational health CloudWatch provides a reliable scalable and flexible monitoring solution that you can start using within minutes You no longer need to set up manage and scale your own monitoring systems and infrastructure In a microservices architecture the capability of monitoring custom metrics using CloudWatch is an additional benefit because developers can decide which metrics should be collected for each service In addition dynamic scaling can be implemented based on custom metrics In addition to Amazon Cloudwat ch you can also use CloudWatch Container Insights to collect aggregate and summari ze metrics and logs from your containeri zed applications and microservices CloudWatch Container Insights automatically collects metrics for many resources such as CPU m emory disk and network and aggregate as CloudWatch metrics at the cluster node pod task and service level Using CloudWatch Container Insights you can gain access to CloudWatch Container Insights dashboard metrics It also provides diagnostic inform ation such as container restart failures to help you isolate issues and resolve them quickly You can also set CloudWatch alarms on metrics that Container Insights collects Container Insights is available for Amazon ECS Amazon EKS and Kubernetes platforms on Amazon EC2 Amazon ECS support includes support for Fargate Another popular option especially for Amazon EKS is to use Prometheus Prometheus is an open source monitoring and alerting toolkit that is often used in combination with Grafana to visualize the collected metrics Many Kubernetes components store metrics at /metrics and Prometheus can scrape these metrics at a regular interval ArchivedAmazon Web Services Implementing Microservices on AWS 27 Amazon Managed Service for Prometheus (AMP) is a Prometheus compatible monitoring service that enables you to monitor containerized applica tions at scale With AMP you can use the open source Prometheus query language (PromQL) to monitor the performance of containerized workloads without having to manage the underlying infrastructure required to manage the ingestion storage and querying of operational 
You can collect Prometheus metrics from Amazon EKS and Amazon ECS environments using AWS Distro for OpenTelemetry or Prometheus servers as collection agents. AMP is often used in combination with Amazon Managed Service for Grafana (AMG). AMG makes it easy to query, visualize, alert on, and understand your metrics no matter where they are stored. With AMG you can analyze your metrics, logs, and traces without having to provision servers, configure and update software, or do the heavy lifting involved in securing and scaling Grafana in production.

Centralizing logs

Consistent logging is critical for troubleshooting and identifying issues. Microservices enable teams to ship many more releases than ever before and encourage engineering teams to run experiments on new features in production. Understanding customer impact is crucial to gradually improving an application.

By default, most AWS services centralize their log files. The primary destinations for log files on AWS are Amazon S3 and Amazon CloudWatch Logs. For applications running on Amazon EC2 instances, a daemon is available to send log files to CloudWatch Logs. Lambda functions natively send their log output to CloudWatch Logs, and Amazon ECS includes support for the awslogs log driver that enables the centralization of container logs to CloudWatch Logs. For Amazon EKS, either Fluent Bit or Fluentd can forward logs from the individual instances in the cluster to CloudWatch Logs, where they are combined for higher-level reporting using Amazon OpenSearch Service and Kibana. Because of its smaller footprint and performance advantages, Fluent Bit is recommended instead of Fluentd. The following figure illustrates the logging capabilities of some of these services. Teams are then able to search and analyze these logs using tools like Amazon OpenSearch Service and Kibana. Amazon Athena can be used to run a one-time query against centralized log files in Amazon S3.

Logging capabilities of AWS services

Distributed tracing

In many cases, a set of microservices works together to handle a request. Imagine a complex system consisting of tens of microservices in which an error occurs in one of the services in the call chain. Even if every microservice is logging properly and logs are consolidated in a central system, it can be difficult to find all relevant log messages.

The central idea of AWS X-Ray is the use of correlation IDs, which are unique identifiers attached to all requests and messages related to a specific event chain. The trace ID is added to HTTP requests in specific tracing headers named X-Amzn-Trace-Id when the request hits the first X-Ray-integrated service (for example, Application Load Balancer or API Gateway) and is included in the response. Through the X-Ray SDK, any microservice can read, add, or update this header. X-Ray works with Amazon EC2, Amazon ECS, AWS Lambda, and AWS Elastic Beanstalk. You can use X-Ray with applications written in Java, Node.js, and .NET that are deployed on these services.

X-Ray service map

Epsagon is a fully managed SaaS that includes tracing for all AWS services, third-party APIs (through HTTP calls), and other common services such as Redis, Kafka, and Elastic. The Epsagon service includes monitoring capabilities, alerting for the most common services, and payload visibility into each and every call your code is making.
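As a hedged illustration of the X-Ray instrumentation described above, the following Python sketch uses the X-Ray SDK to patch common libraries so that outbound calls automatically propagate the X-Amzn-Trace-Id header. The service name, queue URL, and endpoint are placeholders, and the sketch assumes a running X-Ray daemon and an active segment (for example, when running on Lambda or behind the SDK's web-framework middleware).

import boto3
import requests
from aws_xray_sdk.core import xray_recorder, patch_all

# Name this service as it should appear in the X-Ray service map (placeholder).
xray_recorder.configure(service="order-service")

# Patch supported libraries (boto3/botocore, requests, ...) so downstream
# calls are recorded as subsegments and carry the trace header.
patch_all()

@xray_recorder.capture("process_order")
def process_order(order_id: str) -> None:
    # Both calls below are recorded under the current trace; the URLs and
    # queue are hypothetical.
    requests.get(f"https://inventory.example.internal/items/{order_id}")
    boto3.client("sqs").send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",
        MessageBody=order_id,
    )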
AWS Distro for OpenTelemetry is a secure, production-ready, AWS-supported distribution of the OpenTelemetry project. Part of the Cloud Native Computing Foundation, AWS Distro for OpenTelemetry provides open-source APIs, libraries, and agents to collect distributed traces and metrics for application monitoring. With AWS Distro for OpenTelemetry, you can instrument your applications just one time to send correlated metrics and traces to multiple AWS and partner monitoring solutions. Use auto-instrumentation agents to collect traces without changing your code. AWS Distro for OpenTelemetry also collects metadata from your AWS resources and managed services to correlate application performance data with underlying infrastructure data, reducing the mean time to problem resolution. Use AWS Distro for OpenTelemetry to instrument your applications running on Amazon EC2, Amazon ECS, Amazon EKS on Amazon EC2, Fargate, and AWS Lambda, as well as on premises.

Options for log analysis on AWS

Searching, analyzing, and visualizing log data is an important aspect of understanding distributed systems. Amazon CloudWatch Logs Insights enables you to explore, analyze, and visualize your logs instantly, which helps you troubleshoot operational problems.

Another option for analyzing log files is to use Amazon OpenSearch Service together with Kibana. Amazon OpenSearch Service can be used for full-text search, structured search, analytics, or all three in combination. Kibana is an open-source data visualization plugin that seamlessly integrates with Amazon OpenSearch Service. The following figure demonstrates log analysis with Amazon OpenSearch Service and Kibana. CloudWatch Logs can be configured to stream log entries to Amazon OpenSearch Service in near real time through a CloudWatch Logs subscription. Kibana visualizes the data and exposes a convenient search interface to data stores in Amazon OpenSearch Service. This solution can be used in combination with software like ElastAlert to implement an alerting system that sends SNS notifications and emails, creates JIRA tickets, and so forth if anomalies, spikes, or other patterns of interest are detected in the data.

Log analysis with Amazon OpenSearch Service and Kibana

Another option for analyzing log files is to use Amazon Redshift with Amazon QuickSight. QuickSight can be easily connected to AWS data services, including Redshift, Amazon RDS, Aurora, Amazon EMR, DynamoDB, Amazon S3, and Amazon Kinesis. CloudWatch Logs can act as a centralized store for log data, and in addition to storing the data, it is possible to stream log entries to Amazon Kinesis Data Firehose. The following figure depicts a scenario where log entries are streamed from different sources to Redshift using CloudWatch Logs and Kinesis Data Firehose. QuickSight uses the data stored in Redshift for analysis, reporting, and visualization.

Log analysis with Amazon Redshift and Amazon QuickSight

The following figure depicts a scenario of log analysis on Amazon S3. When logs are stored in Amazon S3 buckets, the log data can be loaded into different AWS data services, such as Redshift or Amazon EMR, to analyze the data stored in the log stream and find anomalies.

Log analysis on Amazon S3
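As one example of the CloudWatch Logs Insights option described above, the following Python (boto3) sketch runs an Insights query that counts recent error messages; the log group name and the query itself are illustrative assumptions, not values from this whitepaper.

import time
import boto3

logs = boto3.client("logs")

# Hypothetical log group written by an order microservice.
LOG_GROUP = "/ecs/order-service"

# Count errors per 5-minute bin using Logs Insights query syntax.
query = (
    "fields @timestamp, @message "
    "| filter @message like /ERROR/ "
    "| stats count() as errors by bin(5m)"
)

now = int(time.time())
resp = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=now - 3600,   # last hour
    endTime=now,
    queryString=query,
)

# Poll until the query finishes, then print the result rows.
while True:
    results = logs.get_query_results(queryId=resp["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in results.get("results", []):
    print({field["field"]: field["value"] for field in row})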
Chattiness

By breaking monolithic applications into small microservices, the communication overhead increases because microservices have to talk to each other. In many implementations, REST over HTTP is used because it is a lightweight communication protocol, but high message volumes can cause issues. In some cases, you might consider consolidating services that send many messages back and forth. If you find yourself in a situation where you consolidate an increased number of services just to reduce chattiness, you should review your problem domains and your domain model.

Protocols

Earlier in this whitepaper, in the section Asynchronous communication and lightweight messaging, different possible protocols are discussed. For microservices it is common to use protocols like HTTP. Messages exchanged by services can be encoded in different ways, such as human-readable formats like JSON or YAML, or efficient binary formats such as Avro or Protocol Buffers.

Caching

Caches are a great way to reduce latency and chattiness in microservices architectures. Several caching layers are possible, depending on the actual use case and bottlenecks. Many microservice applications running on AWS use ElastiCache to reduce the volume of calls to other microservices by caching results locally. API Gateway provides a built-in caching layer to reduce the load on the backend servers. In addition, caching is also useful to reduce load on the data persistence layer. The challenge for any caching mechanism is to find the right balance between a good cache hit rate and the timeliness and consistency of data.

Auditing

Another challenge to address in microservices architectures, which can potentially have hundreds of distributed services, is ensuring visibility of user actions on each service and being able to get a good overall view across all services at an organizational level. To help enforce security policies, it is important to audit both resource access and activities that lead to system changes. Changes must be tracked at the individual service level as well as across services running on the wider system. Typically, changes occur frequently in microservices architectures, which makes auditing changes even more important. This section examines the key services and features within AWS that can help you audit your microservices architecture.

Audit trail

AWS CloudTrail is a useful tool for tracking changes in microservices because it enables all API calls made in the AWS Cloud to be logged and sent to either CloudWatch Logs in real time or to Amazon S3 within several minutes. All user and automated system actions become searchable and can be analyzed for unexpected behavior, company policy violations, or debugging. Information recorded includes a timestamp, user and account information, the service that was called, the service action that was requested, the IP address of the caller, as well as request parameters and response elements.

CloudTrail allows the definition of multiple trails for the same account, which enables different stakeholders, such as security administrators, software developers, or IT auditors, to create and manage their own trail. If microservice teams have different AWS accounts, it is possible to aggregate trails into a single S3 bucket. The advantages of storing the audit trails in CloudWatch are that audit trail data is captured in real time and it is easy to reroute information to Amazon OpenSearch Service for search and visualization. You can configure CloudTrail to log to both Amazon S3 and CloudWatch Logs.
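To illustrate how the recorded CloudTrail events can be queried programmatically, the following Python (boto3) sketch looks up a specific management API call over the last 24 hours. The chosen event name is only an example; any recorded management event name could be substituted.

from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look for recent security-group changes recorded by CloudTrail.
end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "EventName",
         "AttributeValue": "AuthorizeSecurityGroupIngress"}
    ],
    StartTime=start,
    EndTime=end,
)

for page in pages:
    for event in page["Events"]:
        print(event["EventTime"], event.get("Username"), event["EventName"])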
Events and real-time actions

Certain changes in system architectures must be responded to quickly, and either action must be taken to remediate the situation or specific governance procedures to authorize the change must be initiated. The integration of Amazon CloudWatch Events with CloudTrail allows it to generate events for all mutating API calls across all AWS services. It is also possible to define custom events or generate events based on a fixed schedule. When an event fires and matches a defined rule, a predefined group of people in your organization can be immediately notified so that they can take the appropriate action. If the required action can be automated, the rule can automatically trigger a built-in workflow or invoke a Lambda function to resolve the issue.

The following figure shows an environment where CloudTrail and CloudWatch Events work together to address auditing and remediation requirements within a microservices architecture. All microservices are being tracked by CloudTrail, and the audit trail is stored in an Amazon S3 bucket. CloudWatch Events becomes aware of operational changes as they occur, responds to these operational changes, and takes corrective action as necessary by sending messages to respond to the environment, activating functions, making changes, and capturing state information. CloudWatch Events sits on top of CloudTrail and triggers alerts when a specific change is made to your architecture.

Auditing and remediation

Resource inventory and change management

To maintain control over fast-changing infrastructure configurations in an agile development environment, having a more automated, managed approach to auditing and controlling your architecture is essential. Although CloudTrail and CloudWatch Events are important building blocks to track and respond to infrastructure changes across microservices, AWS Config rules enable a company to define security policies with specific rules to automatically detect, track, and alert you to policy violations.

The next example demonstrates how it is possible to detect, inform, and automatically react to non-compliant configuration changes within your microservices architecture. A member of the development team has made a change to the API Gateway for a microservice to allow the endpoint to accept inbound HTTP traffic rather than only allowing HTTPS requests. Because this situation has been previously identified as a security compliance concern by the organization, an AWS Config rule is already monitoring for this condition. The rule identifies the change as a security violation and performs two actions: it creates a log of the detected change in an Amazon S3 bucket for auditing, and it creates an SNS notification. Amazon SNS is used for two purposes in our scenario: to send an email to a specified group to inform them about the security violation, and to add a message to an SQS queue. Next, the message is picked up, and the compliant state is restored by changing the API Gateway configuration.

Detecting security violations with AWS Config
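As a concrete, hedged sketch of the event rules discussed in this section, the following Python (boto3) code creates a CloudWatch Events (Amazon EventBridge) rule that matches API Gateway API calls recorded by CloudTrail and forwards them to an SNS topic for notification. The rule name and topic ARN are placeholders, and a real deployment would narrow the event pattern to the specific configuration changes of interest; the topic's resource policy must also allow events.amazonaws.com to publish.

import json
import boto3

events = boto3.client("events")

# Match API Gateway control-plane calls delivered by CloudTrail.
pattern = {
    "source": ["aws.apigateway"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventSource": ["apigateway.amazonaws.com"]},
}

events.put_rule(
    Name="apigateway-config-changes",   # hypothetical rule name
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

# Notify a (placeholder) SNS topic whenever the rule matches.
events.put_targets(
    Rule="apigateway-config-changes",
    Targets=[
        {"Id": "notify-security-team",
         "Arn": "arn:aws:sns:us-east-1:123456789012:config-alerts"},
    ],
)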
Resources

• AWS Architecture Center
• AWS Whitepapers
• AWS Architecture Monthly
• AWS Architecture Blog
• This Is My Architecture videos
• AWS Answers
• AWS Documentation

Conclusion

Microservices architecture is a distributed design approach intended to overcome the limitations of traditional monolithic architectures. Microservices help to scale applications and organizations while improving cycle times. However, they also come with a couple of challenges that might add additional architectural complexity and operational burden. AWS offers a large portfolio of managed services that can help product teams build microservices architectures and minimize architectural and operational complexity. This whitepaper guided you through the relevant AWS services and how to implement typical patterns, such as service discovery or event sourcing, natively with AWS services.

Document Revisions

• November 9, 2021: Integration of Amazon EventBridge, AWS Distro for OpenTelemetry, AMP, AMG, Container Insights; minor text changes
• August 1, 2019: Minor text changes
• June 1, 2019: Integration of Amazon EKS, AWS Fargate, Amazon MQ, AWS PrivateLink, AWS App Mesh, AWS Cloud Map
• September 1, 2017: Integration of AWS Step Functions, AWS X-Ray, and ECS event streams
• December 1, 2016: First publication

Contributors

The following individuals contributed to this document:
• Sascha Möllering, Solutions Architecture, AWS
• Christian Müller, Solutions Architecture, AWS
• Matthias Jung, Solutions Architecture, AWS
• Peter Dalbhanjan, Solutions Architecture, AWS
• Peter Chapman, Solutions Architecture, AWS
• Christoph Kassen, Solutions Architecture, AWS
• Umair Ishaq, Solutions Architecture, AWS
• Rajiv Kumar, Solutions Architecture, AWS
|
General
|
consultant
|
Best Practices
|
Migrating_Applications_to_AWS_Guide_and_Best_Practices
|
Migrating Applications Running Relational Databases to AWS Best Practices Guide First published December 2016 Updated March 9 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Overview of Migrating Data Centric Applications to AWS 1 Migration Steps and Tools 2 Development Environment Setup Prerequisites 3 Step 1: Migration Assessment 4 Step 2: Schema Conversion 6 Step 3: Conversion of Embedded SQL and Application Code 10 Step 4: Data Migration 13 Step 5: Testing Converted Code 15 Step 6: Data Replication 16 Step 7: Deployment to AWS and Go Live 20 Best Practices 22 Schema Conversion Best Practices 22 Application Code Conversion Best Practices 23 Data Migration Best Practices 23 Data Replication Best Practices 24 Testing Best Practices 25 Deployment and Go Live Best Practices 25 PostDeplo yment Monitoring Best Practices 26 Conclusion 26 Document Revisions 27 About this Guide The AWS Schema Conversion Tool ( AWS SCT) and AWS Data Migration Service (AWS DMS) are essential tools used to migrate an on premises database to Amazon Relational Data base Service (Amazon RDS) Th is guide introduces you to the benefits and features of these tools and walk s you through the steps required to migrate a database to Amazon RDS Schema data and application code migration processes are discussed regardless of whether your target database is PostgreSQL MySQL Amazon Aurora MariaDB Oracle or SQL Server Amazon Web Services Migrating Applications Running Relational Databases to AWS 1 Introduction Customers worldwide increasingly look at the cloud as a way to address their growing needs to store process and analyze vast amounts of data Amazon Web Services (AWS ) provides a modern scalable secure and performant platform to address customer requi rements AWS makes it easy to develop applications deployed to the cloud using a combination of database application networking security compute and storage services One of the most time consuming tasks involved in moving an application to AWS is migrating the database schema and data to the cloud The AWS Schema Conversion Tool ( AWS SCT) and AWS Database Migration Service (AWS DMS) are invaluable tools to make this migration easier faster and less error prone Amazon Relational Database Service (Am azon RDS) is a managed service that makes it easier to set up operate and scale a relational database in the cloud It provides cost efficient resizable capacity for an industry standard relational database and manages common database administration tas ks The simplicity and ease of management of Amazon RDS appeals to many customers who want to take advantage of the disaster recovery high availability redundancy scalability and time saving benefits the cloud offers Amazon RDS currently supports the MySQL Amazon Aurora MariaDB PostgreSQL Oracle and 
Microsoft SQL Server database engines In this guide we discuss how to migrate applications using a relational database management system ( RDBMS ) such as Oracle or M icrosoft SQL Server onto an Amazo n RDS instance in the AWS Cloud using the AWS SCT and AWS DMS Th is guide cover s all major steps of application migration: database schema and data migration SQL code conversion and application code re platforming Overview of Migrating Data Centric Appl ications to AWS Migration is t he process of moving applications that were originally developed to run on premises and need to be remediated for Amazon RDS During the migration process a database application may be migrated between two databases of the sa me engine type (a homogen eous migration; for example Oracle Oracle SQL Server SQL Server etc) or between two databases that use different Amazon Web Services Migrating Applications Running Relational Databases to AWS 2 engine types (a heterogeneous migration; for example Oracle PostgreSQL SQL Server MySQL etc) In this guide we look at common migration scenarios regardless of the database engine and touch on specific issues related to certain examples of heterogeneous conversions Migration Steps and Tools Application migration to AWS involves the following steps rega rdless of the database engine: 1 Migration assessment analysis 2 Schema conversion to a target database platform 3 SQL statement and application code conversion 4 Data migration 5 Testing of converted database and application code 6 Setting up replication and failover scenarios for data migration to the target platform 7 Setting up monitoring for a new production environment and go live with the target environment Figure 1: Steps of application migration to AWS Each application is different and may require extra attention to one or more of these steps For example a typical application contains the majority of complex data logic in database stored procedures functions and so on Other applications are heavier on logic in the application such as ad hoc queries to support search functionality On average the percentage of time spent in each phase of the migration effort for a typical application breaks down as shown in Table 1 Amazon Web Services Migrating Applications Running Relational Databases to AWS 3 Table 1: Time spent in each migration phase Step Percentage of Overall Effort Migration Assessment 2% Schema Conversion 30% Embedded SQL and Application Code Conversion 15% Data Migration 5% Testing 45% Data Replication 3% Go Live 5% Note: Percentages for data migration and replication are based on man hours for configuration and do not include hours needed for the initial load To make the migration process faster more predictable and cost effective AWS provides the following tools and methods to automate migration steps: • AWS Schema Conversion Tool (AWS SCT) – a desktop tool that automates conversion of database objects from different database migration systems (Oracle SQL Server MySQL PostgreSQL) to different RDS database targets (Amazon Aurora PostgreSQL Oracle MySQL SQL Server) This tool is invaluable during the Migration Assessment Schema Conversion and Application Code Conversion steps • AWS Database Migration Service (AWS DMS) – a service for data migration to and from AWS database targets AWS DMS can be used for a variety of replication tasks including continuous replication to offload reads from a primary production server for reporting or extract transform load (ETL); continuous replication for high availability; 
database consolidation; and temporary replication for data migrations In this guide we focus on the replication needed for data migrations This service reduces time and effort during the Data Migration and Dat a Replication Setup steps Development Environment Setup Prerequisites To prepare for the migration you must set up a development environment to use for the iterative migration process In most cases it is desirable to have the development Amazon Web Services Migrating Applications Running Relational Databases to AWS 4 environment mi rror the production environment Therefore this environment is likely on premises or running on an Amazon Elastic Compute Cloud (Amazon EC2) instance Download and install the AWS SCT on a server in the development environment If you are interested in changing database platforms the New Project Wizard can help you determine the m ost appropriate target platform for the source database See Step 1: Migration Assessment for more information Procure an Amazon RDS database instance to serve as the migration target and any necessary EC2 instances t o run migration specific utilities Step 1: Migration Assessment During Migration Assessment a team of system architects reviews the architecture of the existing application produces an assessment report that includes a network diagram with all the application layers identifies the application and database components that are not automatically migrated and estimates the effort for manual conversion work Although migration analysis tools exist to expedite the evaluation the bulk of the assessment is conducted by internal staff or with help from AWS Professional Services This effort is usuall y 2% of the whole migration effort One of the key tools in the assessment analysis is the Database Migration Assessment Report This report provid es important information about the conversion of the schema from your source database to your target RDS database instance More specifically the Assessment Report does the following: • Identifies schema objects (eg tables views stored procedures trig gers etc) in the source database and the actions that are required to convert them (Action Items ) to the target database (including fully automated conversion small changes like selection of data types or attributes of tables and rewrites of significant portions of the stored procedure) • Recommends the best target engine based on the source database and the features used • Recommends other AWS services that can substitute for missing features • Recommends unique features available in Amazon RDS th at can save the customer licensing and other costs Amazon Web Services Migrating Applicat ions Running Relational Databases to AWS 5 • Recommends re architecting for the cloud for example sharding a large database into multiple Amazon RDS instances such as sharding by customer or tenant sharding by geography or sharding by partition k ey Report Sections The database migration assessment report includes three main sections —executive summary conversion statistics graph conversion action items Executive Summary The executive summary provides key migration metrics and helps you choose th e best target database engine for your particular application Conversion Statistics Graph The conversion statistics graph visualizes the schema objects and number of conversion issues (and their complexity) required in the migration project Figure 2: Graph of conversion statistics Conversion Action Items Conversion action items are presented in a detailed list with 
recommendations and their references in the database code Amazon Web Services Migrating Applications Running Relational Databases to AWS 6 The database migration assessment report shows conversion action items with three levels of complexity: Simple task that requires less than 1 hour to complete Medium task that requires 1 to 4 hours to complete Significant task that require s 4 or more hours to complete Using the detailed report provided by the AWS SCT skilled architects can provide a much more precise estimate for the efforts required to complete migration of the database schema code For more information about how to confi gure and run the database migration assessment report see Creating a Database Migration Assessment Report All results of the assessment report calculations and the summary of conversion action items are saved inside the AWS SCT This data is useful for the schema conversion step of the overall data migration Tips • Before running the assessment report you can restrict the database objects to evaluate by selecting or clearing the desired nodes in the source database tree • After running the initial assessment report save the file as a PDF Then open the file in a PDF viewer to view the entire database migration assessment report You can navigate the assessment report more easily if you convert it to a Microsoft Word document and use Word’s Table of Contents Navigation pane Step 2: Schema Conversion The Schema Conversion step consists of translating t he data definition language (DDL) for tables partitions and other database storage objects from the syntax and features of the source database to the syntax and features of the target database Schema conversion in the AWS SCT is a two step process: 1 Convert the schema 2 Apply the schema to the target database AWS SCT also converts procedural application code in triggers stored procedures and functions from feature rich languages (eg PLSQL T SQL) to the simpler procedural languages of MySQL and P ostgreSQL Schema conversion typically accounts for 30% of the whole migration effort Amazon Web Services Migrating Applications Running Relational Databases to AWS 7 The AWS SCT automatically creates DDL scripts for as many database objects on the target platform as possible For the remaining database objects the conversion action items describe why the object cannot be converted automatically and the manual steps required to convert the object to the target platform References to articles that discuss the recommended solution on the target platform are included when available The translated DDL for database objects is also stored in the AWS SCT project file — both the DDL that is generated automatically by the AWS SCT and any custom or manual DDL for objects that could not convert automatically The AWS SCT can also generate a DDL s cript file per object; this may come in handy for source code version control purposes You have complete control over when the DDL is applied to the target database For example for a smaller database you can run the Convert Schema command to automatically generate DDL for as many objects as possible then write code to handle manual conversion action items and lastly apply all of the DDL to create all database objects at once For a larger database that takes weeks or months to convert it can be advantageous to generate the target database objects by executing the DDL selectively to create objects in the target database as needed The Step 6: Data Replication section discuss es how you can also speed u p 
the data migration process by applying secondary indexes and constraints as a separate step after the initial data load By selecting or clearing objects from the target database tree you can save DDL scripts separately for tables and their correspondi ng foreign keys and secondary indexes You can then use these scripts to generate tables migrate data to those tables without performance slowdown and then apply secondary indexes and foreign keys after the data is loaded After the database migration assessment report is created the AWS SCT offers two views of the project: main view and assessment report view Tips for Navigating the AWS SCT in the Assessment Report View See Figure 3 and corresponding callouts in Table 2 for tips on navigating the assessment report view Amazon Web Services Migrating Applications Running Relational Databases to AWS 8 Figure 3: AWS SCT in the assessment report view Table 2: AWS SCT in assessment report view callouts Callout Description 1 Select a code object from the source database tree on the left to view the source code DDL and mappings to create the object in the target database Note: Source code for tables is not displayed in the AWS SCT; however the DDL to create tables in the target database is displayed The AWS SCT displays both source and target DDL for other database objects 2 Click the chevron ( ) next to an issue or double click the issue message to expand the list of affected objects Select the affected object to locate it in the source and target database trees and view or edit the DDL script Source database objects with an associated conversion action item are indicate d with an exclamation icon: 3 When viewing the source SQL for objects the AWS SCT highlights the lines of code that require manual intervention to convert to the target platform Hovering over or double clicking the highlighted source code displays the corresponding action item 4 The target SQL includes comments with the Issue # for action items to be resolved in the converted SQL code Amazon Web Services Migrating Applications Running Relational Databases to AWS 9 Schema Mapping Rules The AWS SCT allows you to create custom schema transformations and mapping rules to use during the conversion Schema mapping rules can standardize the target schema naming convention apply internal naming conventions correct existing issues in the source schema and so on Transformations are applied to the target database schema table or column DDL and currently include th e following: • Rename • Add prefix • Add suffix • Remove prefix • Remove suffix • Replace prefix • Replace suffix • Convert uppercase (not available for columns) • Convert lowercase (not available for columns) • Move to (tables only) • Change data type (columns only) New transformations and mapping rules are being added to the AWS SCT with each release to increase the robustness of this valuable feature For example Figure 4 depicts a schema mapping rule that has been applied to standardize a table name and correct a typo Notice the Source Name to Target Name mapping Amazon Web Services Migrating Applications Running Relational Databases to AWS 10 Figure 4: Schema mapping rule in AWS SCT You can create as many schema mappi ng rules as you need by choosing Settings and then Mapping Rules from the AWS SCT menu After schema mapping rules are created you can export them for use by AWS DMS during the Data Migration step Schema mapping rules are exported in JavaScript Object N otation (JSON) format The Step 4: Data Migration section examine s 
how AWS DMS uses this mapping Tips • Before applying individual SQL objects to the target carefully examine the SQL for the object to ensure that any dependent objects have already been created If an error occurs while applying an object to the target database check the error log for details To find the location of the error log from the AWS SCT menu choose Settings and then choose Global Settings Step 3: Conversion of Embedded SQL and Application Code After you convert the database schema the next step is to address any custom scripts with embedded SQL statements (eg ETL scripts reports etc) and the application code so that they work with the new target database This includes rewriting portions of application code written in Java C# C++ Perl Python etc that relate to JDBC/ODBC driver usage establishing connections data retrieval and iteration AWS SCT scan s a folder containing ap plication code extract s embedded SQL statements convert s as many as possible automatically and flag s the remaining statements for manual Amazon Web Services Migrating Applications Running Relational Databases to AWS 11 conversion actions Converting embedded SQL in application code typically accounts for 15% of the whole migration ef fort Some applications are more reliant on database objects such as stored procedures while other applications use more embedded SQL for database queries In either case these two efforts combined typically account for around 45% or almost half of th e migration effort The workflow for application code conversion is similar to the workflow for the database migration: 1 Run an assessment report to understand the level of effort required to convert the application code to the target platform 2 Analyze the code to extract embedded SQL statements 3 Allow the AWS SCT to automatically convert as much code as possible 4 Work through the remaining conversion Action Items manually 5 Save code changes The AWS SCT uses a two step process to convert applica tion code: 1 Extract SQL statements from the surrounding application code 2 Convert SQL statements An application conversion project is a subproject of a database migration project One Database Migration Project can include one or more application conversio n subprojects; for example there may be a front end GUI application conversion an ETL application conversion and a reporting application conversion All three applications can be attached to the parent database migration project and converted in the AWS SCT The AWS SCT can also standardize parameters in parameterized SQL statements to use named or positional styles or keep parameters as they are In the following example the original application source code used the named (:name) style and positio nal (?) style has been selected for the application conversion Notice that AWS SCT replaced the named parameter :id with a positional ? 
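To make the two parameter styles concrete, the following self-contained Python sketch (using the standard library sqlite3 module purely for illustration; it is not part of the AWS SCT workflow) runs the same query once with a named parameter and once with a positional placeholder.

import sqlite3

# In-memory database purely to demonstrate the two binding styles.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO employee (id, name) VALUES (42, 'Pat')")

# Named style, as the source application might have used (for example :id):
row = conn.execute(
    "SELECT name FROM employee WHERE id = :id", {"id": 42}
).fetchone()

# Positional (question-mark) style, as selected for the converted application:
row = conn.execute(
    "SELECT name FROM employee WHERE id = ?", (42,)
).fetchone()

print(row[0])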
Figure 5: AWS SCT replaced named style with positional style

The application conversion workspace makes it easy to view and modify embedded SQL code and track changes that are yet to be made. Parsed SQL scripts and snippets appear in the bottom pane alongside their converted code. Selecting one of these parsed scripts highlights it in the application code so you can view the context, and the parsed script appears in the lower-left pane, as shown in Figure 6.

Figure 6: Selecting a parsed script highlights it in the application code

The embedded SQL conversion process consists of the following iterative steps:
1. Analyze the selected code folder to extract embedded SQL.
2. Convert the SQL to the target script. If the AWS SCT is able to convert the script automatically, it appears in the lower-right pane. Any manual conversion code can also be entered here.
3. Apply the converted SQL to the source code base, swapping out the original snippet for the newly converted snippet.
4. Save the changes to the source code. A backup of the original source code is saved to your AWS SCT working directory with an extension of old.
5. Click the green checkmark to the right of the Parsed SQL Script to validate the target SQL script against the target database.

Tips
• AWS SCT can only convert or make recommendations for the SQL statements that it was able to extract. The application assessment report contains a SQL Extraction Actions tab. This tab lists conversion action items where AWS SCT detected SQL statements but was not able to accurately extract and parse them. Drill down through these issues to identify application code that must be manually evaluated by an application developer and converted manually if needed.
• Drill into the issues on either the SQL Extraction Actions or the SQL Conversion Actions tab to locate the file and line number of the conversion item, then double-click the occurrence to view the extracted SQL.

Step 4: Data Migration

After the schema and application code are successfully converted to the target database platform, it is time to migrate data from the source database to the target database. You can easily accomplish this by using AWS DMS. After the data is migrated, you can perform testing on the new schema and application. Because much of the data mapping and transformation work has already been done in AWS SCT, and AWS DMS manages the complexities of the data migration for you, configuring a new data migration service is typically 5% of the whole migration effort.

Note: AWS SCT and AWS DMS can be used independently. For example, AWS DMS can be used to synchronize homogeneous databases between environments, such as refreshing a test environment with production data. However, the tools are integrated so that the schema conversion and data migration steps can be used in any order. Later sections of this guide cover specific scenarios of integrating these tools.

AWS DMS works by setting up a replication server that acts as a middleman between the source and target databases. This instance is referred to as the AWS DMS replication instance (Figure 7). AWS DMS migrates data between source and target instances and tracks which rows have been migrated and which rows have yet to be migrated.

Figure 7: AWS DMS replication instance
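For teams that prefer to script this setup, the following Python (boto3) sketch provisions a replication instance. The identifier, instance class, and storage size are placeholder assumptions; the console wizard described next performs the same step interactively.

import boto3

dms = boto3.client("dms")

# Provision the replication instance that sits between source and target.
response = dms.create_replication_instance(
    ReplicationInstanceIdentifier="app-migration-instance",  # hypothetical name
    ReplicationInstanceClass="dms.c4.xlarge",
    AllocatedStorage=100,        # GB used for logs and cached changes
    MultiAZ=False,
    PubliclyAccessible=False,
)

print(response["ReplicationInstance"]["ReplicationInstanceStatus"])

Source and target endpoints, and the replication task itself, can be created the same way with the create_endpoint and create_replication_task API operations.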
AWS DMS provides a wizard to walk you through the three main steps of getting the data migration service up and running:
1. Set up a replication instance.
2. Define connections for the source and target databases.
3. Define data replication tasks.

To perform a database migration, AWS DMS must be able to connect to the source and target databases and the replication instance. AWS DMS will automatically create the replication instance in the specified Amazon Virtual Private Cloud (Amazon VPC). The simplest database migration configuration is when the source and target databases are also AWS resources (Amazon EC2 or Amazon RDS) in the same VPC. For more information, see Setting Up a Network for Database Migration in the AWS Database Migration Service User Guide.

You can migrate data in two ways:
• As a full load of existing data
• As a full load of existing data followed by continuous replication of data changes to the target

AWS DMS can be configured to drop and recreate the target tables, or to truncate existing data in the target tables before reloading data. AWS DMS will automatically create the target table on the target database according to the defined schema mapping rules, with primary keys and required unique indexes, and then migrate the data. However, AWS DMS doesn't create any other objects that are not required to efficiently migrate the data from the source. For example, it doesn't create secondary indexes, non-primary key constraints, or data defaults, or other database objects such as stored procedures, views, functions, packages, and so on. This is where the AWS SCT feature of saving SQL scripts separately for various SQL objects can be used, or these objects can be applied to the target database directly via the AWS SCT Apply to Database command after the initial load.

Data can be migrated as-is (such as when the target schema is identical or compatible with the source schema), AWS DMS can use schema mapping rules exported from the AWS SCT project, or custom mapping rules can be defined in AWS DMS via JSON. For example, the following JSON renames a table from tbl_Departmnet to Department and creates a mapping between these two tables:

{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "1",
      "object-locator": {
        "schema-name": "HumanResources",
        "table-name": "%"
      },
      "rule-action": "include"
    },
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "Rename tbl_Departmnet",
      "rule-action": "rename",
      "rule-target": "table",
      "object-locator": {
        "schema-name": "HumanResources",
        "table-name": "tbl_Departmnet"
      },
      "value": "Department"
    }
  ]
}

Tips
For more information on AWS replication instance types and their capacities, see Working with an AWS DMS Replication Instance.
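If you prefer to script task creation rather than use the wizard, the following hedged Python (boto3) sketch creates a full-load task using the same table-mapping rules shown above. The task identifier and the ARNs are placeholders (the elided ARN segments must be replaced with your own values).

import json
import boto3

dms = boto3.client("dms")

# The same selection/transformation rules shown above, as a Python structure.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {"schema-name": "HumanResources",
                               "table-name": "%"},
            "rule-action": "include",
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "Rename tbl_Departmnet",
            "rule-action": "rename",
            "rule-target": "table",
            "object-locator": {"schema-name": "HumanResources",
                               "table-name": "tbl_Departmnet"},
            "value": "Department",
        },
    ]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="hr-full-load",          # placeholder name
    SourceEndpointArn="arn:aws:dms:...:endpoint:SRC",  # placeholder ARNs
    TargetEndpointArn="arn:aws:dms:...:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",
    MigrationType="full-load",
    TableMappings=json.dumps(table_mappings),
)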
Step 5: Testing Converted Code

After the schema and application code have been converted and the data successfully migrated onto the AWS platform, thoroughly test the migrated application. The focus of this testing is to ensure correct functional behavior on the new platform. Although best practices vary, it is generally accepted to aim for as much time in the testing phase as in the development phase, which is about 45% of the overall migration effort.

The goal of testing should be twofold: exercising critical functionality in the application and verifying that converted SQL objects are functioning as intended. An ideal scenario is to load the same test dataset into the original source database, load the converted version of the same dataset into the target database, and perform the same set of automated system tests in parallel on each system. The outcome of the tests on the converted database should be functionally equivalent to the source. Data rows affected by the tests should also be examined independently for equivalency. Analyzing the data independently from application functionality verifies there are no data issues lurking in the target database that are not obvious in the user interface (UI).

Step 6: Data Replication

Although a one-time full load of existing data is relatively simple to set up and run, many production applications with large database backends cannot tolerate a downtime window long enough to migrate all the data in a full load. For these databases, AWS DMS can use a proprietary change data capture (CDC) process to implement ongoing replication from the source database to the target database. AWS DMS manages and monitors the ongoing replication process with minimal load on the source database, without platform-specific technologies, and without components that need to be installed on either the source or target. Due to CDC's ease of use, setting up data replication typically accounts for 3% of the overall effort.

CDC offers two ways to implement ongoing replication:
• Migrate existing data and replicate ongoing changes – implements ongoing replication by:
a. (Optional) Creating the target schema
b. Migrating existing data and caching changes to existing data as it is migrated
c. Applying those cached data changes until the database reaches a steady state
d. Lastly, applying current data changes to the target as soon as they are received by the replication instance
• Replicate data changes only – replicates data changes only (no schema) from a specified point in time. This option is helpful when the target schema already exists and the initial data load is already completed. For example, using native export/import tools, ETL, or snapshots might be a more efficient method of loading the bulk data in some situations. In this case, AWS DMS can be used to replicate changes from when the bulk load process started to bring and keep the source and target databases in sync.

AWS DMS takes advantage of built-in functionality of the source database platform to implement the proprietary CDC process on the replication instance. This allows AWS DMS to manage, process, and monitor data replication with minimal impact to either the source or target databases. The following sections describe the source platform features and configurations needed by the DMS replication instance's CDC process.

MS SQL Server Sources
Replication: Replication must be enabled on the source server, and a distribution database that acts as its own distributor must be configured.
Transaction logs: The source database must use the Full or Bulk-Logged recovery model to enable transaction log backups.

Oracle Sources
Binary Reader or LogMiner: By default, AWS DMS uses LogMiner to capture changes from the source instance. For data migrations with a high volume of change and/or large object (LOB) data, using the proprietary Binary Reader may offer some performance advantages.
ARCHIVELOG: The source database must be in ARCHIVELOG mode.
Supplemental Logging: Supplemental logging must be turned on in the source database and in all tables that are being migrated.

PostgreSQL Sources
Write-Ahead Logging (WAL): In order for AWS DMS to capture changes from a PostgreSQL database: • The wal_level must be set to logical •
max_replication_slots must be >= 1 • max_wal_senders must be >= 1 Primary Key Tables to be included in CDC must have a primary key MySQL Sources Binary Logging Binary logging must be enabled on the source database Automatic backups Automatic backups must be enabled if the source is a MySQL Amazon Aurora or MariaDB Amazon RDS instance Amazon Web Services Migrating Applications Running Relational Data bases to AWS 18 SAP ASE (Sybase) Sources Replication Replication must be enabled on the source but RepAgent must be disabled MongoDB Oplog AWS DMS requires access to MongoDB oplog to enable ongoing replication IBM Db2 LUW Either one or bo th of the database configuration parameters LOGARCHMETH1 and LOGARCHMETH2 should be set to ON For additional information including prerequisites and security configurations for each source platform refer to the appropriate link in the Sources for Data Migration for AWS Database Migration Service section of the AWS Database Migration Service User Guide The basic setup of ongoing data replication is done in the Task configuration pane Table 3 describes the migration type options Table 3: Migration type options Migration type Description Migrate existing data Perform a one time migration from the source endpoint to the target endpoint Migrate existing data and replicate ongoing changes Perform a onetime migration from the source to the target and then continue replicating data changes from the source to the target Replicate data changes only Don't perform a one time migration but continue to replicate data changes from the source to the targe t Additional configurations for the data migration task are available in the Task settings pane (Figure 8 and Table 4) Amazon Web Services Migrating Applications Running Relational Databases to AWS 19 Figure 8: Data migration task settings Table 4: Task setting options Setting Description Target table preparation mode Do nothing If the tables already exist at the target they remain unaffected Otherwise AWS DMS creates new tables Drop tables on target AWS DMS drops the tables and creates new tables in their place Truncate AWS DMS leaves the tables and their metadata in place but removes the data from them Include LOB columns in replication Don’t include LOB columns AWS DMS ignores columns or fields that contain large objects (LOBs) Full LOB mode AWS DMS includes the complete LOB Limited LOB mode AWS DMS truncates each LOB to the size defined by Max LOB size (Limited LOB mode is faster than full LOB mode) Enable CloudWatch logs (check box) AWS DMS publishes detailed task information to CloudWatch Logs 1 2 3 7 8 Amazon Web Services Migrating Applications Running Relational Databases to AWS 20 Step 7: De ployment to AWS and Go Live Test the data migration of the production database to ensure that all data can be successfully migrated during the allocated cutover window Monitor the source and target databases to ensure that the initial data load is complet ed cached transactions are applied and data has reached a steady state before cutover You can also use the Enable Validation option available in the Task settings pane of AWS DMS ( Figure 8) If you select t his option AWS DMS validate s the data migration by comparing the data in the source and the target databases Design a simple rollback plan for the unlikely event that an unrecoverable error occurs during the Go Live window The AWS SCT and AWS DMS work together to preserve the original source database and application so the rollback plan will mainly consist of scripts to 
point connection strings back to the original source database Post Deployment Monitoring AWS DMS monitors the number of rows inserted deleted and updated as well as the number of DDL statements issued per table while a task is running You can view these statistics for the selected task on the Table Statistics pane of you r migration task In the list of migration ta sks in AWS DMS choose your Database migration task (Figure 9) Figure 9: List of database migration tasks On the detail page scroll to the Table Statistics pane (Figure 10) You can monitor the number of rows inserted deleted and updated as well as the number of DDL statements issued per table while a task is running Amazon Web Services Migrating Applications Running Relational Databases to AWS 21 Figure 10: Table statistics monitoring The most relevant metrics can be viewed for the selected task on the Migration task metrics pane (Figure 11) Figure 11: Relevant metrics for a task Additional metrics are available from the Amazon CloudWatch Logs dashboard accessible from the link on the Overview details pane or by navigating in the AWS Management Console to Services choosing CloudWatch and then choosing DMS If logging is enabled for the task review the Amazon CloudWatch Logs for any errors or warnings You can enable logging for a task during task creation by selecting Enable CloudWatch Logs in Task Settings (Figure 8) Amazon Web Services Migrating Applications Running Relational Databases to AWS 22 Best Practices This section presents best practices for each of the seven major steps of migrating applications to AWS Schema Conversion Best Practices • Save the Database Migration Assessment Report After running the initial database migration assessment report save it as a CSV and a PDF As conversion action items are completed they may no longer appear in the database migration assessment report if it is regenerated Saving the initial assessment report can serve as a valuable project management tool such as providing a history of conversion tasks and tracking the percentage of tasks completed The CSV version is helpful because it can be i mported into Excel for ease ofuse such as the ability to search filter and sort conversion tasks • For most conversions apply DDL to the target database in the following order to avoid dependency errors: a Sequences b Tables c Views d Procedures Functions should be applied to the target database in order of dependency For example a function might be referenced in a table column; therefore the function must be applied before the table to avoid a dependency error Another function might reference a table; therefore the table must be created first • Configure the AWS SCT with the memory performance settings you need Increasing memory speeds up the performance of your conversion but uses more memory resources on your desktop On a desktop with limi ted memory you can configure AWS SCT to use less memory resulting in a slower conversion You can change these settings by choosing Settings Global Settings and then Performance and Memory Amazon Web Services Migrating Applications Running Relational Databases to AWS 23 • Apply the additional schema that AWS SCT creates to the target database For most conversion projects AWS SCT create s an additional schema in the target database named aw_[source platform]_ext This schema contain s SQL objects to emulate features and functionality that are present in the source platform but not in the target platform For example when converting from Microsoft SQL Server to 
PostgreSQL the aws_sqlserver_ext schema contains sequence definitions to r eplace SQL Server identity columns Don’t forget to apply this additional schema to the target database as it will not have a direct mapping to a source object • Use source code version control to track changes to target objects (both database and applicat ion code) If you find bugs or data differences during testing or deployment the history of changes is useful for debugging Application Code Conversion Best Practices • After running the initial application assessment report save it as a CSV and a PDF As conversion tasks are completed they no longer appear in the application assessment report if it is regenerated The initial application assessment report serve s as a history of tasks completed throughout the entire application conversion effort The CSV file is also helpful because it can be imported into Excel for ease ofuse such as the ability to search filter and sort conversion tasks Data Migration Best Practices • Choose a replication instance class large enough to support your database size and transactional load By default AWS DMS loads eight tables at a time On a large replication server such as a dmsc4xlarge or larger instance you can improve performance by increasing the number of tables to load in parallel On a smaller replication se rver reduce the number of tables to load in parallel for improved performance • On the target database disable what isn’t needed Disable unnecessary triggers validation foreign keys and secondary indexes on the target databases if possible Disable u nnecessary jobs backups and logging on the target databases Amazon Web Services Migrating Applications Running Relational Databases to AWS 24 • Tables in the source database that do not participate in common transactions can be allocated to different tasks This allows multiple tasks to synchronize data for a single database migration thereby improving performance in some instances • Monitor performance of the source system to ensure it is able to handle the load of the database migration tasks Reducing the number of tasks and/or tables per task can reduce the load on the source system Using a synchronized replica mirror or other read only copy of the source database can also help reduce the load on the source system • Enable logging using Amazon CloudWatch Logs Troubleshooting AWS DMS errors without the full logging captured in Clou dWatch Logs can be difficult and time consuming (if not impossible) • If your source data contains Binary Large Objects (BLOBs) such as an image XML or other binary data loading of these objects can be optimized using Task Settings For more information see Task Settings for AWS Database Migration Service Tasks in the AWS Database Migration Service User Guide Data Replication Best Practices • Achieve best performance by not applying indexes or foreign keys to the target database during the initial load The initial load of existing data comprises inserts into the target database Therefore you can get the best performance during the initial load if the target databas e does not have indexes or foreign keys applied However after the initial load when cached data changes are applied indexes can be useful for locating rows to update or delete • Apply indexes and foreign keys to the target database before the applicati on is ready to go live • For ongoing replication (such as for high availability) enable the Multi AZ option on the replication instance The Multi AZ option provides high availability and 
• Use the AWS API or AWS Command Line Interface (AWS CLI) for more advanced AWS DMS task settings. The AWS API and AWS CLI offer more granular control over data replication tasks and expose additional settings not currently available in the AWS Management Console.

• Disable backups on the target database during the full load for better performance. Enable them during cutover.

• Wait until cutover to make your target RDS instance Multi-AZ for better performance.

Testing Best Practices

• Have a test environment where full regression tests of the original application can be conducted. The tests completed before conversion should work the same way for the converted database.

• In the absence of automated testing, run "smoke" tests on the old and new applications, comparing data values and UI functionality to ensure like behavior.

• Apply standard practices for database-driven software testing regardless of the migration process. The converted application must be fully retested.

• Have sample test data that is used only for testing.

• Know your data logic and apply it to your test plans. If you don't have correct test data, the tests might fail or not cover mission-critical application functionality.

• Test using a dataset similar in size to the production dataset to expose performance bottlenecks such as missing or non-performant indexes.

Deployment and Go Live Best Practices

• Have a rollback plan in place should anything go wrong during the live migration. Because the original database and application code are still in place and not touched by AWS SCT or AWS DMS, this should be fairly straightforward.

• Test the deployment on a staging or pre-production environment to ensure that all needed objects, libraries, code, and so on are included in the deployment and created in the correct order of dependency (for example, a sequence is created before the table that uses it).

• Verify that AWS DMS has reached a steady state and all existing data has been replicated to the new server before cutting off access to the old application in preparation for the cutover.

• Verify that database maintenance jobs are in place, such as backups and index maintenance.

• Turn on Multi-AZ if required (see the sketch after this list).

• Verify that monitoring is in place.

• AWS provides several services to make deployments easier and trouble-free, such as AWS CloudFormation, AWS OpsWorks, and AWS CodeDeploy. These services are especially helpful for deploying and managing stacks involving multiple AWS resources that must interact with each other, such as databases, web servers, load balancers, IP addresses, VPCs, and so on. They enable you to create reusable templates to ensure that environments are identical. For example, when setting up the first development environment, you may complete some tasks manually via the AWS Management Console, AWS CLI, PowerShell, and so on. Instead of tracking those items manually to ensure they are recreated in the staging environment, resources in the running development environment can be included in a template, and the template can then be used to set up the staging and production environments.
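As a minimal sketch of the cutover-related settings above, the following AWS CLI commands show one way to re-enable backups and turn on Multi-AZ on the target Amazon RDS instance at cutover time. The instance identifier and retention period are placeholders; adjust them to your environment and, where possible, make the change during a maintenance window.

# Re-enable automated backups (retention may have been set to 0 during the full load).
aws rds modify-db-instance \
    --db-instance-identifier my-target-db \
    --backup-retention-period 7 \
    --apply-immediately

# Convert the target instance to Multi-AZ once replication has caught up.
aws rds modify-db-instance \
    --db-instance-identifier my-target-db \
    --multi-az \
    --apply-immediately

Both operations can briefly affect performance while they are applied, which is why the best practices above defer them until cutover.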
Post Deployment Monitoring Best Practices

• Create Amazon CloudWatch alarms and notifications to monitor for unusual database activity and to alert production staff if the AWS instance is not performing well. High CPU utilization, disk latency, and high RAM usage can be indicators of missing indexes or other performance bottlenecks.

• Monitor logs and exception reports for unusual activity and errors.

• Determine whether there are additional platform-specific metrics to capture and monitor, such as capturing locks from the pg_locks catalog table on the Amazon Redshift platform. Amazon Redshift also allows viewing running queries from the AWS Management Console.

• Monitor instance health. Amazon CloudWatch provides more metrics for an RDS instance than for an EC2 instance, and these may be sufficient for monitoring instance health. For an EC2 instance, consider installing a third-party monitoring tool to provide additional metrics.

Conclusion

The AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) make the process of moving applications to the cloud much easier and faster than manual conversion alone. Together they save many hours of development during the migration effort, enabling you to reap the benefits of AWS more quickly.

Document Revisions

March 9, 2021: Reviewed for technical accuracy
November 2019: Updated to reflect latest features and functionality
December 2016: First publication
|
General
|
consultant
|
Best Practices
|
Migrating_AWS_Resources_to_a_New_Region
|
Migrating AWS Resources to a New AWS Region

July 2017

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services Inc or its affiliates. All rights reserved.

Contents

Abstract
Introduction
Scope of AWS Resources
AWS IAM and Security Considerations
Migrating Compute Resources
Migrating Amazon EC2 Instances
Considerations for Reserved Instances
Migrating Networking and Content Delivery Network Resources
Migrating Amazon Virtual Private Cloud
Migrating AWS Direct Connect Links
Using Amazon Route 53 to Aid the Migration Process
Migrating Amazon CloudFront Distributions
Migrating Storage Resources
Migrating Amazon S3 Buckets
Migrating Amazon S3 Glacier Storage
Migrating Amazon Elastic File System
Migrating AWS Storage Gateway
Migrating Database Resources
Migrating Amazon RDS Services
Migrating Amazon DynamoDB
Migrating Amazon SimpleDB
Migrating Amazon ElastiCache
Migrating Amazon Redshift
Migrating Analytics Resources
Migrating Amazon Athena
Migrating Amazon EMR
Migrating Amazon Elasticsearch Service
Migrating Application Services and Messaging Resources
Migrating Amazon SQS
Migrating Amazon SNS Topics
Migrating Amazon API Gateway
Migrating Deployment and Management Resources
Migrating with AWS CloudFormation
Capturing Environments by Using CloudFormer
API Implications
Updating Customer Scripts and Programs
Important Considerations
Conclusions
Contributors
Document Revisions

Abstract

This document is intended for experienced customers of Amazon Web Services who want to migrate existing resources to a new AWS Region. You might want to migrate for a variety of reasons. In particular, if a new region becomes available that is closer to your user base, you might want to locate various services geographically closer to those users. This document is not intended to be a step-by-step or definitive guide. Rather, it provides a variety of options and methods for migrating various services that you might require in a new region.

Introduction

Amazon Web Services (AWS) provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in 190 countries around the world. For many AWS services, you can choose the region from which you want to deliver those services. Each region has multiple Availability Zones. By using separate Availability Zones, you can protect your applications from the failure of a single location. By using separate AWS Regions, you can design your application to be closer to your customers and achieve lower latency and higher throughput. AWS has designed the Regions to be isolated from each other so that you can achieve greater fault tolerance and improved stability in your applications.
Scope of AWS Resources

While most AWS services operate within a region, the following services operate across all regions and require no migration:

• AWS Identity and Access Management (AWS IAM)
• AWS Management Console
• Amazon CloudWatch

Further, because all services are accessible using API endpoints, you do not necessarily need to migrate all components of your architecture to the new region, depending on your application. For example, you can migrate Amazon Elastic Compute Cloud (Amazon EC2) instances but retain existing Amazon Simple Storage Service (Amazon S3) and Amazon CloudFront configurations. When planning a migration to a new region, we recommend that you check which AWS products and services are available in that region. An updated list of AWS product and service offerings by region is available here.1

AWS IAM and Security Considerations

AWS IAM enables you to securely control access to AWS services and resources for your users. IAM users are created and managed within the scope of an AWS account rather than a particular region, so no migration of users or groups is required.

When migrating to a new region, it is important to note any defined policy restrictions on IAM users. For example, Amazon Resource Names (ARNs) might restrict you to a specific region. For more information, see IAM Identifiers in the AWS Identity and Access Management User Guide.2

IAM is a core security service that enables you to add specific policies to control user access to AWS resources. Some policies can affect:

• Time-of-day access (which can require consideration due to time zone differences)
• Use of new originating IP addresses
• Whether you need to use SSL connections
• How users are authenticated
• Whether you can use multi-factor authentication (MFA) devices

Because IAM underpins security, we recommend that you carefully review your security configuration, policies, procedures, and practices before a region migration.

Migrating Compute Resources

This section covers the migration of compute services such as Amazon EC2 and other closely associated services for security, storage, load balancing, and Auto Scaling.

Migrating Amazon EC2 Instances

Amazon EC2 is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. Migrating an instance involves copying the data and images, ensuring that the security groups and SSH keys are present, and then restarting fresh instances.

SSH Keys

AWS does not keep any of your SSH private keys after they are generated; only the public keys are made available to EC2 instances when they are running. (Under Linux operating systems, these are normally copied into the relevant user's ~/.ssh/authorized_keys file.)

Figure 1: Key pairs in the AWS Management Console

You can retrieve a fingerprint of each key from the application programming interface (API), software development kit (SDK), command line interface (CLI), or the AWS Management Console. SSH public keys are stored per region only. AWS does not copy or synchronize your configured SSH keys between regions; it is up to you to determine whether you will use separate SSH keys per region or the same SSH keys in several regions.

Note: You can log in to an existing Linux instance in the source region, obtain a copy of the public key (from ~/.ssh/authorized_keys), and import this public key into the target region.
It is important to know that Auto Scaling launch configurations and AWS CloudFormation templates might refer to SSH keys by the key pair name. In these cases, you must take care to either update any Auto Scaling launch configuration or AWS CloudFormation template to use keys that are available in the new region, or deploy the public key with the same key pair name to the new region. For more information, see AWS Security Credentials in the AWS General Reference.3

Security Groups

Security groups in Amazon EC2 restrict ingress traffic (or, in the case of a virtual private cloud (VPC), ingress and egress traffic) to a group of EC2 instances. Each rule in a security group can refer to the source (or, in a VPC, the destination) either by a CIDR-notation IPv4 address range (a.b.c.d/x) or by a security group identifier (sg-xxxxxxxx).

Figure 2: Security group configuration in the AWS Management Console

Each security group exists within the scope of only one region. The same name can exist in multiple regions but have different definitions of what traffic is permitted to pass. Every instance being launched must be a member of a security group. If a host is being started as part of an Auto Scaling launch configuration or an AWS CloudFormation template, the required security group must already exist (AWS CloudFormation templates often define the security group to be created as part of the template). It is vital that you review your configured security groups to ensure that the required level of network access restriction is in place. To export a copy of the definitions of existing security groups (using the legacy command line tools), run the following command:

ec2-describe-group -H --region <source-region-name> > security_groups.txt

For more information, see Security Groups in the Amazon EC2 User Guide.4

Amazon Machine Images

An Amazon Machine Image (AMI) is a special type of preconfigured operating system image used to create a virtual machine (an EC2 instance) within the Amazon EC2 environment. Each AMI is assigned an identifier of the form "ami-XXXXXXXX", where "X" is a hexadecimal value (0-9, A-F).

Figure 3: AMIs in the AWS Management Console

Each AMI is unique per region; AMIs do not span multiple regions. However, the same content of an AMI can be available in other regions (for example, Amazon Linux 2016.09 or Windows Server 2012 R2). Each region has its own unique AMI ID for its copy of this data. You can create your own AMIs from running instances and use these as a starting point for launching additional instances. These user-created AMIs are assigned a unique AMI ID within the region.

AMI IDs are used within Auto Scaling launch configurations and AWS CloudFormation templates. If you plan to use Auto Scaling or AWS CloudFormation, you need to update the AMI ID references to match the ones that exist in the target region.

Migration of AMIs across regions is supported using the EC2 AMI Copy function.5 AMI Copy enables you to copy an AMI to as many regions as you want from the AWS Management Console, the Amazon EC2 CLI, or the Amazon EC2 API. AMI Copy is available for AMIs backed by Amazon Elastic Block Store (Amazon EBS) as well as instance store-backed AMIs, and is operating system agnostic. Each copy of an AMI results in a new AMI with its own unique AMI ID. Any changes made to the source AMI during or after a copy are not propagated to the new AMI; you must recopy the AMI to the target regions to pick up changes made to the source AMI.

Note: Permissions and user-defined tags applied to the source AMI are not copied to the new AMIs as part of the AMI copy process. After the copy is complete, you can apply any permissions and user-defined tags to the new AMIs.
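The AMI Copy operation described above can also be scripted. The following is a minimal AWS CLI sketch, assuming a source AMI in us-east-1 being copied to eu-west-1; the AMI IDs, regions, and name shown are placeholders for your own values:

# Copy the AMI into the target region; the command is run against the target region.
aws ec2 copy-image \
    --region eu-west-1 \
    --source-region us-east-1 \
    --source-image-id ami-0123456789abcdef0 \
    --name "my-app-server-v1"

# The call returns the new AMI ID in the target region; wait until it is available.
aws ec2 wait image-available --region eu-west-1 --image-ids ami-0fedcba9876543210

Record the new AMI ID and update any Auto Scaling launch configurations or AWS CloudFormation mappings that still reference the source-region AMI ID.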
Amazon EBS Volumes

Amazon EBS provides block storage volumes that can be presented to an EC2 instance. You can format an EBS volume with a specific file system type such as NTFS, ext4, or XFS. EBS volumes can contain the operating system boot volume or be used as an additional data drive (Windows) or mount point (Linux).

You can migrate EBS volumes using the cross-region EBS snapshot copy capability.6 This enables you to copy snapshots of EBS volumes between regions using the AWS Management Console, an API call, or the command line. EBS Snapshot Copy offers the following key capabilities:

• The AWS Management Console shows you the progress of a snapshot copy in progress, so you can check the percentage completed.
• You can initiate multiple EBS Snapshot Copy commands simultaneously, either by selecting and copying multiple snapshots to the same region or by copying a snapshot to multiple regions in parallel. The in-progress copies do not affect the performance of the associated EBS volumes.
• The console-based interface is push-based: you log in to the source region and tell the console where you'd like the snapshot to end up. The API and the command line are, by contrast, pull-based: you must run them within the target region.

The entire process takes place without the need to use external tools or perform any additional configuration. Here is a high-level overview of the migration process (see the command line sketch at the end of this section):

1. Identify the relevant EBS volumes to migrate (you can use tagging to assist in identification).
2. Identify which volumes can be copied with the application running and which require you to pause or shut down the application. EBS Snapshot Copy accesses a snapshot of the primary volume rather than the volume itself, so you might need to shut down the application during the copy process to ensure the latest data is copied across.
3. Create the necessary EBS snapshots and wait for their status to be "Completed".
4. Initiate the EBS Snapshot Copy feature using the AWS Management Console, API, or CLI.
5. Create EBS volumes in the target region by selecting the relevant snapshots and using the "create volume from snapshot" functionality.

Volumes and Snapshots

Amazon EBS volumes can currently be from 1 GB to 16 TB in size (in 1 GB increments). They can be used with disk management tools such as Logical Volume Manager (LVM) or Windows Disk Manager to span or stripe across multiple block devices, and you can stripe multiple EBS volumes together to deliver higher-performance storage volumes to applications. Volumes that are in constant use might benefit from having a snapshot taken, especially if there are multiple volumes being used in a RAID set, a stripe, or an LVM volume group.

Provisioned IOPS volumes are another way to increase EBS performance. These volumes are designed to deliver predictable, high performance for I/O-intensive workloads such as databases. To enable EC2 instances to fully use the IOPS provisioned on an EBS volume, you can launch selected EC2 instance types as EBS-optimized instances. Before a region migration, we recommend that you check that these instance types are supported in the Availability Zones of the target region. For more information about getting optimal performance from your EBS volumes, see Amazon EBS Performance Tips in the Amazon EC2 User Guide.7
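The snapshot copy steps above can also be performed from the AWS CLI. The following is a minimal sketch, assuming a completed snapshot in us-east-1 being copied to eu-west-1; the snapshot IDs, regions, and Availability Zone are placeholders:

# Run against the target region; the CLI copy is pull-based.
aws ec2 copy-snapshot \
    --region eu-west-1 \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --description "Data volume for region migration"

# After the copy completes, create a volume from the new snapshot in the target region.
aws ec2 create-volume \
    --region eu-west-1 \
    --availability-zone eu-west-1a \
    --snapshot-id snap-0fedcba9876543210

You can poll the copy with aws ec2 describe-snapshots until its state is "completed" before creating volumes from it.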
Elastic IP Addresses

Elastic IP addresses are assigned to an account from the pool of addresses for a given region. As such, an Elastic IP address cannot be migrated between regions. We recommend that you update the time-to-live (TTL) value on the Domain Name System (DNS) record that points to the Elastic IP address, reducing it to an amount that is a tolerable delay for DNS cache expiry, such as 300 seconds (five minutes) or less. Any decrease in DNS TTL can result in an increase in DNS requests, increased load on your current DNS service, and higher charges from your DNS service provider. You can make DNS changes more gracefully by taking a staged approach to TTL modifications. For example:

• The current TTL for www.example.com (which points to an Elastic IP address) is 86,400 seconds (one day).
• Modify the TTL for www.example.com to 300 seconds (five minutes) and schedule the cutover work for two days' time.
• Monitor the increase in DNS traffic during this period.
• At the start of the day of the scheduled work, reduce the TTL for www.example.com further. Later, optionally reduce the TTL even more, depending on the load on your DNS infrastructure (possibly 10 seconds).
• Ten minutes after the last change, update the A record to point to a new Elastic IP address in the new region.
• After a short period, confirm that traffic is being adequately serviced, and then increase the TTL back to five minutes (300 seconds).
• After another period of operation, return the TTL to its normal level.

Elastic Load Balancing

Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple EC2 instances. You cannot migrate a load balancer to a new region. Instead, you must launch a new load balancer in the target region that contains a new set of EC2 instances spanning the Availability Zones you want within the service group.

Before a region migration, we recommend that you review the source and target Availability Zones to confirm that matching numbers of zones exist. In scenarios where you discover extra Availability Zones, you might need to revise application load balancing and scalability, which could lead to further assessment of the CloudWatch alarms and thresholds used for Auto Scaling group configuration. Furthermore, you must add the SSL certificates associated with the old load balancer to the new load balancer, and add health check conditions to verify EC2 instance health.

Launch Configurations and Auto Scaling Groups

Auto Scaling allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define.8 You can view the current Auto Scaling group and launch configuration definitions from the AWS Management Console. Alternatively, you can use the following (legacy Auto Scaling CLI) commands to capture this information:

as-describe-auto-scaling-groups -H --region <source-region-name> > autoscale_groups.txt
as-describe-launch-configs -H --region <source-region-name> > launch_configs.txt

These extracted Auto Scaling group and launch configuration settings reference AMIs, security groups, and SSH key pairs as they exist in the source region. See the earlier sections on migrating those resources to the target region. Then create new Auto Scaling groups and launch configurations in the target region using the new AMI IDs and security groups (see the sketch that follows). For more information on Auto Scaling groups and launch configurations, see Getting Started with Auto Scaling in the Auto Scaling User Guide.9
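The following is a minimal AWS CLI sketch of recreating a launch configuration and Auto Scaling group in the target region, assuming you have already copied the AMI and recreated the security group and key pair there; all names, IDs, and sizes shown are placeholders:

# Create a launch configuration that references target-region resources.
aws autoscaling create-launch-configuration \
    --region eu-west-1 \
    --launch-configuration-name web-lc-migrated \
    --image-id ami-0fedcba9876543210 \
    --instance-type m4.large \
    --key-name my-target-region-key \
    --security-groups sg-0a1b2c3d4e5f67890

# Create the Auto Scaling group across the target region's Availability Zones.
aws autoscaling create-auto-scaling-group \
    --region eu-west-1 \
    --auto-scaling-group-name web-asg-migrated \
    --launch-configuration-name web-lc-migrated \
    --min-size 2 --max-size 6 --desired-capacity 2 \
    --availability-zones eu-west-1a eu-west-1b

Scaling policies and CloudWatch alarms from the source region are not carried over; recreate them separately against the new group.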
Considerations for Reserved Instances

Many customers take advantage of the greatly reduced pricing of Reserved Instances for Amazon EC2, Amazon Redshift, Amazon Relational Database Service (Amazon RDS), and Amazon EMR. Amazon EC2 Standard Reserved Instances (or reserved cache nodes) are assigned to a specific instance type in a specific region for a period of one or three years, while Amazon EC2 Convertible Reserved Instances give you the flexibility to change the instance type. Reserved Instances are available in three payment options: All Upfront, Partial Upfront, and No Upfront. The upfront cost and per-hour charges vary between these options, as well as between different geographic regions. When you buy Reserved Instances, the larger the upfront payment, the greater the discount. To maximize your savings, you can pay all upfront and receive the largest discount. Partial Upfront Reserved Instances offer lower discounts but give you the option to spend less upfront. You can also choose to spend nothing upfront and receive a smaller discount, freeing up capital to spend on other projects.

If you purchased Reserved Instances for EC2 and want to migrate them to a different region, we recommend that you first sell them in the Reserved Instance Marketplace. As soon as they are sold, the billing switches to the new buyer and you are no longer billed for the Reserved Instances; the buyer then pays for the remainder of the term. To get savings over On-Demand Instances, you can either buy Reserved Instances for a shorter term in the target region from the Reserved Instance Marketplace or purchase directly from AWS. The Reserved Instance Marketplace makes it easy to "migrate" your billing to a new region. For more detailed information about how to buy and sell Reserved Instances, see Buying in the Reserved Instance Marketplace10 and Amazon EC2 Reserved Instance Marketplace.11 We recommend that you carefully assess the cost implications of the purchase and sale of Reserved Instances before undertaking a migration to a new region.

Migrating Networking and Content Delivery Network Resources

This section covers the migration of network resources such as subnets, route tables, virtual private networks, access control lists, and the Domain Name System.

Migrating Amazon Virtual Private Cloud

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a private, isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. When you create a VPC, it exists within a region and spans all the Availability Zones in that region. You cannot move or migrate it to a new region. However, you can create a new VPC in a target region and potentially use the same IP address ranges that the existing VPC uses. You can list all VPCs using the following command:

aws ec2 describe-vpcs

A VPC consists of multiple components, which you need to recreate in the target region:

• Subnets. You must recreate the same subnets in the target VPC. You can list all subnets in a VPC using the following command:

aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-abcd1234"

• DHCP option set. If you have a customized DHCP option set, you must recreate it in the target region. You can get details of the DHCP option set using the following command:

aws ec2 describe-dhcp-options

• Internet gateways. You must recreate internet gateways in the target region. You can do this using the following commands (substitute the gateway ID returned by the create call):

aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-abcd1234 --vpc-id vpc-abcd1234

The internet gateway resource ID returned by the create command is also used in the route tables.

• NAT gateways. You must recreate NAT gateways in the VPC of the target region.
You can list all the NAT gateways using the following command:

aws ec2 describe-nat-gateways

• Route tables. You must recreate the route tables in the target region. You can list all route tables for a VPC using the following command:

aws ec2 describe-route-tables --filters "Name=vpc-id,Values=vpc-abcd1234"

Note: Because the resource IDs for gateways (internet gateway, NAT gateway, and so on) change in the target region, be sure you use the new resource IDs.

• Security groups. You must recreate the security groups in the target region. You can list all security groups using the following command:

aws ec2 describe-security-groups

• Network access control lists (ACLs). You must recreate the network ACLs if you have made changes to them. You can list the network ACLs for a VPC using the following command:

aws ec2 describe-network-acls --filters "Name=vpc-id,Values=vpc-0f7ec66a"

• Customer gateways. You must recreate the customer gateways in the target region. You can list all the customer gateways using the following command:

aws ec2 describe-customer-gateways

• Virtual private gateways. You must recreate the virtual private gateway in the target region. You can list all the virtual private gateways using the following command:

aws ec2 describe-vpn-gateways

• VPN. Details about creating a VPN connection in the target region can be found here.12

Migrating AWS Direct Connect Links

AWS Direct Connect is a service that links physical infrastructure to AWS services. One or more fiber connections are provisioned in a Direct Connect location facility. If you want to provision new links in a new region, you must request a new Direct Connect service and provision any tail circuits to your infrastructure. Charges for Direct Connect vary per geographic location. You can terminate existing connections at any time when they're no longer required. AWS has relationships with several different peering partners in each geographic region. You can find an updated list of AWS Direct Connect Partners that can assist with service provisioning at http://aws.amazon.com/directconnect/partners/

Using Amazon Route 53 to Aid the Migration Process

Amazon Route 53 is a highly available DNS service that is available from all AWS Regions and edge locations worldwide. DNS can be very effective when managing a migration because it can help you gracefully migrate traffic from one location to another, either in a single cutover or gradually. By adding new DNS records for the copy of the application in the destination region, you can test access to the application and choose when to cut over to the new site or region.

One approach is to use weighted resource record sets. This functionality enables you to determine what percentage of traffic to route to each particular address when using the same DNS name. For example, use the following configuration to route all traffic to the existing region and none to the new region:

www.mysite.com CNAME elbname.sourceregion.com 100
www.mysite.com CNAME elbname.destinationregion.com 0

When it is time to perform the migration, the weighting on these records is flipped as follows:

www.mysite.com CNAME elbname.sourceregion.com 0
www.mysite.com CNAME elbname.destinationregion.com 100

This causes all new DNS requests to resolve to the new region.

Note: Some clients might continue to use the old address if they have cached their DNS resolution, if a long TTL exists, or if a TTL update has not been honored.

Figure 4: Using Amazon Route 53 to facilitate region migration
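If you manage these records with the AWS CLI, the weight flip above can be scripted. The following is a minimal sketch, assuming a hypothetical hosted zone ID of Z1EXAMPLE and the same hypothetical CNAME targets used above:

# weights.json: shift 100% of traffic to the destination region.
# {
#   "Changes": [
#     {"Action": "UPSERT", "ResourceRecordSet": {
#        "Name": "www.mysite.com", "Type": "CNAME", "SetIdentifier": "source-region",
#        "Weight": 0, "TTL": 60,
#        "ResourceRecords": [{"Value": "elbname.sourceregion.com"}]}},
#     {"Action": "UPSERT", "ResourceRecordSet": {
#        "Name": "www.mysite.com", "Type": "CNAME", "SetIdentifier": "destination-region",
#        "Weight": 100, "TTL": 60,
#        "ResourceRecords": [{"Value": "elbname.destinationregion.com"}]}}
#   ]
# }

aws route53 change-resource-record-sets \
    --hosted-zone-id Z1EXAMPLE \
    --change-batch file://weights.json

Weighted routing requires a distinct SetIdentifier for each record in the set; intermediate weights (for example 90/10, then 50/50) can be used for a gradual shift, as described next.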
It is also possible to perform a gradual cutover using varied weightings, as long as the application supports a dual-region operational model. For more information, see Working with Resource Record Sets in the Amazon Route 53 User Guide.13

Migrating Amazon CloudFront Distributions

Amazon CloudFront is a content delivery service that operates from the numerous AWS edge locations worldwide. CloudFront delivers customer data in configuration sets known as distributions. Each distribution has one configured origin but can have more, as in the case of cache behaviors. Each origin can be an S3 bucket or a web server, including web servers running within EC2 (in any AWS Region worldwide).

To update an origin in the CloudFront console:

1. Move your origin server or S3 bucket to the new region by referring to the relevant section of this document for EC2 instances or S3 buckets.
2. In the CloudFront console, select the distribution and then choose Distribution Settings.
3. On the Origins tab, choose the origin to edit, and then choose Edit.
4. Update Origin Domain Name with the new server or bucket name.
5. Choose Yes, Edit.

For more information, see Listing, Viewing, and Updating CloudFront Distributions in the Amazon CloudFront Developer Guide.14

Migrating Storage Resources

This section covers the migration of services used for object storage, file storage, and archiving.

Migrating Amazon S3 Buckets

Amazon S3 provides a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. When you create an S3 bucket, it resides physically within a single AWS Region. Network latency can affect access when the bucket is accessed from another, remote region, so pay careful attention to any references to S3 buckets and their geographic distribution.

To migrate an S3 bucket, you need to create a new S3 bucket in the target region and copy the data to it. The new bucket requires a universally unique name and cannot have the same name as the bucket in the source region. If your goal is to preserve the bucket name when copying the bucket between accounts, you need to perform an intermediate step: copy the data to an intermediary bucket first, and then delete the source bucket. After the source bucket is deleted, you must wait about 24 hours until the name becomes available again. Then create the bucket with the old name in the new account and transfer the data using the same method as before. For more information about Amazon S3 bucket naming rules, see Bucket Restrictions and Limitations in the Amazon S3 User Guide.15

Virtual Hosting with Amazon S3 Buckets

You might be hosting websites through the static website hosting feature of Amazon S3. For more information, see Hosting a Static Website on Amazon S3 in the Amazon S3 User Guide.16 For simplicity and user friendliness, customers often use a DNS CNAME alias for their hosted web content, mapping a URL such as http://bucketname.s3.amazonaws.com to http://mybucketname.com. Through a CNAME alias, the specific Amazon S3 URL endpoint is abstracted from the web browser. For more information, see Virtual Hosting of Buckets in the Amazon S3 User Guide.17

When you migrate an S3 bucket that was previously used as a static website to a new AWS Region, you need to preserve the bucket name when copying the bucket between regions. First copy the data to an intermediary bucket, and then delete the source bucket.
After the source bucket is deleted, it might take some time before you can reuse that name to create a new bucket in the destination region. After the bucket name becomes available, create the bucket in the new region with the old name and then transfer the data using the same method described previously. For more information, see How can I migrate my Amazon S3 bucket to another AWS Region?18

Moving Objects Using the AWS Management Console

The AWS Management Console gives you the ability to copy or move multiple objects between S3 buckets. By manually selecting one or more objects and selecting Cut or Copy from the pop-up menu, you can paste or move these items into a target S3 bucket in another geographic region.

Figure 5: Copy an Amazon S3 object using the AWS Management Console

Copying or Moving Objects Using Third-Party Tools

To copy or move Amazon S3 objects between buckets, you can use a variety of third-party tools. You can look for AWS Partner products by searching for "Storage ISV" using the AWS Partner Solutions Finder.19

Copying Using the Amazon API and SDK

You can copy or move Amazon S3 objects between buckets programmatically through the Amazon SDKs and APIs. For more information about Amazon S3 object-level operations, see Operations on Objects in the Amazon S3 API Reference.20 To speed up the object copying process, you can use the PUT Object - Copy operation, which performs a GET and then a PUT in a single API operation to copy an object to a destination bucket. For more information, see PUT Object - Copy in the Amazon S3 API Reference.21

You can also use the S3DistCp tool with Amazon EMR to efficiently copy large amounts of data from an S3 bucket in the source region to an S3 bucket in the target region. We recommend this for large buckets because it can significantly decrease the overall migration time. S3DistCp is an extension of the open source tool DistCp that is optimized to work with AWS, particularly Amazon S3. S3DistCp uses an EMR cluster to transfer the data, so you will incur additional charges for the EMR cluster. You can reduce the cost of running the EMR cluster by using Spot Instances, as outlined here.22 Find more details on S3DistCp here.23

You can also use the Amazon S3 cross-region replication feature to replicate data across AWS Regions. With cross-region replication, every object uploaded to the S3 bucket in the source region is automatically replicated to a bucket in the target region. Data that exists in the S3 bucket before you enable cross-region replication is not replicated. For migrating existing data, you can write a script to update the underlying metadata or ACLs on the objects in the source bucket, which triggers replication to the destination bucket. You can find more details on using cross-region replication here.24
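For small to medium buckets, a simple alternative to the options above is the AWS CLI's sync command, which copies objects directly between buckets in different regions. The following is a minimal sketch; the bucket names and regions are placeholders, and large buckets are better served by S3DistCp or cross-region replication as described above:

# Copy all objects from the source bucket to the destination bucket.
aws s3 sync s3://my-source-bucket s3://my-destination-bucket \
    --source-region us-east-1 \
    --region eu-west-1

# Re-run the same command before cutover to pick up objects added since the first pass.

Because sync only copies objects that are missing or changed in the destination, it can be run repeatedly to narrow the delta before the final cutover.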
Migrating Amazon S3 Glacier Storage

Amazon S3 Glacier is the AWS deep archive storage service. It is designed to handle large volumes of data that are infrequently accessed. With Amazon S3 Glacier, you have multiple retrieval options depending on the urgency of the requirement: Expedited, Standard, and Bulk retrieval. You can find details on pricing options here.25 For Standard retrieval, Amazon S3 Glacier offers free retrieval of up to 10 GB per month; retrieving more than this amount of data incurs additional charges. The process used to retrieve data from Amazon S3 Glacier depends on the way the data was archived, as follows.

If an Amazon S3 lifecycle policy was used to transition data from Amazon S3 to Amazon S3 Glacier: Even though the storage class of these objects is GLACIER, you can access them only via the Amazon S3 console or APIs.

1. Use the Amazon S3 console or APIs to restore a temporary copy of each archived object to Amazon S3. Specify the number of days that you want the temporary copy to be available. During this period, you incur storage charges for both Amazon S3 Glacier and the temporary copy.
2. Copy the S3 data from the source region to the target region using the steps in Migrating Amazon S3 Buckets in this whitepaper.
3. Configure an Amazon S3 lifecycle policy in the target region to transition the data from Amazon S3 to Amazon S3 Glacier.
4. Delete the data stored in Amazon S3 Glacier in the source region by updating the Amazon S3 lifecycle policy.

If the Amazon S3 Glacier APIs were used to store the data in archives in vaults:

1. Initiate an archive retrieval job to request that Amazon S3 Glacier prepare an entire archive, or a portion of the archive, for subsequent download.
2. After the retrieval job completes, download the bytes to a staging area. If you are using Amazon S3 as your staging area and your archive is greater than 5 TB, you need to use byte ranges to limit the output size to less than 5 TB. Although Amazon S3 Glacier supports individual archives of up to 40 TB, Amazon S3 has an object size limit of 5 TB.
3. Transfer the data from the staging area to Amazon S3 Glacier in the target region using the Amazon S3 Glacier APIs. Alternatively, if you use Amazon S3 as your staging area in the source region, you can use tools such as S3DistCp to copy the data to Amazon S3 in the target region and then use the Amazon S3 Glacier APIs to recreate the archive in Amazon S3 Glacier in the target region.
4. Delete the temporary files created in the staging area and the archives in Amazon S3 Glacier in the source region.

Migrating Amazon Elastic File System

Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with EC2 instances in the AWS Cloud. With Amazon EFS, storage capacity is elastic: it grows and shrinks automatically as you add and remove files, so your applications have the storage they need when they need it. Amazon EFS file systems can automatically scale from gigabytes to petabytes of data without the need to provision storage. Amazon EFS uses the NFSv4.1 protocol and is accessible from Linux-based AMIs.

You have two options for migrating data stored in Amazon EFS from one region to another (see the sketch following this section):

• Copy the files from Amazon EFS to Amazon EBS. If the data in Amazon EFS is more than a single EBS volume (maximum size of 16 TB) can accommodate, you might need to use third-party software to distribute the data across multiple EBS volumes. Then you can migrate the EBS volumes using the cross-region EBS snapshot copy capability and copy the files from EBS to EFS in the target region.

• Copy the files from EFS to S3. Then use the process described earlier for copying S3 data from the source region to the target region. In the target region, copy the files from S3 to EFS.

After confirming the successful migration, make sure to delete the EFS files in the source region and the temporary resources (S3 and EBS) used in the transfer, to avoid incurring charges for services you are not using.
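The S3-based option above can be driven from an EC2 instance in the source region that mounts the file system. The following is a minimal sketch, assuming a hypothetical file system ID (fs-12345678), source region us-east-1, and staging bucket name; adjust mount options and paths to your environment:

# On an EC2 instance in the same VPC as the EFS file system:
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs

# Stage the files into S3, then copy the bucket to the target region
# using the S3 techniques described earlier.
aws s3 sync /mnt/efs s3://my-efs-staging-bucket/efs-export/

# In the target region, mount the new EFS file system on an instance and reverse the copy:
# aws s3 sync s3://my-efs-staging-bucket-target/efs-export/ /mnt/efs-target

Note that POSIX ownership, permissions, and symbolic links are not fully preserved by an S3 round trip, so verify ownership and modes on the target file system after the copy.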
Migrating AWS Storage Gateway

The AWS Storage Gateway service helps you seamlessly integrate your existing on-premises storage infrastructure and data with the AWS Cloud. It uses industry-standard storage protocols to connect existing storage applications and workflows to AWS Cloud storage services with minimal process disruption. It maintains frequently accessed data on premises to provide low-latency performance, while securely and durably storing data in Amazon S3, Amazon EBS, or Amazon S3 Glacier.

After creating a gateway in the new region, you can migrate your data stored in Amazon S3 or Amazon EBS using the native migration capabilities of those services, detailed elsewhere in this paper. The approach you take to migrate Storage Gateway depends on the interface you used in the source region:

• File interface. Create a file gateway pointing to the target region.26 Create an S3 bucket in the target region and copy the data from the S3 bucket in the source region using the process defined earlier. You can also enable S3 cross-region replication to ensure that any updates to the S3 bucket in the source region are automatically replicated to the target region. Create a Storage Gateway file share on the S3 bucket in the target region to access your S3 files from your gateway.27 You can update the inventory of objects maintained and stored on the gateway by initiating a refresh28 from the AWS Storage Gateway console or by using the RefreshCache operation in the API Reference.29

• Volume interface. Create a volume gateway pointing to the target region.30 Create an EBS snapshot of the volume in the source region, and copy it to the target region using the cross-region EBS snapshot copy capability. Then create a Storage Gateway volume in the target region from the EBS snapshot.31

• Tape interface. Archived tapes are stored in an archive, which provides offline storage. You must first retrieve the tape from the archive back to your gateway, and then from the gateway to your client machine. More details on the steps can be found here.32 Once you retrieve the tape data to your client machine, you can store the same data in the target region by creating a tape gateway that points to the target region.33

You should clean up the gateway resources associated with your source region to avoid incurring charges for resources you don't plan to continue using.34

Migrating Database Resources

This section covers migration of database services for relational databases, NoSQL, caching, and data warehousing.

Migrating Amazon RDS Services

Amazon RDS is a web service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you to focus on your applications and business.

Database Security Groups

Amazon RDS has its own set of security groups that restrict access to the database service using either a CIDR-notation IPv4 network address or an Amazon EC2 security group. Each Amazon RDS security group has a name and exists in only one AWS Region (just as an Amazon EC2 security group does).

Database Instances and Data

The steps required for migrating Amazon Aurora are different from those for the other RDS database engines such as Oracle or MySQL. Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost effectiveness of open source databases. You can create an Amazon Aurora database (DB) cluster as a Read Replica in the target region.
cluster and transfers the snapshot to the Read Replica in the target region For eac h data modification made in the source databases Amazon RDS transfers data from the source region to the Read Replica in the target region You can find more details on the steps required for replicating data across regions here35 For database engines other than Aurora you can use the AWS Database Migration Service to migrate databases from the source region to the target region 36 Alternatively you can follow the steps given below You may need to schedule downtime in an application to quiesce the data move the d atabase and resume operation Here is a highlevel overview of the migration process : 1 Stop all transactions or take a snapshot (however changes after this point in time are lost and might need to be reapplied to the target Amazon RDS DB instance) 2 Using a temporary EC2 instance dump all data from Amazon RDS to a file: o For MySQL make use of the mysqldump tool You might want to compress this dump (see bzip or gzip) o For MS SQL use the bcp utility to export data from the Amazon RDS SQL DB instance into files You can use the SQL Server Generate and Publish Scripts Wizard to create scripts for an entire database or for just selected objects 37 Note: Amazon RDS does not support Microsoft SQL Server backup file restores o For Oracle use the Oracle Export/Import utility or the Data Pump feature (see http://awsamazoncom/articles/Amazon RDS/4173109646282306 ) o For Postgre SQL you can use the pg_dump command to export data 3 Copy this data to an instance in the target region using standard tools such as CP FTP or Rsync Archived Page 24 4 Start a new Amazon RDS DB instance in the target region using the new Amazon RDS security group 5 Import the saved data 6 Verify that the database is active and your data is present 7 Delete the old Amazon RDS DB instance in the source region For mor e information about importing data into Amazon RDS see Importing Data into a DB instance in the Amazon RDS User Guide 38 Migrating Amazon DynamoDB Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability It is a fully managed cloud database and supports both document and key value store models Here is a h ighlevel overview of the process for m igrating DynamoDB from one Region to another: • (Optional) If your source table is not receiving live traffic you can skip this step Otherwise if your source table is being continuously updated you must enable DynamoDB Streams to record these live writes while the table copy is ongoing After the one time table copy (given below) is complete create a replication process that continuously consumes DynamoDB Stream records (generated from the source table) and applie s them to the destination table This will continue until the DynamoDB table in the target region catches up to the DynamoDB table in the source region At this point all new writes should go to the DynamoDB table in the target region For more information on how to do this see Capturing Table Activity with DynamoDB Streams in the Amazon DynamoDB Developer Guide 39 • Start the table copy process You can do this in a few ways : o Use the Import/Export option available via the Amazon DynamoDB console which exports data to Amazon S3 and then imports it to a different DynamoDB table For more information see Exporting and Importing DynamoDB Data Using AWS Data Pipeline in the Amazon Dynam oDB Developer Guide 40 Archived Page 25 o Use the custom Java DynamoDB 
Migrating Amazon DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. It is a fully managed cloud database and supports both document and key-value store models. Here is a high-level overview of the process for migrating a DynamoDB table from one region to another:

• (Optional) If your source table is not receiving live traffic, you can skip this step. Otherwise, if your source table is being continuously updated, you must enable DynamoDB Streams to record these live writes while the table copy is ongoing. After the one-time table copy (described below) is complete, create a replication process that continuously consumes DynamoDB Streams records (generated from the source table) and applies them to the destination table. This continues until the DynamoDB table in the target region catches up to the table in the source region; at that point, all new writes should go to the table in the target region. For more information on how to do this, see Capturing Table Activity with DynamoDB Streams in the Amazon DynamoDB Developer Guide.39

• Start the table copy process. You can do this in a few ways:

  o Use the Import/Export option available via the Amazon DynamoDB console, which exports data to Amazon S3 and then imports it to a different DynamoDB table. For more information, see Exporting and Importing DynamoDB Data Using AWS Data Pipeline in the Amazon DynamoDB Developer Guide.40

  o Use the custom Java DynamoDB Import Export Tool available in the Amazon Web Services Labs repository on GitHub, which performs a parallel table scan and then writes the scanned items to the destination table.41

  o Write your own tool to perform the table copy, essentially scanning items in the source table and using parallel PutItem calls to write items into the destination table.

Whichever method you choose to migrate the data, consider how much read and write throughput will be required for the migration activity, and make sure you provision sufficient capacity, especially if the table is serving production traffic.

Migrating Amazon SimpleDB

Amazon SimpleDB is a highly available and flexible non-relational data store that offloads the work of database administration. Developers simply store and query data items via web service requests, and Amazon SimpleDB does the rest. To copy Amazon SimpleDB data between AWS Regions, you need to create a specific job or script that extracts the data from the Amazon SimpleDB domain in one region and copies it to the relevant destination domain in another region. This job should be hosted on an EC2 instance, and you should use the SDK that suits your purposes and expertise. Migration approaches include:

• Establishing simultaneous connections to the new and old domains, querying the existing domain for data, and then putting the data into the new domain.

• Extracting data from the existing domain, storing it in a file (or set of files), and then putting that data into the new domain.

We recommend that you use the BatchPutAttributes API call to increase performance and decrease the number of API calls performed. A third-party solution that may suit your needs is also available from http://backupsdb.com/. When you use any third-party solution, we recommend that you share only specifically secured IAM user credentials that are deleted after the migration takes place.

Migrating Amazon ElastiCache

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores instead of relying entirely on slower disk-based databases. ElastiCache supports two open source in-memory engines: Redis and Memcached.

Here is an overview of the steps required to migrate an Amazon ElastiCache cluster running Redis (see the sketch at the end of this section):

1. Take a manual backup of the ElastiCache cluster. More details on carrying out a manual backup can be found here.42 The backup consists of the cluster's metadata and all of the data in the cluster.
2. Export the backup to Amazon S3 using the ElastiCache console, the AWS CLI, or the ElastiCache API. More details on exporting the backup to Amazon S3 can be found here.43
3. Copy the backup data from the S3 bucket in the source region to the target region using the process defined earlier.
4. Restore the ElastiCache cluster from the backup in the target region. The restore operation creates a new Redis cluster and populates it.44

For an ElastiCache cluster using Memcached, the recommended approach is to start a new ElastiCache cluster and let it populate itself through application usage.
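Steps 1 and 2 above can be performed with the AWS CLI, as in the following minimal sketch; the cluster ID, snapshot names, and bucket are placeholders, and the S3 bucket used for the export must be in the same region as the source cluster and grant ElastiCache write access:

# 1. Take a manual backup of the source Redis cluster.
aws elasticache create-snapshot \
    --region us-east-1 \
    --cache-cluster-id my-redis-cluster \
    --snapshot-name my-redis-backup

# 2. Export the backup to an S3 bucket as an RDB file.
aws elasticache copy-snapshot \
    --region us-east-1 \
    --source-snapshot-name my-redis-backup \
    --target-snapshot-name my-redis-backup-export \
    --target-bucket my-elasticache-export-bucket

After copying the exported .rdb object to a bucket in the target region (step 3), you can seed the new cluster there from that file (step 4), for example by supplying its S3 location when you create the cluster in the target region.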
Migrating Amazon Redshift

Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse that makes it simple and cost effective to analyze all your data using your existing business intelligence tools. We recommend that you pause updates to the Amazon Redshift cluster during the migration process.

Here is a high-level overview of the steps for moving the entire cluster:

• Use the cross-region snapshot functionality to create a snapshot in the target region. Find more details on creating a cross-region snapshot here.45
• Restore the cluster from the snapshot. When you do, Amazon Redshift creates a new cluster with all the snapshot data on the new cluster. Find more details on restoring a cluster from a snapshot here.46

Here is a high-level overview of the steps for moving specific tables:

1. Connect to the Amazon Redshift cluster in the source region and use the UNLOAD command to export data from Amazon Redshift to Amazon S3.
2. Copy your S3 data from the source region to the target region using the steps given earlier.
3. Create an Amazon Redshift cluster and the required tables in the target region.
4. Use the COPY command to load data from Amazon S3 into the required tables.

Migrating Analytics Resources

This section covers migration of analytics services for interactive query, Hadoop, and Elasticsearch.

Migrating Amazon Athena

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. We recommend that you run Athena in the same region where the S3 bucket resides; running Athena and Amazon S3 in different regions results in increased latency and inter-region data transfer costs. Therefore, first migrate your S3 data from the source region to the target region, and then run Athena against your S3 data in the target region.

Migrating Amazon EMR

Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost effective to process vast amounts of data across dynamically scalable EC2 instances. The EMR cluster must be recreated in the target region. Migration of data from the source region to the target region depends on whether the data is stored in Amazon S3 or the Hadoop Distributed File System (HDFS). If the data is stored in Amazon S3, you can follow the steps given earlier to migrate S3 data from the source region to the target region. Here is a high-level overview of the migration process if your data is stored in HDFS:

• Use the S3DistCp command to copy data residing in HDFS in the source region to Amazon S3 in the target region.
• Use S3DistCp to copy data from Amazon S3 to HDFS in the target region.

Migrating Amazon Elasticsearch Service

Amazon Elasticsearch Service (Amazon ES) is a fully managed service that makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full-text search, application monitoring, and more. You will need to recreate the Amazon ES domain in the target region. Here is a high-level overview of the process for migrating the data from the source region to the target region:

• Create a manual snapshot of your Amazon ES domain. The snapshot is stored in an S3 bucket.47
• Copy your S3 data from the source region to the target region.
• Restore the snapshot into your Elasticsearch domain in the target region.
Migrating Application Services and Messaging Resources

This section covers migration of application services for queues, notifications, and Amazon API Gateway.

Migrating Amazon SQS

Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable, hosted queue for storing messages as they travel between computers. Amazon SQS queues exist per region. To migrate the data in a queue, you need to drain the queue in the source region and insert the messages into a new queue in the target region. When migrating a queue, it is important to note whether or not you need to continue to process the messages in order.

When order is not important:

1. Create a new queue in the target region.
2. Configure applications to write messages to the new queue in the target region.
3. Reconfigure applications that read messages from the Amazon SQS queue in the source region to read from the new queue in the target region.
4. Run a script that repeatedly reads from the old queue and submits the messages to the new queue.
5. Delete the old queue in the source region when it's empty.

When order is important:

1. Create a new first-in, first-out (FIFO) queue in the target region.
2. Create an additional, temporary FIFO queue in the target region.
3. Configure applications to write messages to the new FIFO queue in the target region.
4. Reconfigure applications that read messages from the SQS queue in the source region to read from the temporary FIFO queue in the target region.
5. Run a script that repeatedly reads from the old queue and submits the messages to the temporary FIFO queue.
6. Delete the old queue in the source region when it's empty.
7. When the temporary FIFO queue is empty, reconfigure applications to read from the new FIFO queue. Then delete the temporary FIFO queue.

Migrating Amazon SNS Topics

Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. Amazon SNS topics exist per region. You can recreate them in a target region manually through the AWS Management Console, the command line tools, or direct API calls. To list the current Amazon SNS topics in a designated region, use the following command:

aws sns list-topics --region <source-region-name>

For more information about the Amazon SNS CLI tools, see Using the AWS Command Line Interface with Amazon SNS.48

Migrating Amazon API Gateway

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Here is a high-level overview of the steps required for migrating Amazon API Gateway from the source region to the target region (see the sketch after these steps):

1. Export the API from API Gateway into a Swagger file using the API Gateway Export API.49
2. Copy the Swagger file to the target region using standard tools such as scp, FTP, or rsync.
3. Import the Swagger file to create the API in API Gateway in the target region.50
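The export and import steps above map to the following minimal AWS CLI sketch; the API ID, stage name, and file name are placeholders, and you still need to deploy the imported API to a stage in the target region afterwards:

# 1. Export the API definition (Swagger) from the source region.
aws apigateway get-export \
    --region us-east-1 \
    --rest-api-id a1b2c3d4e5 \
    --stage-name prod \
    --export-type swagger \
    --parameters extensions='integrations' \
    api-definition.json

# 3. Import the definition to create a new API in the target region.
aws apigateway import-rest-api \
    --region eu-west-1 \
    --body fileb://api-definition.json

The extensions='integrations' parameter asks API Gateway to include integration settings in the export; backend integrations that point to region-specific resources (for example, Lambda function ARNs) still need to be updated to their target-region equivalents.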
Migrating Amazon SNS Topics

Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. Amazon SNS topics exist per region. You can re-create these in a target region manually through the AWS Management Console, the command line tools, or direct API calls. To list the current Amazon SNS topics in a designated region, use the following command:

aws sns list-topics --region <source-region-name>

For more information about the Amazon SNS CLI tools, see Using the AWS Command Line Interface with Amazon SNS.48

Migrating Amazon API Gateway

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Here is a high-level overview of the steps required to migrate Amazon API Gateway from the source region to the target region:

1. Export the API from API Gateway into a Swagger file using the API Gateway Export API.49
2. Copy the Swagger file to the target region using standard tools such as cp, FTP, or rsync.
3. Import the Swagger file to create the API in API Gateway in the target region.50

Migrating Deployment and Management Resources

Migrating with AWS CloudFormation

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources. It also enables provisioning and updating those resources in an orderly and predictable fashion. You can use AWS CloudFormation sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your applications. For more information, see What is AWS CloudFormation?51

While many customers use AWS CloudFormation to create development, test, and multiple production environments within a single AWS Region, these same templates can be reused in other regions. You can address disaster recovery and region migration scenarios by running such a template, with minor modifications, in another region. Commonly, AWS CloudFormation templates can be readily reused by changing the mapping declarations to substitute region-specific information, such as the unique IDs for AMIs, which vary across regions as shown below:

"Mappings" : {
  "RegionMap" : {
    "us-east-1"      : { "AMI" : "ami-97ed27fe" },
    "us-west-1"      : { "AMI" : "ami-59c39c1c" },
    "us-west-2"      : { "AMI" : "ami-9e901dae" },
    "eu-west-1"      : { "AMI" : "ami-87cef2f3" },
    "ap-southeast-1" : { "AMI" : "ami-c44e0b96" },
    "ap-northeast-1" : { "AMI" : "ami-688a3d69" },
    "sa-east-1"      : { "AMI" : "ami-4e37e853" }
  }
}

For more information on mapping declarations, see Mapping in the AWS CloudFormation User Guide.52

Capturing Environments by Using CloudFormer

CloudFormer is a template creation tool that enables you to create AWS CloudFormation templates from pre-existing AWS resources. You provision and configure application resources using your existing processes and tools. After these resources are provisioned within your environment in an AWS Region, the CloudFormer tool takes a snapshot of the resource configurations and places these resources in an AWS CloudFormation template, enabling you to launch copies of the application environment through the AWS CloudFormation console. The CloudFormer tool creates a starting point for an AWS CloudFormation template that you can customize further. For example, you can:

• Add parameters to enable stacks to be configured at launch time.
• Add mappings to allow the template to be customized to specific environments and geographic regions.
• Replace static values with the Ref and Fn::GetAtt functions to flow property data between resources where the value of one property is dependent on the value of a property from a different resource.
• Fill in your EC2 instance user data to pass parameters to EC2 instances at launch time.
• Customize Amazon RDS DB instance names and master passwords.

For more information on setting up CloudFormer to capture a customer resource stack, see http://www.youtube.com/watch?v=KIpWnVLeP8k.53 For more details on the steps required to create an AWS CloudFormation template using CloudFormer, see Using CloudFormer to Create AWS CloudFormation Templates from Existing AWS Resources.54

API Implications

When programmatic access is required to connect to AWS Regions, publicly defined endpoints must be used for API service requests. While some web services allow you to use a general endpoint that does not specify a region, these generic endpoints resolve to the service's specific regional endpoint. For the authoritative list of current regions and service endpoint URLs, see AWS Regions and Endpoints.55

Updating Customer Scripts and Programs

You may need to update your self-developed scripts and programs that interact with the AWS API (either directly or using one of the SDKs or command line tools) to ensure that they are communicating with the appropriate regional endpoint. Each SDK has its own format for specifying the region being accessed. The command line tools generally support the --region parameter.
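For example, with the AWS SDK for Python (Boto3), the region can be pinned per client or per session, as in the short sketch below; the region names are placeholders.

import boto3

# Pin clients to the target region explicitly instead of relying on a default
# region picked up from the environment or shared AWS configuration.
ec2 = boto3.client("ec2", region_name="eu-west-1")

# Alternatively, create a session bound to the region and derive clients from it.
session = boto3.session.Session(region_name="eu-west-1")
s3 = session.client("s3")

# These calls now go to the target region's endpoints.
zones = ec2.describe_availability_zones()["AvailabilityZones"]
print([zone["ZoneName"] for zone in zones])

The AWS CLI equivalent is the --region option noted above.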
Important Considerations

• Do not leave your AWS certificate or private key on the disk.
• Clear out the shell history file in case you typed secret information in commands or in environment variables.
• Do not leave any password active on accounts.
• Make sure that the image does not include the public SSH key in the authorized_keys files. This leaves a back door into other people's servers, even if they do not intend to use it.
• It is good practice to use the [region], [kernel], and [ramdisk] options explicitly whenever applicable, even though those options are optional.
• Verify whether any IP address associations are associated with the AMI. If so, remove them or modify them with the correct details post-migration.

Conclusions

When you undertake any type of system migration, we recommend comprehensive planning and testing. Be sure to plan all elements of the migration with fail-back processes for unanticipated outcomes. AWS makes this process easier by enabling cost-effective testing and the ability to retain the existing system infrastructure until the migration is successfully completed.

Contributors

The following individuals and organizations contributed to this document:

• Dhruv Singhal, Head Solutions Architect, AISPL
• Vijay Menon, Solutions Architect, AISPL
• Raghuram Balachandran, Solutions Architect, AISPL
• Lee Kear, Solutions Architect, AWS
• Paul Reed, Sr. Product Manager, AWS

Document Revisions

Date - Description
February 2020 - Minor revisions
January 2020 - Minor revisions
July 2017 - First publication

Notes

1 http://awsamazoncom/about aws/globalinfrastructure/regional product services
2 http://docsawsamazoncom/IAM/latest/UserGuide/reference_identifiershtml#Identifiers_ARNs
3 http://docsawsamazoncom/general/latest/gr/aws security credentialshtml
4 http://docsawsamazoncom/AWSEC2/latest/UserGuide/using network securityhtml
5 http://docsawsamazoncom/AWSEC2/latest/UserGuide/CopyingAMIshtml
6 http://docsawsamazoncom/AWSEC2/latest/UserGuide/ebs copy snapshothtml
7 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSPerformancehtml
8 http://awsamazoncom/ec2
9 http://docsawsamazoncom/autoscaling/latest/userguide/GettingStartedTutorialhtml
10 https://awsamazoncom/ec2/pricing/reserved instances/buyer/
11 http://awsamazoncom/ec2/reserved instances/marketplace/
12 http://docsawsamazoncom/AmazonVPC/latest/UserGuide/VPC_VPNhtml
13 http://docsawsamazoncom/Route53/latest/DeveloperGuide/rrsets working withhtml
14 http://docsamazonwebservicescom/AmazonCloudFront/latest/DeveloperGuide/HowToUpdateDistributionhtml
15 http://docsamazonwebservicescom/AmazonS3/latest/dev/BucketRestrictionshtml
16 http://docsamazonwebservicescom/AmazonS3/latest/dev/WebsiteHostinghtml
17 http://docsamazonwebservicescom/AmazonS3/latest/dev/VirtualHostinghtml
18 https://awsamazoncom/premiumsupport/knowledge center/s3 bucket migrate region/
19 https://awsamazoncom/partners/find/results/?keyword=Storage+ISV
20 http://docsamazonwebservicescom/AmazonS3/latest/API/RESTObjectOpshtml
21 http://docsamazonwebservicescom/AmazonS3/latest/API/RESTObjectCOPYhtml
22 http://docsawsamazoncom/emr/latest/ManagementGuide/emr instance purchasing optionshtml#emr spotinstances
23 http://docsawsamazoncom/emr/latest/ReleaseGuide/UsingEMR_s3distcphtml
24 http://docsawsamazoncom/AmazonS3/latest/dev/crrhtml
25 https://awsamazoncom/glacier/pricing/
26 http://docsawsamazoncom/storagegateway/latest/userguide/create gateway filehtml
27 http://docsawsamazoncom/storagegateway/latest/userguide/GettingStarted CreateFileSharehtml
28 http://docsawsamazoncom/storagegateway/latest/userguide/managing gateway filehtml#refresh cache
29
http://docsawsamazoncom/storagegateway/latest/APIReference/API_Refres hCachehtml 30 http://docsawsamazoncom/storagegateway/latest/userguide/create volume gatewayhtml 31 http://docsawsamazoncom/s toragegateway/latest/userguide/GettingStarted CreateVolumeshtml 32 http://docsawsamazoncom/storagegateway/latest/usergu ide/backup_netbac kupvtlhtml#GettingStarted retrieving tapes 33 http://docsawsamazoncom/storagegateway/latest/userguide/create tape gatewayhtml 34 http://docsawsamazoncom/storagegateway/latest/userguide/deleting gateway commonhtml 35 https://docsawsamazoncom/AmazonRDS/latest/AuroraUserGuide/AuroraMy SQLReplicationCrossRegionhtml 36 https://awsamazoncom/documentation/dms / Archived Page 37 37 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/SQLServerProce duralImportingSnapshotsh tml#SQLServerProceduralExportingSSGPSW 38 http://docsamazonwebservicescom/AmazonRDS/latest/UserGuide/ImportDat ahtml 39 http://docsawsamazoncom/amazondynamodb/latest/developerguide/Stream shtml 40 http://docsawsamazoncom/amazondynamodb/latest/developerguide/Dynam oDBPipelinehtml 41 https://githubcom/awslabs/dynamodb import export tool 42 http://docsawsamazoncom/AmazonElastiCache/latest/UserGuide/backups manualhtml 43 http://docsawsamazoncom/AmazonElastiCache/latest/UserGuide/backups exportinghtml 44 http://docsawsamazoncom/AmazonElastiCache/latest/UserGuide/backups restoringhtml#backups restoring CON 45 http://docsawsamazoncom/redshift/latest/mgmt/managing snapshots consolehtml#snapshot crossregioncopy configure 46 http://docsawsamazoncom/redshift/latest/mgmt/managing snapshots consolehtml#snapshot restore 47 http://docs awsamazoncom/elasticsearch service/latest/developerguide/es managedomainshtml#es managedomains snapshot create 48 http://docsawsamazoncom/cli/latest/userguid e/clisqsqueue snstopichtml 49 http://docsawsamazoncom/apigateway/latest/developerguide/api gateway export apihtml 50 http://docsawsamazoncom/apigateway/latest/developerguide/api gateway import apihtml 51 http://docsamazonwebservicescom/AWSCloudFormation/latest/UserGuide/ Welcomehtml Archived Page 38 52 http://docsamazonwebservicescom/AWSCloudFormation/latest/UserGuide/m appings section structurehtml 53 http://wwwyoutubecom/watch?v=KIpWnVLeP8k 54 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/cfn using cloudformerhtml 55 http://docsawsamazoncom/general/latest/gr/randehtml
Migrating Microsoft Azure SQL Databases to Amazon Aurora
ArchivedMigrati ng Microsoft Azure SQL Database s to Amazon Aurora Using SQL Server Integration Service and Amazon S3 August 2017 This paper has been archived For the latest technical content see: Migrate Microsoft Azure SQL Database to Amazon AuroraArchived © 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessmen t of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitme nts conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS a nd its customers Archived Contents Abstract v Introduction 1 Why Migrate to A mazon Aurora? 1 Architecture Overview 2 Migration Costs 4 Preparing for Migration to Amazon Aurora 4 Create a VPC 4 Create a Security Group and IAM Role 5 Create an Amazon S3 Bucket 7 Launch an Amazon RDS for SQL Server DB Instance 7 Launch an Amazon Aurora DB Cluster 8 Launch an EC2 Migration Server 10 Schema Conversion 14 AWS Schema Conversion Tool Wizard 14 Mapping Rules 16 Data Migration 17 Set Up the Repository Database 17 Build an SSIS Migration Package 17 After the Migration 33 Conclusion 33 Contributors 33 Further Reading 33 Document Revisions 34 Archived Abstract As companies migrate their workloads to the cloud there are many opportunities to increase database performance reduce licensing costs and decrease administrative overhead Minimizing downtime is a common challenge during database migrations especially for multi tenant databases with multiple schemas In this whitepaper we describe how to migrate multi tenant Microsoft Azure SQL databases to Amazon Aurora using a combination of Microsoft SQL Server Integration Services (SSIS) and Amazon Simple Storage Service (Amazon S3) which can scale to thousands of database s simultaneously while keeping downtime to a minimum when switching to new databases The target a udience for this paper includes: • Database and system administrators perform ing migrations from Azure SQL Databases into Amazon Aurora where AWS managed migration tools can’t currently be used • Database developers and administrators with SSIS experience • IT managers who want to learn about migrating databases and applications to AWS ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 1 Introduction Migrations of multi tenant databases are among the most complex and time consuming tasks handled by database administrators (DBAs) Although managed migration services such as AWS Database Migration Service (AWS DMS)1 make this task easier some multi tenant database migration s require a custom approach For example a custom solution might be required in cases whe re the source database is hosted by a third party provider who limits certain functionality of the database migration engine used by AWS DMS This whitepaper focus es on the mass migration of a multi tenant Microsoft Azure SQL Databa se to Amazon Aurora Amazon Aurora is a fully managed MySQL compatible relational 
database engine It combines the speed and reliability of high end commercial databases with the simplicity and costeffectiveness of open source databases 2 In the scenario covered in this whitepaper multi tenancy is defined as the deployment of numerous datab ases that have the same schema3 An example of multi tenancy would be a software asaservice ( SaaS ) provider who deploys a database for each customer We discuss how to use the AWS Schema Conversion Tool (AWS SCT)4 to convert your existing SQL Serve r schema to Amazon Aurora We also show you how to build a SQL Server Integration Services (SSIS) package that you can use to automate the simultaneous migration of multiple databases5 The m ethod described in this whitepaper can also be used to migrate to other types of databases on Amazon Web Service s (AWS ) including Amazon Redshift a fully managed data warehouse 6 Why Migrate to Amazon Aurora ? Amazon Aurora is built for mission critical workloads and is highly available by default An Aurora database cluster spans multi ple Availability Zones in a n AWS Region providing out ofthebox durability and fault tolerance to your data across physical data centers An Availabi lity Zone is composed of one or more highly available data centers operated by Amazon7 Availability Zones are isolated from each other and are connected through low latency links Each segment of your database volume is replicated six times across these Availability Zones Aurora cluster volumes automatically grow as the amount of data in your database increases with no performance or availability impact —so there is no need for estimating and provisioning large amount of database storage ahead of time An Aurora cluster volume can grow to a maximum size of 64 terabytes (TB) You are only charged for the space that you use in an Aurora cluster volume Aurora's automated backup capability supports point intime recovery of your data This enabl es you to restore your database to any second during your retention period up to the last five minutes Automated backups are stored in Amazon Simple Storage ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 2 Service (Amazon S3) which is designed for 99999999999% durability Amazon Aurora backups are automatic incremental and continuous and have no impact on database performance For a complete list of Aurora features see the Amazon Aurora product page Given the rich feature set and cost effectiveness of Am azon Aurora it is increasingly viewed as the go to database for mission critical applications Architecture Overview A diagram of the architecture you can use for migrating a Microsoft Azure SQL database to Amazon Aurora is shown in Figure 1 Figure 1 : Diagram of resources use d in a migration solution The architecture components are explained in more detail as follows Amazon EC2 Migration Server : The migration server is an Amazon Elastic Compute Cloud (EC2) instance that runs all database migration tasks including: • Installing necessary applications • Downloading and restoring the source database for schema conversion purposes • Converting the schema between source and destination databases using AWS SCT ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 3 • Developing and testing the SSIS data migr ation package With a large EC2 instance type your migration server can run thousands of migration tasks simultaneously If your database s are read and write you can choose between two migration approaches : 1 You 
can disconnect all clients and put your database s into the single connection mode In this scenario the database s won’t be accessible until the migration is finished Database downtime is measure d in migration time The quicker you migrate your databases the shorter the downtime 2 You can keep your database open for write connection In this scenario you will have to adjust the update record after migration If your databases are read only you can keep the connection to them during the migration process with out any impact on the migration process itself Amazon RDS for SQL Server DB Instance : Connection strings to the Azure SQL database and Amazon Aurora database need to be stored in a small repository database For this purpose you ’ll use an Amazon RDS for SQL Server database ( DB) instance Amazon Relational Database Service (Amazon RDS) is a cloud service that makes it easier to set up operate and scale a relational database in the cloud8 It provides cost efficient resizable capacity for an industry standard relational database and manages common database administration tasks Note that the repository database is a temporary resource needed only during the migration It can be terminated after the migratio n Amazon Aurora DB Cluster : An Amazon Aurora DB cluster is made up of instances that are compatible with MySQL and a cluster volume that represents data copied across three Availability Zones as a single virtual volume There are two types of instances i n a DB cluster: a primary instance (that is your destination database) and Aurora Replicas The primary instance performs all of the data modifications to the DB cluster and also supports read workloads Each DB cluster has one primary instance An Auror a Replica supports only read workloads Each DB instance can have up to 15 Aurora Replicas You can connect to any instance in the DB cluster using an endpoint address Amazon S3 Bucket : Multiple batches of your data are loaded in parallel instead of record by record into temporary storage in an S3 bucket which improve s the performance of migration9 After sav ing your data to an S3 bucket in the last step of building an SSIS package (see the Migrate Multiple Azure SQL Databases section ) you’ll execute an Amazon Aurora SQL command to import data from the S3 bucket to the database ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 4 Note : You will need to create an Amazon S3 bucket in the same AWS Region where you l aunched the Amazon Auro ra DB c luster Amazon VPC: All migration resources are created inside a virtual private cloud (VPC) Amazon VPC lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define10 You have complete control over your virtual networking environment including selection of your own IP address rang e creation of subnets and configuration of route tables and network gateways The topology of the VPC is as follows: • Two private subnets to launch the Amazon RDS DB instance Each subnet must reside entirely within one Availability Zone and cannot span z ones11 • At least two public subnets to launch your migration server and Amazon Aurora DB cluster Each subnet must be in a different Availability Zone Migration Costs These factors have an impact on the migration cost: • Size of the migrated database (S3 st orage) • Size of the Amazon RDS instance • Size of the Amazon Aurora cluster • Size of the migration server Here are a few suggestions to reduce the 
migration cost: • Use Amazon S3 Reduce Redundancy Storage (RRS) • For the repository database use Amazon RDS SQL Server Express Edition dbt2micro instance • For the migration server start with t2medium instance type and scale up if necessary Preparing for Migration to Amazon Aurora This section describes how to set up and configur e your AWS env ironment to prepare for migrating your Azure SQL database to Amazon Aurora AWS CloudFormation scripts are also provided to help you automate deployment of your AWS resources12 Note : You must complete t hese steps before moving on to the s chema conversion and migration tasks Create a VPC This section describes two ways you can create a VPC: manually or from a CloudFormation template ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 5 Create a VPC ( Manual ) For step bystep guidance on creating a VPC using the Amazon VPC wizard in the Amazon VPC console s ee the Amazon VPC Getting Started Guide 13 For step bystep guidance on creating a VPC fo r use with Amazon Aurora s ee the Amazon RDS User Guide 14 Create a VPC ( CloudFormation Template ) Alternatively y ou can use this CloudFormation template to quickly set up a VPC with two public and two private subnets including a network addres s translation ( NAT ) gateway To create a VPC using the CloudFormation temp late follow these steps: 1 In the AWS Management Console choose CloudFormation and then choose Create New Stack 2 Select Specify an Amazon S3 template URL and then paste the CloudFormation template URL: http://rh migration blogs3amazonawscom/CF VPCjson 3 Choose Next 4 Enter the Stack name eg VPC (Note the stack name as you will use it later ) 5 Modify the subnet CIDR blocks or leave the default subnet s 6 Choose Next 7 Under Options leave all the default value s and then choose Next 8 Under Review choose Create 9 Wait for the status to change to CREATE_COMPLETE Optional : To improve the performance of uploading data file s to the S3 bucket from within AWS create an S3 endpoint in your VPC For more information visit: https://awsamazoncom/blogs/aws/new vpcendpoint foramazon s3/ Create a Security Group and IAM Role Access to AWS requires credentials that AWS can use to authenticate your requests Those credentials must have permissions to access AWS resources (access control) such as an Amazon RDS database For example you can control acce ss to a database by limiting it to certain IP addresses or IP address ranges and restricting access to your corporate network only or to a web server that consumes data from your database server ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 6 Create a Security Group and IAM Role (Manual ) To migrate yo ur Azure SQL database to Amazon Aurora you need to do the following: • Create an Amazon EC2 security group to control access to an EC2 instance15 • Create an AWS Identity and Access Management (IAM) role that grants the migration server access to both database servers In addition the role grants external access to the migration server Note: When you use an external IP address you should use the IP address from which you will remotely access the migration server The following table shows examples of inbound rules that need to be created in the new EC2 security group: Resource Inbound Port Source Amazon RDS SQL Server 1433 IP of Migration Server Amazon Aurora DB Cluster 3306 IP of Migration Server Migration Server 3389 User external IP address • Create an IAM role for Amazon EC2 
to allow migration server access to the S3 bucket This role has to be associate d with the EC2 migration instance during the launch 16 • Create an IAM role and associate it with an Amazon Aurora DB cluster to allow the DB c luster access to the S3 bucket17 Create a Security Group and IAM Role (CloudFormation Template ) Alternatively you can create both roles and the security group w ith all required inbound rules using a CloudFormation template 1 In the AWS Management Console choose CloudFormation and then choose Create New Stack 2 Select Specify an Amazon S3 template URL and then paste the CloudFormation template URL: http://rh migration blogs3amazonawscom/CF SGjson 3 Choose Next 4 Enter the Stack name eg SG (Note the stack name as you will use it later ) 5 Enter the Network Stack Name which is the name of the CloudFormation stack you provided earlier in this whitepaper in step 4 under Creat e a VPC (eg VPC) 6 Choose Next 7 Under Options leave all the default values and then choose Next ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 7 8 Under Review check the box : 9 Choose Create Create an Amazon S3 Bucket You can either use an existing S3 bucket or create a new one by follow ing the steps provided in Create a Bucket18in the Amazon S3 documentation Launch an Amazon RDS for SQL Server DB Instance This section explains how to launch an Amazon RDS for SQL Server DB instance Note that the Amazon RDS DB instance is a temporary resource that’s only needed during the migration It should be terminated after the migration to reduce the AWS cost Launch an Amazon RDS for SQL Server DB Instance ( Manual ) To launch a new Amazon RDS for SQL Server DB instance for your repository database follow these steps 1 In the AWS Management Console choose RDS 2 In the navigation pane choose Instances 3 Choose Launch DB Instance 4 Select Microsoft SQL Server and then select SQL Server Express 5 Set DB Instance Class to dbt2micro 6 Set Time Zone to your local time zone 7 Set DB Instance Identifier to repo 8 Set Master Username and Master Password 9 Leave all the other option s as their default values and choose Next Step 10 Select the VPC create d in the previous step If you create d a VPC using the CloudFormation template then the name of the VPC should be “Migration VPC” 11 Select the correct VPC S ecurity Group If you created a security group from the CloudFormation template then the name should be “SGDBSecurityGroup XXXXXXX ” where XXXXXX is a string that includes random letters and numbers 12 Leave all the other options as their default values and choose Launch DB Instance ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 8 Launch an Amazon RDS for SQL Server DB Instance (CloudFormation Template ) As an alternative method to manually launching an Amazon RDS for SQL DB instance you can use this CloudFormation template 1 In the AWS Management Console choose CloudFormation and then choose Create New Stack 2 Select Specify an Amazon S3 template URL and then paste the CloudFormation template URL: http://rh migration blogs3amazonawscom/CF RDSSQLjson 3 Enter the Stack name eg SQL 4 Enter the following parameters: o DBPassword and DBUser o NetworkStack Name which is the name of the CloudFormation stack you provided in step 4 under Creating a VPC (eg VPC) o SecurityGroupStack Name which is the name of the CloudFormation stack you provided earlier in this whitepaper in step 4 under Create an Amazon EC2 Security Group (eg SG) 5 
Choose Next 6 Under Options leave all the default values and then choose Next 7 Choose Create 8 Wait for the status to change to CREATE_COMPLETE 9 Go to Output s and note the value of the SQLServerAddress key You will need it later Launch an Amazon Aurora DB Cluster This section descri bes two ways you can launch an Amazon Aurora DB cluster: manually or from a CloudFormation template Launch an Amazon Aurora DB Cluster ( Manual ) For step bystep guidance for launch ing and configuring an Amazon Aurora DB cluster for your destination database see the Amazon RDS User Guide 19 In our tests we migrated 10 databases simultaneously For this purpose we used the dbr32xla rge DB instance type Depend ing on how many databases you are planning ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 9 to migrate we suggest that you use the biggest DB instance type for the migration and then scale down to one that is more suitable for daily (production) workload s Read this blog to l earn more about how to scale Amazon RDS DB instance s: https://awsamazoncom/blogs/database/scaling your amazon rdsinstance vertically andhorizontally/ Read Managing an Amazon Aurora DB Cluster in the Amazon RDS User Guide to learn more about choosing the right DB instance type To reduce migration time we suggest that you launch your Amazon Aurora DB c luster in a single Availability Zone and then perform a Multi AZ deployment later if required for production workload s When Multi AZ is selec ted Amazon Aurora will create read replicas in different Availability Zones In this scenario when the primary Amazon Aurora DB instance becomes unavailable one of the existing replica s will be promote d to master status in a matter of seconds In a case where Multi AZ is disabled launch ing the new primary instance can take up to 5 minutes Finally load your data to the Aurora DB instance from the S3 bucket To allow Amazon Aurora access to the S3 bucket you need to grant the necessary permission You can do this by follow ing the steps described in the Allowing Amazon Aurora to Access Amazon S3 Resources article 20 Launch an Amazon Aurora DB Cluster ( CloudFormation Template ) As an alternative method to launching an Amazon Aurora DB cluster instead of launching manually you can use this Cloud Formation template 1 In the AWS Management Console choose CloudFormation and then choose Create New Stack 2 Select Specify an Amazon S3 template URL and then paste the CloudFormation template URL: http://rh migration blogs3amazonawscom/CF RDSAurorajson 3 Enter the Stack name eg Aurora 4 Enter the following parameters: o DBPassword and DBUser o NetworkStackName which is the name of the CloudFormation stack you provided in step 4 under Creating a VPC (eg VPC) o SecurityGroupStackName which is the name of the CloudFormation stack you provided earlier in this whitepaper in step 4 under Create an Amazon EC2 Security Group (eg SG) 5 Choose Next 6 Under Options leave all the default values and then choose Next 7 Choose Create ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 10 8 Wait for the status to change to CREATE_COMPLETE 9 Go to Output s and note the value of the AuroraClusterAddress key You will need it later 10 After you launch the cluster assign an IAM role to the cluster To do this follow steps 1 6 in this topic in the Amazon RDS documentation: Authorizing Amazon Aurora to Access Other AWS Services on Your Behalf 21 Note: The name of the role created by the 
CloudFormation template is RDSAccessS3 Launch an EC2 Migration Server This section describes two ways to launch an EC2 Migration Server: manually and using a CloudFormation template Launch a n EC2 Migration Server (Manual ) To launch the EC2 Migration instance please follow th e documentation 22 Choose these options when launch ing a new EC2 instance: • Amazon Machine Image (AMI) : Microsoft Windows Server 2012 R2 Base • Instance Type : t2large • VPC: select the one you create d in “Create a VPC” • IAM Role : select the EC2 role you created in “ Create a Security Group and IAM Role ” • Add Storage : add two Amazon Elastic Block Store ( EBS) volumes o The f irst volume should be large enough to store all data from the Azure SQL database o The s econd volume should be 10 GB in size Under the snapshot column depend ing on the Region where you are launching the Migration Server enter: Region Snapshot ID useast1 snap 0882e0679e0edbc9d useast2 snap 0f8e882e50e145512 uswest 1 snap 0be3d0aa0c7fd6058 uswest 2 snap 044e09795b0af042d cacentral 1 snap 034a9e106a335e83e euwest 1 snap 0c4f59af047f8c680 ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 11 Region Snapshot ID eucentral 1 snap 0b96dab9f8716b8a3 euwest 2 snap 0da47a13ca2333917 apsoutheast 1 snap 09e64c82ad0252691 apsoutheast 2 snap 0116831d4532fa8f0 apnortheast 1 snap 06efa146310714fda apnortheast 2 snap 0dc5415e1c5c58021 apsouth 1 snap 063223b238340215d saeast1 snap 002492e97e9a54b8b o The second volume will contain all the software necessary to accomplish the migration tasks • Security Group : select the security group you created in “ Create a Security Group and IAM Role ” Launch a n EC2 Migration Server (CloudFormation Template ) As an alternative method to launch ing an EC2 Migration Server instead of creating all resources manually you can use this CloudFormation template Server Configuration After launch ing the server either manually or from a CloudFormation template follow these steps 1 Retrieve your Windows Administrator user password The steps for doing this can be found in the article How do I retrieve my Windows administrator password after launching an instance?23 on the AWS Premium Support Center 2 Log in to the Migration Server using the RDP client If you used the CloudFormation template you can get the IP address of the Migration Server from the Output tab under IPAddress key 3 Afte r log ging in open File Explorer and check whether you see the DBTools volume If you see the DBTools volume go to step 5 ; otherwise follow step 4 4 If you do not see DBTools follow these steps: a Run the diskmgmtmsc command to open Disk Management b Under the Disk Management window scroll down until you find a disk that is offline c Right click on the disk and from the context menu se lect Online (as shown in the following screen shot ) ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 12 5 Open the command line and from the DBTools volume run Installbat This will install all the necessary applications All applications to be installed (including the link to download) are listed in Appl ication List as shown in the next screen shot Wait until all the applications are installed This might take up to 30 minutes 6 Open CreateRepositoryDBbat in Notepad and edit the following values: o serverName – This is the address of the SQL Server that you set under “Launch an Amazon RDS for SQL Server DB Instance” If you used a CloudFormation template to launch Amazon RDS you can 
find this value on the CloudFormation > Output tab under SQLServerAddress key o userName – This is the SQL username o userPass – This is the SQL user password 7 Save the file and execute it This script will create a repository database including the table and stored procedure on Amazon RDS for SQL Server DB instance that was created in the previous section Note: The external IP address associate d with Migration Server has to be added to Azure SQL database firewall Applications List Here is a list of the applications install ed on the Migration Server by the script described in Step 5 in the previous procedure : ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 13 • SQL Server – https://wwwmicrosoftcom/en sa/sql server/sql server downloads with minimum selected services • SQL Server Management Studio – https://docsmicrosoftcom/en us/sql/ssms/download sqlserver management studio ssms • SQL Server Data Tools – https://docsmicrosoftcom/en us/sql/ssdt/download sqlserver data tools ssdt • AWS CLI (64bit) – https://awsamazoncom/cli/ • MySQL ODB C Driver (32 bit) – https://devmysqlcom/downloads/connector/odbc/ • Azure PowerShell – https://azuremicrosoftcom/en us/downloads/ • AWS Schema Conversion Tool – http://docsawsamazoncom/SchemaConversionTool/latest/userguide/CHAP_ SchemaConversionToolInstallingh tml • Microsoft JDBC Driver 60 for SQL Server – https://wwwmicrosoftcom/en us/download/detailsaspx?displaylang=en&id=11774 • MySQL JDBC Driver – https://wwwmysqlcom/products/connector/ • Optional: MySQL Workbench – https://devmysqlcom/downloads/workbench/ ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 14 Schema Conversion Before running the AWS Schema Conversion Tool the Azure SQL database schema needs to be restored on the Migration Server This can be done either by recreating the database from a script/backup or by restoring it from a BACPAC file For information on how to export an Azure SQL database to a BACPAC file see this article on the Microsoft Azure website24 Alternatively you can execute a PowerShell script to export the Azu re SQL database to a BACPAC file as follows : 1 Use Remote Desktop Protocol ( RDP) to connect to the Migration Server 2 Locate the AzureExportps1 PowerShell script on the DBTools volume and open it in Notepad for editing 3 Modif y the values at the top of the sc ript When you are done save the changes you made 4 Open PowerShell and execute the script by entering e:\ AzureExportps1 5 When the script has executed you should see the xxxxbacpacfile in your local folder 6 To restore the database from bacpac file open the SQL Server Management Studio connect to the Migration Server (wh ich is the local server) right click on the database name and from the menu select Import Data tier Application Then follow the wizard For more information on how to import a PACPAC file to create a new user database see: https://docsmicroso ftcom/en us/sql/relational databases/data tier applications/import abacpac filetocreate anew user database AWS Schema Conversion Tool Wizard Before migrating the SQL Server database to Amazon Aurora you have to convert the existing SQL schema to the new format supported by Amazon Aurora The AWS Schema Conversion Tool helps convert the source database schema and a majority of the custom code to a format that is compatible with the target database This is a desktop application that we installed on the desktop of the Migration Server The custom code includes views stored 
procedures and functions Any code that the tool cannot automatically convert is clearly marked so that you can convert it yourself To start with AWS SCT follow these steps: 1 After restoring the database open the AWS Schema Conversion Tool 2 Close the AWS SCT Wizard if it opens automatically 3 From Settings select Global Settings ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 15 4 Under Drivers select the path s to the Microsoft Sql Server and MySql drivers You can find both drivers on the DBTools volume in following locations: SQL Server : E:\Drivers \Microsoft JDBC Driver 60 for SQL Server \sqljdbc_60 \enu\jre7\sqljdbc41jar MySQL : E:\Drivers \mysql connector java5141 \ mysql connector java5141 bin 5 Choose OK 6 From File select New Project Wizard 7 In Step 1: Select Source for Source Database Engine select Microsoft SQL Server 8 Set the following c onnection parameters to the EC2 Migration SQL Server (local server): o Server name : the name of the EC2 Migration Server If you didn’t chang e it it will be something like : WIN ITKVVM7QQ08 o Server port : 1433 o User name : sa o Password : sa password – if you inst alled everything from the Installbat script the password will be Password1 9 Choose Test Connection 10 If the connection is successful choose Next Otherwise verify the connection parameters 11 In Step 2: Select Schema select the database that was restored from the bacpac file and choose Next 12 In Step 3: Run Database Migration Assessment choose Next 13 In Step 4: Select Target set the following parameters : o Target Database Engine : Amazon Aurora (MySQL compatible) o Server name : The Amazon Aurora Cluster Endpoint If you launched the Amazon Aurora DB cluster from the CloudFormation template you can find the cluster endpoint on the CloudFormation output tab under AuroraConnection va lue o Server port : 3306 o User name : The Aurora master user name o Password : The Aurora master password 14 Choose Test Connection ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 16 15 If the connection test is successful choose Finish Otherwise check the connection parameters Mapping Rules In some cases you might need to set up rules that change the data type of the columns move objects from one schema to another and change the names of objects For example if you have a set of tables in your source schema named test_TABLE_NAME you can set up a rule that changes the prefix test_ to the prefix demo_ in the target schema To add mapping rules perform the following steps : 1 From Actions menu of AWS SCT choose Convert Schema 2 The converted schema appears in the right hand side of AWS SCT The schema name will be in the following format: {SQL Server database name}_{database schema} For example tc_dbo 3 To rename the output schema from Settings choose Mapping Rules 4 Choose Add new rule to create a rule for renaming the database 5 Choose Edit rule 6 From the For list select database For Actions select rename and then type a new database name 7 Choose Add new rule to create a rule for renaming the database schema 8 From the For list select schema For Actions select rename and then type a new schema name 9 Choose Save All and close the window 10 Run Convert Schema The schema should now be updated with the new settings In this example the new schema name is TimeCard_Customer1 By right clicking on the new schema name you can eithe r save t he schema as an SQL script by selecting Save as SQL or apply it directly to the 
Amazon Aurora database by selecting Apply to database Depend ing on the complexity of the SQL Server schema the new schema might not be optimal or cor rectly convert all objects Note : As a rule of thumb you should always look at the new schema and make necessary adjustment s and optimization ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 17 If you have a small number of databases on Azure SQL (~10 or fewer ) you can apply the schema for each database by modif ying the rule for the schema name running Convert Schema and then apply ing it to the destination database If you are hosting hundreds or thousands of databases a more efficient way to apply the new schema would be to save it as an SQL script and then create a script using Bash (Linux) or PowerShell (Windows) to read an exported schema file modif y the schema name and save it as a new file ; then use a tool such as MySQL Workbench25 or a command line tool such as mysql to apply the script to the Amazon Aurora database You can find mysql here: C:\Program Files \MySQL \MySQL Workbench 63 CE Data Migration You ar e now ready to migrate the data First you need to set up the repository database and then you need to build an SSIS migration package Set Up the Repository Database From the Migration Server connect to the Amazon RDS repository ( MigrationCfg ) database us ing SQL Server Management Studio P opulate the ConnectionsCfg table with the following values: • MSSQLConnectionStr : The Azure SQL connection string which has the following format: DataSource= youraureserver databasewindowsnet;User ID=user_name ;Password= db_password ;Initial Catalog=TimeCard1;Provider=SQLNCLI111;Persist Security Info=True;Auto Translate=False; • MySQLConnectionStr : The Amazon Aurora connection string which has the following format: DRIVER={My SQL ODBC 53 ANSI Driver};SERVER=your_aurora_closter_endpoint;DATABASE=TimeCard_Custom er1;UID=user_name;Pwd=db_password; • StartExecution : Indicate s if the migration for the given database has already started This value should i nitially be set to 0 • Status : Upon completion of the database migration the status will either be Success or Failed depend ing on the migration outcome • StartTime and EndTime : These are the statistic s column s that show the database migration start and end times • DBName : Can be any string unique across all records This string will be used as the prefix in the file name of the file contain ing exported data Build an SSIS Migration Package To build an SSIS Migration Packa ge perform the following steps Create a New Project 1 On the D:\ drive create a new folder called Output 2 Open the SQL Server Data Tool 2015 application ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 18 3 Select File then New and then Project 4 From Templates select Integration Services and then s elect Integration Service s Project 5 Name your project 6 Choose OK 7 Under Solution Explorer right click on the project name and select Convert to Package Deployment Model 8 Rename you r package from Packagedtsx to something more meaning ful eg SQLMigrationdtsx 9 In Properties under Security change ProtectionLevel to EncryptSensitiveWithPassword 10 Choose PackagePassword and set the password ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 19 Set the SSIS Variables 1 From the SSIS menu select Variables 2 Add the following variables: Variable Name Variable Type ConfigID Int32 DBName String 
MSConnectionString String MyConnectionString String S3Input_LT1 String 3 For S3Input_LT1 add the following expression: LOAD DATA FROM S3 's3 useast1://yours3bucket/"+ @[User::DBName]+"_TL1txt' INTO TABLE [Your_First_Table_Name] FIELDS TERMINATED BY '' LINES TERMINATED BY ' \\n' (Col1 Col2 Col3 Col4); 4 Adjust the table name and column name s to reflect your database schema 5 Repeat the last step to create multiple S3Input_LTx variable s—one for each table For example if you have 10 tables then you should have : ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 20 S3Input_LT1 … S3Input_LT1 0 6 Modify the expression for each variable accordingly For e xample the last variable will have this expression : LOAD DATA FROM S3 's3 useast1://yours3bucket/"+ @[User::DBName]+"_ TL10txt' INTO TABLE [Your_Last_Table_Name] FIELDS TERMINATED BY '' LINES TERMINATED BY ' \\n' (Col1 Col2 Col3 Col4); Notice that in each variable expression the table name as well as file name should be different When you are done you should have following variables: Retrieve Configurations from Repository Database 1 From the SSIS Toolbox drag and drop Execute SQL Task on Control Flow 2 Double click Execute SQL Task 3 Under General change ResultSet to Single row 4 Under SQL Statement exp and the list and select New connection Set up a new connection to your Amazon RDS SQL Server repository database 5 Set SQLStatement to EXEC [sp_GetConnectionStr] ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 21 6 Under Result Set add the following four rows: ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 22 Create Data Migration Flow Follow the steps b elow to create a data flow from Azure SQL Server to Amazon Aurora To migrate multiple database tables simultaneously put all data flows inside Sequence Container by follow ing these steps: 1 From the SSIS Toolbox drag and drop Sequence Container onto the Control Flow panel 2 Select Get Connection Strings and connect the green arrow to Sequence Container Output Data to Temporary File 1 From the SSIS Toolbox drag and drop Data Flow Task into Sequence Container ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 23 2 Double click Data Flow Task 3 From the SSIS Toolbox drag and drop Source Assistance onto the new Data Flow Task panel 4 Under Source Type select SQL Server Under Connection Managers select new 5 Choose OK 6 Set up a connection to one of your Azure SQL databases 7 When done you should see OLE DB Source on the Data Flow Task panel Double click it 8 From the Name of table or the view menu select the first table that you want to migrate and c hoose OK 9 From the SSIS Toolbox expand Other Destinations and drag and drop Flat File Destination onto Data Flow panel 10 Select OLE DB Source and connect the green arrow to Flat File Destination 11 Double click on Flat File Destination Under Flat File connection manager choose New ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 24 12 Select Delimiter and choose OK 13 Under File name enter D:\Output \temptxt and choose OK 14 Choose Mapping You should see the following : 15 Choose OK The Data Flow Task panel should look like this: ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 25 16 Under Connection Manager s select the newly created connection to the Azure SQL database ArchivedAmazon Web 
Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 26 17 Under Properties : a Change DelayValidation to False Choose OK a Choose Expressions Under Property select Connection String Under Expression enter : @[User::MSConnectionString] 18 Repeat steps 16 17 for Flat File Connection but set the Connection String expression to: D:\\Output \\"+@[User::DBName]+"_TL1txt ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 27 19 Change DelayValidation to False 20 Under Control Flow select Data Flow Task Under Properties change DelayValidation to True Copy Temporary Data File to Amazon S3 Bucket 1 From the SSIS Toolbox drag and drop Execute Process Task into Sequence Container 2 Select Data Flow Task and connect the green arrow to Execute Process Task The new flow should look l ike this: 3 Double click Execute Process Task and make following changes: • Under Process : o Executable : C:\Program Files \Amazon \AWSCLI \awsexe o Working Directory : C:\Program Files \Amazon \AWSCLI • Under Expressions : o Property : Arguments ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 28 o Expression : "s3 cp D:\\Output \\"+ @[User::DBName]+"_TL1txt s3:// your s3bucket " 4 Choose OK 5 Select Execute Process Task Under Properties change DelayValidation to False Import Data from Temporary File to Amazon Aurora 1 From the SSIS Toolbox drag and drop Execute SQL Task into Sequence Container 2 Select Execute Process Task and connect the green arrow to Execute SQL Task The new flow should look like this: 3 Double click Execute SQL Task ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 29 4 Change ConnectionType to ADONET 5 Under Connection select New connection Choose New 6 Under Provider select Net Providers Odbc Data Provider 7 Check Use connection string and enter the following connection string: Driver={MySQL ODBC 53 ANSI Driver};server= aurora_endpoint ;database=TimeCard_ Customer 1 ;UID=aurora_us er;Pwd=aurora_password ; 8 Under General s et SQLSourceType to Variable and set SourceVariable to User:S3Input_LT1 Choose OK 9 Under Connection Managers select your Aurora connection ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 30 10 Under Properties change DelayValidation to True 11 Choose Expressions Under Property select Connection String Under Expression enter : @[User::MyConnectionString] For each table that you want to migrate r epeat all steps define d in the following sections : Output Data to Tem porary File Copy Temporary Data File to Amazon S3 Bucket Import Data from Temporary File to Amazon Aurora Reuse connection managers for Azure SQL and Amazon Aurora cluster The Flat File connection needs to be set up for each table separately In addition for each table : • Change the Connection String expression as follow s: o For the second table: D:\\Output \\"+@[User::DBName]+"_TL2 txt o For the third table: D:\\Outpu t\\"+@[User::DBName]+"_TL3 txt o and so on • Under Expression change the file name as follow s: o s3 cp D: \\Output \\"+ @[User::DBName]+"_ TL2txt s3:// your s3bucket o s3 cp D: \\Output \\"+ @[User:: DBName]+"_ TL3txt s3:// your s3bucket o and so on • Change SourceVariable as follow s: o For the second table : to S3Input_LT2 o For the third table : to S3Input_LT3 o and so on Tracking Migration Status The database migration completion status either success or failed is store d in the repository database To track the 
status follow these steps: 1 Drag and drop Execute SQL Task below Sequence Container 2 Select Sequence Container and connect the green arrow to Execute SQL Task 3 Double click Exec ute SQL Task 4 Under Connection select the connection to your Amazon RDS SQL Server Express repository database 5 Under SQLStatement enter: UPDATE [ConnectionsCfg] SET [Status] = 'Success' EndTime = GETDATE() WHERE [CfgID] = ? ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 31 6 Under Parameter Mapping add a new record with the following variable name : 7 Choose OK 8 Repeat step s 16 Modify the SQL Statement as follows : UPDATE [Connect ionsCfg] SET [Status] = 'Failed ' EndTime = GETDATE() WHERE [CfgID] = ? 9 Select the green arrow connecting Sequence Container with Execute SQL Task 10 Under Properties change Value to Failure The final flow should look like this: ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 32 11 Save and build the package You can test the package by executing it directly from Visual Studi o Migrate Multiple Azure SQL Databases Packages will migrate a single database To migrate multiple databases simultaneously create a Windows batch file that will call the SSIS package You can use the following command to call the SSIS package: cd C:\Program Files \Microsoft SQL Server \130\DTS\Binn dtexec /F "C: \SSIS\SQLMigrationdtsx" /De your_package_password Now you can execute the batch file simultaneously as many times and for as many databases as you set up in the Repository database In case of hundreds or thousands of databases the migration process should be split across multiple EC2 instances Here is one approach for setting up multiple instance s: 1 Determin e the optimal number of databases that can be migrated by a single EC2 instance (Migration Server) For instance you can start test migrating 20 databases using a single instance By monitoring the CPU and memory usage of the Migration Server you can either in crease or decrease the count of databases You could also change to a larger EC2 instance type 2 In Windows startup set up execution of multiple migration scripts – up to maximum determined in the previous step 3 Create an AMI of the instance 26 ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 33 4 Create an Auto Scaling group based on the AMI with the total EC2 instances required to migrate all databases 27 Note : You can find an example of an SSIS package on the Migration Server on the DBTools volume in /Apps/ SQLMigration S3dtsx or you can download it from http://rh migration blogs3amazonawscom/SQL Migration S3dtsx After the Migration When your databases are running on Amazon Aurora here are a fe w suggestions for next steps: • Review the best practices for Amazon Aurora • Review and optimize indexes and queries • Monitor your Amazon Aurora DB cluster • Consider Amazon Aurora with PostgreSQL as an alternative option to Amazon Aurora with MySQL Conclusion This whitepaper described one method for migrating multi tenant Microsoft Azure SQL databases to Amazon Aurora Other methods exist We tested our solution a few times using the following configurations : • Source databases o 10 databases each with 10 tables o Each table had 500K records o Size of a single database was ~450 MB • Destination database o Single Amazon Aurora Cluster running on a dbr38xlarge instance class o 10 packages were executed simultaneously on an EC2 m44xlarge instance type • Total 
migration time of all 10 databases : ~3 minutes We found that across the tests that we did all of the results were consisten t Contributors The following individuals and organizations contributed to this document: • Remek Hetman Senior Cloud Infrastructure Architect Amazon Web Services • Yoav Eilat Senior Product Mar keting Manager Amazon Web Services Further Reading For additional information see the following : • https://awsamazoncom/rds/aurora/ • https://awsamazoncom/documentation/SchemaConversionTool/ ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 34 • https://awsamazoncom/cloudformation/ • https://awsamazoncom/vpc/ Document Revisions Date Description August 2017 First publication Notes 1 https://awsamazoncom/dms/ 2 https://awsamazoncom/rds/aurora/ 3 https://msdnmicrosoftcom/en us/library/aa479086aspx 4 https://awsamazoncom/documentation/SchemaConversionTo ol/ 5 https://docsmicrosoftcom/en us/sql/integration services/ssis how tocreate anetl package 6 https://awsamazoncom/redshift/ 7 http://docsawsamazoncom/AWSEC2/latest/UserGuide/using regions availability zoneshtml 8 https://awsamazoncom/rds/ 9 https://awsamazoncom/s3 10 https://awsamazoncom/vpc/ 11 http://docsawsamazoncom/AWSEC2/latest/UserGuide/using regions availability zoneshtml 12 https://awsamazoncom/cloudformation/ 13 http://docsawsamazoncom/AmazonVPC/latest/GettingStartedGuide/getting started ipv4html 14 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/AuroraCreateVPChtml 15 http://docsawsamazoncom/Am azonVPC/latest/UserGuide/VPC_SecurityGroupsht ml#CreatingSecurityGroups ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 35 16 http://docsawsamazoncom/AWSEC2/latest/UserGuide/iam roles foramazon ec2html 17 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/AuroraAuthorizingAW SServiceshtml 18 http://docsawsamazoncom/AmazonS3/latest/gsg/CreatingABuckethtml 19 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/AuroraCrea teInstance html 20 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/AuroraAuthorizingAW SServiceshtml 21 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/AuroraAuthorizingAW SServiceshtml#AuroraAut horizingAWSServicesAddRoleToDBCluster 22 http://docsawsamazoncom/AWSEC2/latest/WindowsGuide/EC2_GetStartedhtml 23 https://awsamazoncom/premiumsupport/knowledge center/retrieve windows admin password/ 24 https://docsmicrosoft com/en us/azure/sql database/sql database export 25 https://devmysqlcom/downloads/workbench/ 26 http://docsawsamazoncom/AWSEC2/latest/WindowsGuide/Creating_EBSbacked_ WinAMIhtml 27 http://docsawsamazoncom/AWSEC2/latest/WindowsGuide/Creating_EBSb acked_ WinAMIhtml
|
General
|
consultant
|
Best Practices
|
Migrating_Oracle_Database_Workloads_to_Oracle_Linux_on_AWS
|
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Migrating Oracle Database Workloads to Oracle Linux on AWS Guide January 2020 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor d oes it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Overview 1 Amazon RDS 1 Oracle Linux AMI on AWS 2 Support an d Updates 3 Lift and Shift to AWS 4 Migration Path Matrix 5 Migration Paths 6 Red Hat Linux to Oracle Linux 6 SUSE Linux to Oracle Linux 6 Microsoft Windows to Oracle Linux 7 Migration Methods 7 Amazon EBS Snaps hot 7 Oracle Data Guard 9 Oracle RMAN Transportable Database 11 Oracle RMAN Cross Platform Transportable Database 11 Oracle Data Pump Export/Import Utilities 12 AWS Database Migration Service 12 Other Database Migration Methods 13 Enterprise Application Considerations 13 SAP Applications 13 Oracle E Business Suite 15 Oracle Fusion Middleware 17 Conclusion 17 Contributors 17 Document Revisions 17 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers About this Guide Oracle databases can run on different operating systems (OS) in on premises data centers such as Solaris (SPARC) IBM AIX and HP UX Amazon Web Services (AWS) supports Oracle Linux 64 and higher for Oracle databases This guide highlights the migration p aths available between different operating systems to Oracle Linux on AWS These migration paths are applicable for migrations from any source —onpremises AWS or other public cloud environments This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 1 Overview Oracle workloads benefit tremendously from many features of the AWS Cloud such as scriptable infrastructure instant provisioning and de provisioning scalability elasticity usage based billing managed database services and the ability to support a wide variety of operating systems (OSs) When migrating your workloads choosing which operating system to run them is a crucial decision We highly recommend that you choose an Oracle supported operating system to run Oracle software on AWS You can use the follow ing Oracle supported operating systems on AWS: • Oracle Linux • Red Hat Enterprise Linux • SUSE Linux Enterprise Server • Microsoft Windows Server Specific Oracle supported operating systems can be used for specific database middleware and 
application workloads For example SAP workloads on AWS require that Oracle Database be run on Oracle Linux 64 or higher You have many methods for migrating your Oracle databases to Oracle Linux on AWS This guide documents the different migration paths available for the va rious source operating systems It covers migrations from any source —onpremises AWS or other public cloud environments Each migration path offers distinct advantages in terms of downtime and human effort You can choose the best migration path for your business based on your specific needs Amazon RDS For most workloads a managed database service is the preferred method Amazon Relational Database Service (Amazon RDS) is a managed service that makes it easy to set up operate and scale a relational database in the cloud It provides cost efficient and resizable capacity while automating time consuming administration tasks such as hardware provisioning database setup patching and backups It frees you to focus on your applications so you can give them the fast performance high availability security and compatibility they need Amazon RDS is available on several database instance types —optimized fo r memory perform ance or I/O In addition Amazon RDS provides you with six familiar database This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 2 engines to choose from including Amazon Aurora PostgreSQL MySQL MariaDB Oracle and Microsoft SQL Server You can use the AWS D atabase Migration Service (AWS DMS) to easily migrate or replicate your existing databases to Amazon RDS Amazon RDS for Oracle supports Oracle Database Enterprise Edition Standard Edition Standard Edition 1 and Standard Edition 2 Amazon RDS Oracle Sta ndard Editions support both Bring Your Own License (BYOL) and License Included (LI) If you are exploring other database platforms Amazon RDS offers you a choice of database engines and tools such as AWS D atabase Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT) to make the migration process easier Oracle Linux AMI on AWS If you choose not to use a managed database and instead manage the Oracle database yourself you can deploy it on Amazon Elastic Compute Cloud (Amazon EC2) Oracle Linux EC2 instances can be launched using an Amaz on Machine Image (AMI) available in the AWS Marketplace or as a Community AMI You can also bring your own Oracle Linux AMI or existing Oracle Linux license to AWS In that case y our technology stack is similar to the one used by Amazon RDS for Oracle wh ich also runs on Linux based operating systems Use migration tools such as Oracle Data Pump Export/Import or AWS DMS These tools take care of migration from different OS platforms to EC2 and/or RDS for Oracle The AWS Marketplace listing for Oracle Linux is through third party vendors You will find a list of Community AMIs and Public AMIs by searching for the term “OL6” or “OL7” Public AMI listings are available in the EC2 section of the AWS Management Console under Images then AMI Two types of AMIs a re available for the same release version: • Hardware Virtual Machine (HVM) • Paravirtual Machine (PVM) HVM is an approach that uses virtualization features of the CPU chipset If a virtual machine runs in HVM mode the kernel of the OS may run unmodified PVM does not use virtualization features of the CPU chipset PVM uses a modified kernel to achieve virtualization 
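Whichever virtualization type you choose, you can script the AMI search described above instead of browsing the console. The following is a minimal sketch using the AWS SDK for Python (boto3); the Region, the OL7 name pattern, and the virtualization-type and architecture filters are assumptions to adjust for your environment.

# Illustrative sketch: list publicly launchable Oracle Linux AMIs by name pattern.
# Region, name pattern, and filter values are assumptions for this example.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_images(
    ExecutableUsers=["all"],  # publicly launchable images
    Filters=[
        {"Name": "name", "Values": ["OL7*"]},             # e.g. OL7.x images
        {"Name": "virtualization-type", "Values": ["hvm"]},
        {"Name": "architecture", "Values": ["x86_64"]},
    ],
)

for image in sorted(response["Images"], key=lambda i: i["CreationDate"], reverse=True):
    print(image["ImageId"], image["Name"], image["CreationDate"])

Narrow or widen the name pattern (for example OL6*) depending on the release you need, and as noted below, validate the image owner before launching from a Community AMI.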
AWS supports both HVM and PVM AMIs The Unbreakable Enterprise Kernel for Oracle Linux natively includes PV drivers SAP has specific recommendations of HVM virtualized AMIs for SAP installations The Oracle This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 3 Linux AMI published by Oracle are available in the list of Community AMIs in AWS Mark etplace Community AMIs do not have any official support Refer to the following table for some of the AMI listings: Table 1: Community AMIs Version AMI Oracle Linux 73 HVM OL73 x86_64 HVM Oracle Linux 73 PVM OL73 x86_64 PVM Oracle Linux 72 HVM OL72 x86_64 HVM Oracle Linux 72 PVM OL72 x86_64 PVM Oracle Linux 67 HVM OL67 x86_64 HVM Oracle Linux 67 PVM OL67 x86_64 PVM Anyone can upload and share an AMI Use caution when selecting an AMI Reach out to AWS Business Support or your vendor support for assistance In addition to an existing AMI you can import your own virtual machine images as AMIs in AWS Refer to the VM Import/Export page for more details This option is highly useful when you have heavily customized virtual machine images available in other cloud environments or your own data center Support and Updates Oracle offers Basic Basic Limited Premier and Premier Limited commercial support for Oracle Linux EC2 instances Refer to Oracle’s cloud license document for the in stance requirements The following table shows the level of support available for various AMI options Table 2: Support levels Option Support level AWS Marketplace Basic Support and Basic Limited This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 4 Option Support level BYOL (Bring Your Own License) Basic Basic Limited (up to 8 virtual cores) Premier Premier Limited (up to 8 virtual cores) Community AMI No commercial support If you have an Oracle Linux support contract you can register your EC2 instance using the uln_register command on your EC2 instance This command requires you to have access to an Oracle Linux CSI number Review the Oracle Linux Unbreakable Linux Network (ULN) user guide on the steps for ULN channel subscription and how to register your Oracle Linux instance Oracle Linux instances require intern et access to the public yum repository or Oracle ULN in order to download packages All Oracle Linux AMIs can access the public yum repository Only licensed Oracl e Linux systems can access the Oracle ULN repository If the EC2 instance is on a private subnet use a proxy server or local yum repository to download packages Oracle Linux systems (OL6 or higher) work with the Spacewalk system for yum package management A Spacewalk system can be in a public subnet while Oracle Linux systems can be in a private subnet The following sections detail migration path methods availa ble for Oracle databases These migration methods are available for Oracle 10g 11g 12c and 18c For other Oracle products see the respective product support notes in Oracle’s MyOracleSupport portal Lift a nd Shift to AWS Existing Oracle workloads can be migrated from existing on prem or virtualized environment to Amazon EC2 with no changes required (Lift and Shift) using CloudEndure Migration CloudEndure Migration executes a highly automated machine conversion and orchestration 
process allowing even the most complex applications and databases to run natively in AWS without compatibility issues CloudEndure Migration uses a continuous block leve l replication process Servers are replicated to a staging area temporarily until you are ready to cut over to your desired instance target This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 5 CloudEndure Migration replicates your existing server infrastructure via its client software as a background proces s without application disruption or performance impact Once replication is complete CloudEndure Migration allow s you to cut over your servers to the instance family and type of your choice via customized blueprints Using your blueprint you can test you r deployment before committing to an instance family and type CloudEndure Migration supports Oracle Linux Redhat Linux Windows Server and SUSE Linux For detailed version compatibility information see Supported Operating Systems CloudEndure Migration is provided at no cost for migrations into AWS Migration Pat h Matrix A migration path matrix assumes that only the operating systems change and other software versions remain the same We recommend that you change other components such as the Oracle database version or Oracle database patching separately to avoid complexity The database version and any other application version in both source and target EC2 instances should remain the same to prevent deviations in the migration path There are also vendor data replication and migration tools available that can su pport platform migration See the Migration Methods section for the list of methods Table 3: Migration methods Source database operating system Migration methods Red Hat Linux Amazon EBS snaps hot Oracle Data Guard SUSE Linux Amazon EBS snapshot Oracle Data Guard Microsoft Windows Oracle Data Guard 11g RMAN Transportable Tablespace HPUX Solaris (SPARC) RMAN Cross platform Transportable Tablespace This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 6 Migration Paths This section presents three paths for migrating to Oracle Linux on AWS Red Hat Linux to Oracle Linux Oracle Linux and Red Hat Linux are compatible operating systems When migrating from Red Hat Linux to Oracle Linux migrate to the same version level for example Red Hat Linux 64 to Oracle Linux 64 or Red Hat Linux 72 to Oracle Linux 72 Also ensure that both operating systems are patched to the same level You can migrate Red Hat Linux to Oracle Linux using either of these methods : • Amazon Elastic Block Store (Amazon EBS) snapshot • Oracle Data Guard An EBS snapshot is a faster migration method than Oracle Data Guard for non Oracle Automatic Storage Management (ASM) databases If your databases use Oracle ASM then Oracle Data Guard is a bett er choice Other standard methods such as the Oracle Recovery Manager (RMAN) and Oracle Export and Import utilities can work across operating systems However these methods require a large r downtime and a greater amount of human effort Choose the Export and Import utilities method if your specific use case requires it See the Migration Methods section for details on each migration method SUSE Linux to Oracle Linux SUSE Linux Enterprise Server 
(SLES) is an enterprise grade Linux offering from SUSE Oracle Linux and SUSE Linux are binary compatible That is you can move an executable directly from SUSE Linux to Oracle Linux and it will work It must match the same C compiler and bit architecture (32 bit or 64 bit) SLES follows a different versioning scheme than Oracle Linux so there is no easy way to match similar operating system versions Additionally the Linux kernel version gcc versions and bit architecture must match Contact SLES Technical Support to find which Oracle Linux version is compatible with the SLES operation system SLES Linux can also be migrated using EBS snapshots and Oracle Data Guard just as you can do with Red Hat Linux Again these metho ds have less downtime and require less human effort than Oracle RMAN or Oracle Export/Import This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Data base Workloads to Oracle Linux on AWS 7 An EBS snapshot is a much quicker and simpler method than Oracle Data Guard Whichever method you select we recommend that you don’t copy the binaries from SLES but rather perform a fresh Oracle home installation on your Oracle Linux EC2 instance The reason for this recommendation is to properly generate the Oracle Inventory directory (oraInventory) in the new Oracle Linux EC2 instance and also have the files cr eated by rootsh Simply copying Oracle home may not create oraInventory and rootsh may not create the new files Also ensure the patch level of the newly created database binary home is exactly the same as the one in the SLES instance See the Migration Methods section for details on each migration method Microsoft Windows to Oracle Linux Microsoft Windows is a completely different operating system than the various types of Linux operating systems The following mi gration methods are available for Windows: • Oracle Data Guard (heterogeneous mode) • Oracle RMAN transportable tablespace (TTS) backup and restore The Oracle Data Guard method requires much less downtime compared to the Oracle RMAN TTS method The RMAN TTS me thod still requires copying the files from your onpremises data center or source database servers to AWS Files of significant size will extend the migration time There are several methods available such as AWS Import/Export and AWS Snowball which can handle the migration of large volumes of files Transferring large volume of files over the network takes time AWS Import/Export and AWS Snowball can help by migrating the data offline using physical media devices See the Migration Methods section for details on each migration method Migration Methods Your choice of migration method depend s on your specific use case and context Repeated testing and validation is necessary before finalizing and performing on the production workload Amazon EBS Snapshot An EBS snapshot is a storage level backup mechanism It preserves the contents of the EBS volume as a point intime copy If you are migrating databases from RHEL or This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 8 SUSE to Oracle Linux EBS snapshot is one of the fastest migration methods This method is applicable only if the source database is already on AWS and running on Oracle EBS storage It is not applicable for on premises databases 
or non AWS Cloud services The high level migration steps are: 1 Create a new Amazon EC2 instance based on Oracle Linu x AMI 2 Install an Oracle home on the new Oracle Linux EC2 instance 3 Create the new database parameter files and TNS files 4 Take an EBS snapshot of the volumes in the older EC2 instance (Red Hat Linux SUSE Linux) If possible we recommend that you take an EBS snapshot during downtime or off peak hours 5 Create a new volume based on the EBS snapshot and mount it on your Oracle Linux EC2 instance 6 Perform the post migration steps such as verifying directory and file permissions 7 Start the Oracle datab ase on the Oracle Linux EC2 instance You can take a snapshot of the Oracle home as well as the database files However we recommend that you install Oracle home binaries separately on the new Oracle Linux EC2 instance The Oracle home installation create s a few files in operating system root that may not be available if you create a snapshot and mount the binary home The EBS snapshot can be taken while the database is running but the snapshot will take longer to complete Conditions for Taking an Amazon EBS Snapshot • When you create the new volume on the target Oracle Linux EC2 instance ensure that the volume has the same path as the source EC2 instance If database files reside in the /oradata mount in the source EC2 instance the newly created volume fr om the snapshot should be mounted as /oradata in the target Oracle Linux EC2 instance It is also recommended but not required to keep the Oracle database binary home the same between source and target EC2 instances • The Unix ID number for the Oracle use r and the dba and oinstall groups should be the same number as the source operating system This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 9 For example the Oracle Linux 11g/12c pre install rpm creates an Oracle user with Unix ID number 54321 which may not be the same as the source operating system ID If it is different change the Unix ID number so that both source and target EC2 instances match • An EBS snapshot works well if all the database files are in the single EBS volume The complexity of an EBS snapshot increases when you use multiple EBS volu mes or you use Oracle ASM Refer to Oracle MOS Note 6046831 for recovering crash consistent snapshots Oracle 12c has additional features to recover from backups taken from crash consistent snapshots For more details see Amazon EBS Snapshots Oracle Data Guard Oracle Data Guard tech nology replicates the entire database from one site to another It can do physical replication as well as logical replication Oracle Data Guard operates in homogen eous mode if the primary and standby database operating systems are the same The normal Ora cle Data Guard setup would work in this case However if you are migrating from 32 bit to 64 bit or from AMD to Intel processors or vice versa it is considered to be a heterogeneous migration even if the operating system is the same Heterogeneous mode requires additional patches and steps while operating Oracle Data Guard Homogen eous Mode In homogen eous mode the source and destination operating systems are the same Oracle Data Guard send s the changes from the primary (source) database to the standby database If physical replication is set up the changes of the entire database are captured in redo logs These changes are sent from the 
redo logs to the standby database The standby database can be configured to apply the changes immediately or at a d elayed interval If logical replication is set up the changes are captured for a configured list of tables or schemas Logical replication does not work for the use case of migrating the entire database unless your situational constraints require it See the Oracle Data Guard Concepts and Administration Documentation for both physical and logical standby setups Heterogeneous Mode In heterogeneous mode Oracle Data Guard allows primary and standby databases in different operating systems and different binary levels (32 bit or 64 bit) Until Oracle This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 10 11g Oracle Data Guard required that both primary and standby databases have the same operating system level From 11g onward Oracle Data Gua rd has been in heterogeneous mode This allows Oracle Data Guard to support mixed mode configurations The source primary database can have a different operating system or binary level Heterogeneous set up of Oracle Data Guard is recommended for large and very large databases We present a few suggestions below which can further optimize your migration It is essential that Oracle database home on Windows and Linux has the latest supported version of the database (11204 or 12102) along with latest q uarterly patch updates Multiple migration issues were fixed in the latest patch updates Due to the mixed operating systems in the migration path we recommend that you use the Data Guard command line interface (DGMGRL) to set up Oracle Data Guard and perform role transition See Oracle MOS Note 4134841 for more details on using Oracle Data Guard to transition from Microsoft Windows to Linux This migration requires some additional patches which are detailed in the Note Also see MOS Note 4140431 for the role transition when you migrate from Windows 32 bit to Oracle Linux 64 bit Detailed steps for setting up Oracle Data Guard between Windows and Linux i s available in Oracle MOS Note 8814211 To set up Oracle Data Guard between Windows and Linux Oracle mentions the RMAN Active Duplicate method However this method impacts source database performance and creates heavy network traffic between source and target database servers An alternative method for Active Duplicate is to use the RMAN cross platform backup method (Oracle MOS Note 10795631 ): 1 Take an EBS snapshot of the Oracle database on Windows Mount it in another Windows server in STARTUP MOUNT stage 2 Create an RMAN cold backup of the newly mounted Oracle database on Windows This step is to avoid error as mentioned in Oracle MOS Note 20033271 3 Copy the RMAN backup files to Linux using SFTP or SCP 4 On Oracle Linux issue the dup licate database for standby command using RMAN backup files This step replaces the duplicate command in Step 3 of Oracle MOS Note 10795631 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 11 DUPLICATE TARGET DATABAS E FOR STANDBY BACKUP LOCATION='<full path of RMAN backup file location in Oracle Linux>' NOFILENAMECHECK; You can use SQL commands or DGMGRL to start Oracle Data Guard synchronization between the primary database on Windows and the 
standby database on Orac le Linux Refer to the role transition notes mentioned previously to switch the primary database from Windows to Linux If the source database contains Oracle OLAP refer to Oracle MOS Note 3523061 It is recommended to back up the user created OLAP Analytical Workspace ahead of time using the Export utility Oracle RMAN Transportable Database Oracle recommends the Oracle RMAN TTS method when migrating from completely different operating systems If the un derlying chipset is different such as Sun SPARC and Intel then Oracle recommends you use the cross platform transportable tablespace (XTTS) method Different chipsets have different endian formats Endian format dictates the order in which the bytes are stored underneath The Sun SPARC chipset stores bytes in big endian format while the Intel series stores them in little endian format TTS can be used when both Windows and Oracle Linux are running on same chipset eg Intel 64 bit Oracle has published a detailed blog post to migrate from the Windows (Intel) platform to the Linux (Intel) platform using RMAN TTS This method migrates the entire database at once instead of just individual tablespaces This method involves making your source Windows database read only and requires downtime Hence this method is advised for small and medium sized databases under 400 GB and wherever downtime can be accommodated For large databases run Oracle Data Guard in heterogeneous mode Oracle RMAN Cross Platform Transportable Database If you are migrating from different endian platforms like Sun/HP refer to Oracle MOS Note 3715561 for detailed step bystep instructions This method uses the XTTS method in RMAN It is possible to reduce downtime if you are migrating from Oracle Database 11 g or later using cross platform incremental backup Refer to Oracle MOS Note 13895921 for This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 12 instructions Review the Oracle whitepaper Platform Migration Using Transportable Tablespaces: Oracle Database 11g Release 1 on using RMAN 11g XTTS best practices and recommendations Oracle Data Pump Export/Import Utilities Oracle Data Pump Export/Import utilities can migrate from different endian formats It is a more time consuming method than Oracle RMAN but it is useful when you want to combine it with other variables such as when you want to mig rate certain schemas from Oracle 10g on an HP UX on premises server to Oracle 11g on Oracle Linux on AWS To reduce the downtime leverage parallel methods in Oracle Data Pump Export/ Import See the Oracle whitepaper Parallel Capabilities of Oracle Data Pump for recommendations on how to leverage it AWS Database Migration Service AWS Databas e Migration Service (DMS) is a managed service that you can use to migrate data from on premises or your Oracle DB instance to another EC2 or RDS instance AWS DMS supports Oracle versions 10g 11g 12c and 18c in both the source and the target instances A key advantage of AWS DMS is that it requires minimal downtime AWS SCT can be used together with AWS DMS It analyzes the source database and generates a report on which automatic and manual migration steps will be required for the given source and targe t combination This report helps in planning your migration activities AWS DMS does not migrate PL/SQL objects but AWS SCT helps you locate them and alerts you on the migration 
step needed You can use Oracle Data Pump Export/Import filters to migrate t he PL/SQL objects AWS DMS supports Oracle ASM at source AWS DMS can also replicate data from the source database to the destination database on an on going basis You can also use it to replicate the data until cutover is complete AWS DMS can use both Oracle LogMiner and Oracle Binary Reader for change data capture See Using an Oracle Database as a Source for AWS DMS for available configuration options and known limitations for source Oracle database This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 13 Other Database Migration Methods There are other methods that can help in database migration across operating system platforms Oracle MOS Note 7332051 provides a generic overview of some of the methods like RMAN Duplicate or Oracle GoldenGate Some enterprise applications have additional tools and migration paths that are specific to their own applications Finally t here are independent software vendors that offer database migration tools on the AWS Marketplace One of these tools may be the best fit for your scenario Enterprise Application Considerations SAP Applications If you’re running your SAP applications with Oracle database you have many methods for migrating from one operating system to another All of the following migration methods are supported by SAP Note: You must follow standard SAP system copy/migration guidelines to perform your migration SAP requires that a heterogeneous migration be performed by SAP certified technical consultants Check with SAP support for more details SAP Software Logistics Toolset Softwa re Provisioning Manager (SWPM) is a Software Logistics (SL) Toolset provided by SAP to install copy and transform SAP products based on SAP NetWeaver AS ABAP and AS Java You can use SWPM to perform both heterogeneous and homogen eous migrations If the e ndian type of your source operating system is the same as the target then your migration is considered a homogen eous system copy Otherwise it is considered a heterogeneous system copy or migration The SWPM tool uses R3load export/import methodology to copy or migrate your database If you need to minimize the migration downtime consider using the parallel export/import method provided by SWPM See the Software Logistics Toolset documentation page for more details Oracle Lifecycle Migration Service Oracle developed a migration service called Oracle ACS Lifecycle Management Service (formerly known as Oracle to Oracle Online [Triple O ] and Oracle to Oracle [O2O ]) to This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 14 help SAP customers migrate their exi sting Oracle database to another operating system With this service you can migrate your database while the SAP system is online which minimizes the downtime required for migration This service uses Oracle’s builtin functionality and Oracle Golden Gate This is a paid service and may require additional licensing to use Oracle Golden Gate See SAP OSS Note 1508271 for more details This service only helps with the database migratio n step —you still need to complete all the other SAP standard migration steps to complete the migration Oracle RMAN For SAP applications 
you can use native Oracle functionality to migrate your database to another platform You can use the Oracle RMAN tran sportable database feature to migrate the database when the endian type of source and target platform are the same Starting with Oracle 12c Oracle RMAN cross platform transportable database and tablespace features can be used to migrate a database across platforms with different endian types See SAP OSS Notes 105047 and 1367451 for more details Oracle RMAN only helps with the database migration step —you still need to complete all the other SAP standard migration steps to complete the migration The following table summarizes all the migration methods available to migrate your Oracle database to the Oracle Linux platform We recommended that you evaluate all the available methods and choose the one that best suits your env ironment Table 4: Migration options for Oracle database to Oracle Linux Source Operating System Migration Methods to Oracle Linux Oracle RMAN Transportable Database Oracle RMAN Cross Platform Transportable Database Oracle Lifecycle Migration Service (O2O / Triple O) SAP System Copy / Migration with SWPM (R3load Export / Import) RHEL / SLES Yes Yes Yes Yes Oracle Linux Yes Yes Yes Yes Solaris (x86) Yes Yes Yes Yes AIX / HP UX / Solaris (SPARC) No Yes Yes Yes This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 15 Source Operating System Migration Methods to Oracle Linux Oracle RMAN Transportable Database Oracle RMAN Cross Platform Transportable Database Oracle Lifecycle Migration Service (O2O / Triple O) SAP System Copy / Migration with SWPM (R3load Export / Import) Windows No Yes Yes Yes Oracle E Business Suite For Oracle E Business Suite (EBS) applications you can follow the various migration paths previously described in the document The following migration methods are available to migrate the database tier of Oracle E Business Suite: Table 5: Migration methods for Oracle E Business Suite Source Operating System Amazon EBS Snapshot Oracle Data Guard RMAN Transportable Database RHEL Yes Yes Yes SLES Yes Yes Yes Solaris x86 No Yes Yes IBM AIX / HP UX / Solaris SPARC No No No Windows No Yes Yes If you are running on IBM AIX/HP UX/Solaris SPARC consider other database migration methods such as using the Export/Import utilities Once you have migrated your database complete the following post migration steps: • Environment variables in new Oracle home include PERL5LIB PATH and LD_LIBRARY_PATH • Ensure NLS directory $ORACLE_HOME/nls/data/9idata is available in the new Oracle home This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 16 • Implement and run auto config on the new Oracle home Once db tier auto config is complete you must run auto config on the application tier as well RMAN Transportable Database The RMAN transportable database converts the source database and creates new data files compatible for the destination operating system This step involves placing the source database into read only mode RMAN transportable database consumes more downtime One option to minimize downtime is to use physical standby of th e source database for RMAN transportable database conversion step RMAN allows parallel conversion of the data 
files thereby reducing the conversion time See the Oracle whitepaper Platform Migration Using Transportable Database Oracle Database 11g and 10g Release 2 for more details on platform migration using RMAN transportable database feature Oracle maintains a master note ( Oracle MOS Note 13772131 ) for platform migration • For Oracle EBS 11i see Oracle MOS Note 7293091 • For Oracle EBS R120 and R121 see Oracle MOS Note 7347631 • For Oracle EBS R122 see Oracle MOS Note 20111691 Migrating From 32 Bit to 64 Bit For Oracle EBS applications we recommend that you keep the bit level of the operating systems the same eg RHEL 32 bit to Oracle Linux 32 bit in order to reduce variability in the migration process If there is a driving need to change the bit level o f the operating system during the migration Oracle recommends that you follow a two step approach in migrating the system to 64 bit The two step migration path consists of setting up the application tier and then migrating the database tier See MOS Not e 4715661 for detailed steps and post migrations checks on converting Oracle E Business Suite from 32 bit to 64bit Linux Containers You can move your Oracle E Busin ess Suite R122 application tier to containers running Oracle Linux Linux containers provide the flexibility to scale on demand depending on the workloads The application tier of Oracle E Business Suite 122 is certified on Oracle Linux containers runnin g UEK3 R3 QU6 kernel Oracle EBS application tier containers must be created with a privilege flag See MOS Note 13307011 for further requirements and documentation This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 17 Oracle Fusion Middleware For Oracle application tier products such as Fusion Middleware refer to the respective MOS Upgrade Support notes for the Oracle recommended path to migrate the OS platform For Fusion Middleware 11g see MOS Support Note 10732061 for the platform migration path For Oracle applications such as Oracle E Business Suite PeopleSoft or similar products check their respective Oracle MOS platform m igration notes or seek direction from the Oracle Support team for the recommended migration path for the particular product and version Conclusion Your choice of migration path depends on your application your specific business needs and your SLAs If y ou are already using AWS Amazon EBS snapshots are the best choice if the prerequisites are satisfied Whichever method you choose for the migration path repeated testing and validation is necessary for a successful and seamless migration Contributors Contributors to this document include : • Bala Mugunthan Sr Partner Solution Architect – Global ISV AWS • John Bentley Technical Account Manager AWS • Jayaraman Vellore Sampathkumar AWS Oracle Solutions Architect AWS • Yoav Eilat Sr Product Marketing Manager AWS Document Revisions Date Description January 2020 Updated for latest technologies and services Month 2018 First publication
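To make the Amazon EBS snapshot method described under Migration Methods concrete, the sketch below snapshots a source data volume and recreates it for attachment to the new Oracle Linux instance. It uses boto3; the volume ID, instance ID, Availability Zone, volume type, and device name are placeholders, and it assumes the snapshot is taken during downtime or off-peak hours as the guide recommends.

# Illustrative sketch of the EBS snapshot migration steps: snapshot the source
# /oradata volume, create a new volume from it, and attach it to the target
# Oracle Linux EC2 instance. IDs, AZ, and device name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

SOURCE_VOLUME_ID = "vol-0123456789abcdef0"   # /oradata volume on the RHEL/SLES instance
TARGET_INSTANCE_ID = "i-0123456789abcdef0"   # new Oracle Linux instance
TARGET_AZ = "us-east-1a"                     # must match the target instance's AZ

# 1. Snapshot the source data volume (ideally during downtime or off-peak hours)
snapshot = ec2.create_snapshot(
    VolumeId=SOURCE_VOLUME_ID,
    Description="Oracle /oradata migration snapshot",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# 2. Create a new volume from the snapshot in the target Availability Zone
volume = ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone=TARGET_AZ,
    VolumeType="gp2",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# 3. Attach it to the Oracle Linux instance; from the OS, mount it at the same
#    path (/oradata) as on the source before starting the database
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId=TARGET_INSTANCE_ID,
    Device="/dev/sdf",
)
print("Attached", volume["VolumeId"], "to", TARGET_INSTANCE_ID)

The remaining steps from the guide, such as installing a fresh Oracle home, matching the Oracle user and group IDs, and verifying file permissions, still need to be performed on the instance itself.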
|
General
|
consultant
|
Best Practices
|
Migrating_to_Apache_HBase_on_Amazon_S3_on_Amazon_EMR
|
This paper has been archived For the latest technical content refer t o the HTML version: https://docsawsamazoncom/whitepapers/latest/migrate apachehbases3/migrateapachehbases3html Migrating to Apache HBase on Amazon S3 on Amazon EMR Guidelines and Best Practices May 2021 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 Introduction to Apache HBase 1 Introd uction to Amazon EMR 2 Introduction to Amazon S3 3 Introduction to EMRFS 3 Running Apache HBase directly on Amazon S3 with Amazon EMR 3 Use cases for Apache HBase on Amazon S3 5 Planning the Migration to Apache HBase on Amazon S3 6 Preparation task 7 Selecting a Monito ring Strategy 7 Planning for Security on Amazon EMR and Amazon S3 9 Encryption 9 Authentication and Authorization 10 Network 12 Minimal AWS IAM Policy 12 Custom AMIs and Applying Security Controls to Harden your AMI 13 Auditing 14 Identifying Apache HBase and EMRFS Tuning Options 16 Apache HBase on Amazon S3 configuration properties 16 EMRFS Configuration Properties 36 Testing Apache HBase and EMRFS Configuration Values 37 Options to approach performance testing 37 Preparing the Test Environment 39 Preparing your AWS account for performance testing 39 Preparing Amazon S3 for your HBase workload 40 Amazon EMR Cluster Setup 42 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Troubleshooting 45 Migrating and Restoring Apache HBase Tables on Apache HBase on Amazon S3 46 Data Migration 46 Data Restore 47 Deploying into Production 48 Preparing Amazon S3 for Production load 48 Preparing the Production environment 48 Managing the Production Environment 49 Operationalization tasks 49 Conclusion 52 Contributors 52 Further Reading 52 Document Revisions 53 Appendix A: Command Reference 54 Restart HBase 54 Appendix B: AWS IAM Policy Reference 55 Minimal EMR Service Role Policy 55 Minimal Amazon EMR Role for Amazon EC2 (Instance Profile) Policy 58 Minimal Role Policy for User Launchi ng Amazon EMR Clusters 60 Appendix C: Transparent Encryption Reference 63 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract This whitepaper provides an overview of Apache HBase on Amazon S3 and guides data engineers and software developers in the migration of an on premises or HDFS backed Apache HBase cluster to Apache HBase on Amazon S3 The whitepaper offers a migration plan that includes detailed steps 
for each stage of the migration including data migration performance tuning and operational guidance This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Page 1 Introduction In 2006 Amazon Web Services (AWS) began offering IT infrastructure services to businesses in the form of web services —now commonly known as cloud computing One of the key benefits of cloud computing is the opportunit y to replace upfront capital infrastructure expenses with low variable costs that scale with your business With the cloud businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance Instead they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster Today AWS provides a highly reliable scalable low cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in 190 countries around the world Many businesses have been taking advantage of the unique properties of the cloud by migrating their existing Apache Hadoop workloads incl uding Apache HBase to Amazon EMR and Amazon Simple Storage Service ( Amazon S3 ) The ability to separate your durable storage layer from your compute layer have flexible and scalable compute and have the ease of inte gration with other AWS services provide s immense benefits and open s up many opportunities to reimagine your data architectures Introduction to Apache HBase Apache HBase is a massively scalable distributed big data store in the Apache Hadoop ecosystem It is an open source non relational versioned database that runs on top of the Apache Hadoop Distributed File System (HDFS) It is built for random strictly consistent realtime access for tables with billions of rows and millions of columns It has tight integration with Apache Hadoop Apache Hive and Apache Phoenix so you can easily combine massively parallel analytics with fast data access through a variety of interfaces The Apache HBase data model throughput and fault tolerance are a good match for workloads in ad tech web analytics financial services applications using time series data and many more Here are some of the features and benefits when you run Apache HBase : • Strongly consistent reads and writes – when a writer returns all of the readers will see the same value This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 2 • Scalability – individual Apache HBase tables comprise billions of rows and millions of columns Apache HBase stores data in a sparse form to conserve space You can use column families and column prefixes to organize your schemas and to indicate to Apache HBase that the members of the family have a similar access pattern You can also use timestamps and versioning to retain old versions of cells • Graphs and time series – you can use Apache HBase as the foundation for a more specialized data store For example you can use Titan for graph databases and OpenTSDB for time series • Coprocessors – you can write custo m business logic (similar to a trigger or a stored procedure) that runs within Apache HBase and participates in query and update processing ( refer to Apache HBase Coprocessors to learn more) • OLTP a nd analytic workloads you can run massively parallel analytic workloads on data stored in Apache HBase tables by using tools such as Apache 
Hive and Apache Phoenix Apache Phoenix provides ACID transaction capabilities via standard SQL and JDBC APIs For details on how to use Apache Hive with Apache HBase refer to Combine NoSQL and Massively Parallel Analytics Using Apache HBase and Apache Hive on Amazon EMR You also get easy provisioning and scaling access to a pre configured installation of HDFS and automatic node replacement for increased durability Introduction to Amazon EMR Amazon EMR provides a managed Apache Hadoop framework that makes it easy fast and cost effective to process vast amounts of data across dynamically scalable Amazon Elastic Compute Cloud (Amazon EC2 ) instances You can also run other popular distributed engines such as Apache Spark Apache Hive Apache HBase Presto and Apache Flink in Amazon EMR and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB Amazon EMR securely and reliably handles a broad set of big data use cases including log analysis web indexing data transformations (ETL) streaming machine learning financial analysis scientific simulation and bioinformatics For an overview of Amazon EMR refer to Overview of Amazon EMR Architecture and Overview of Amazon EMR This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 3 Introduction to Amazon S3 Amazon Simple Storage Service (Amazon S3) is a durable highly available and infinitely scalable object storage with a simple web service interface to store and retrieve any amount of d ata from anywhere on the web With regard to Apache HBase and Apache Hadoop s toring data on Amazon S3 gives you more flexibility to run and shut down Apache Hadoop clusters when you need to Amazon S3 is commonly used as a durable store for HDFS workloads Due to the durability and performance scalability of Amazon S3 Apache Hadoop workloads that store data on Amazon S3 no longer require the 3x replication as when the data is stored on HD FS Moreover you can resize and shut down Amazon EMR clusters with no data loss or point multiple Amazon EMR clusters at the same data in Amazon S3 Introduction to EMRFS The Amazon EMR platform consists of several layers each with specific functionality and capabilities At the storage layer in addition to HDFS and the local file system Amazon EMR offers the Amazon EMR File System (EMRFS) an implementation of HDFS that all Amazon EMR clusters use for reading and writing files to Amazon S3 EMRFS feat ures include data encryption and data authorization Data encryption allows EMRFS to encrypt the objects it writes to Amazon S3 and to decrypt them during read s Data authorization allows EMRFS to use different AWS Identify and Access Management ( IAM ) roles for EMRFS requests to Amazon S3 based on cluster users groups or the location of EMRFS data in Amazon S3 For more informatio n refer to Using EMR File System (EMRF S) Running Apache HBase directly on Amazon S3 with Amazon EMR When you run Apache HBase on Amazon EMR version 520 or later you can enable HBase on Amazon S3 By using Amazon S3 as a data store for Apache HBase you can separate your cluster’s storage a nd compute nodes This enables you to save costs by sizing your cluster for your compute requirements instead of paying to store your entire dataset with 3x replication in the on cluster HDFS This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: 
https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 4 Many customers have taken advantage of the numerous benefits o f running Apache HBase on Amazon S3 for data storage including lower costs data durability and easier scalability Customers such as Financial Industry Regulatory Agency ( FINRA ) have lowered their costs by 60% by moving t o an HBase on Amazon S3 architecture in addition to the numerous operational benefits that come with decoupling storage from compute and using Amazon S3 as the storage layer HBase on Amazon S3 Architecture An Apache HBase on Amazon S3 allows you to launch a cluster and immediately start querying against data within Amazon S3 You don’t have to configure replication between HBase on HDFS clusters or go through a lengthy snapshot restore process to migrate the data off you r HBase on HD FS cluster to another HBase on HDFS cluster Amazon EMR configures Apache HBase on Amazon S3 to cache data in memory and on disk in your cluster delivering fast performance from active compute nodes You can quickly and easily scale out or scale in comput e nodes without impacting your underlying storage or you can terminate your cluster to save costs and quickly re store it without having to recover using snapshots or other methods This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 5 Using Amazon EMR version 570 or later you can set up a read replica clu ster which allows you to achieve higher read availability by distributing reads across multiple clusters Use cases for Apache HBase on Amazon S3 Apache HBase on Amazon S3 is recommended for applications that require high availability of reads and do not require high availability of writes Apache HBase on Amazon S3 can be configured to achieve high requests per second for Apache HBase’s API calls This configuration together with the proper instance type and cluster size allow s you to find the optimal Apache HBase on Amazon S3 configuration values to support similar requests per second as your HDFS backed clu ster Moreover as Amazon S3 is used as a storage layer you can decouple storage f rom compute have the flexibility to bring up/down clusters as needed and considerably r educe costs of running your Apache HBase cluster Applications that require high availability of reads are supported by Apache HBase on Amazon S3 via Read Replica Clus ters pointing to the same Amazon S3 location Although Apache HBase on Amazon S3 Read Replica Clusters are not part of this whitepaper see Further Reading for more details Since Apache HBase’s Write Ahead Log (WAL) is stored in the cluster i f your application requires support for high availability of writes Apache HBase on HDFS is recommended Specifically you can set up Apache HBase on HDFS with multimaster on an Amazon EC2 custom installation or set up Apache HBase on HDFS on Amazon EMR with an HBase on HDFS replica cluster on Amazon EMR Apache HBase on Amazon S3 is recommended i f your application does not require support for high availability of writes and can tolerate failures during writes/updates If you would like to mitigate the impact of Amazon EMR Master node failure s (or Availability Zone failures that can cause the termination of the Apache HBase on Amazon S3 cluster or any temporary degradation of service due to Apache HBase RegionServer operat ional/transient issues ) we This paper has been 
archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 6 recommend that your pipeline architecture relies on a stream/messaging platform upstream to the Apache HBase on Amazon S3 cluster We recommend that you always use the latest Amazon EMR release so you can benefit from all change s and features continuously added to Apache HBase Planning the Migration to Apache HBase on Amazon S3 To migrate an existing Apache HBase cluster to an Apache HBase on Amazon S3 cluster consider the following activities to help scope and optimize performance for Apache HBase on Amazon S3: • Select a strategy to monitor your Apache HBase cluster's performance • Plan for security on Amazon EMR and Amazon S3 • Identif y Apache HBase and EMRFS tuning option s • Test Apache HBase and EMRFS configuration values • Prepar e the test environment o Prepar e your AWS account for performance testing o Prepar e Amazon S3 for your Apache HBase workload o Set up Amazon EMR cluster o Troubleshoot • Migrat e and restore Apache HBase tables on HBase on Amazon S3 o Migrate d ata o Restore d ata • Deploy into production o Prepar e Amazon S3 for production load o Prepar e the production environment • Manag e the production environment o Manage o perationalization tasks This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 7 Preparation task Before the migr ation starts we recommend that you select a strategy t o monitor the performance of your cluster Selecting a Monitoring Strategy We recommend you use an enterprise third party moni toring agent or Ganglia to guide you during the tuning of Apache HBase on Amazon S3 This agent is helpful to understand the changes in performance when changing Apache HBase properties during your tuning process Moreover this monitori ng allow s quick detection of operational issues when the cluster is in production Monitoring Apache HBase subsystems and dependent systems To measure the overall performance of Apache HBase monitor metrics such as those around Remote Procedure Calls (RPCs ) and the Java virtual machine (JVM ) heap In addition to Apache HBase metrics collect metrics from dependency systems such as HDFS the OS and the network Monitoring the write path To measure the performance of the write path monitor the metrics for the WAL HDFS (on Apache HBase on Amazon S3 on Amazon EMR WALs are on HDFS) Mem Store flushes compactions garbage collections and procedure metrics of a related procedu re Monitoring the read path To measure the performance of the read path monitor the metrics for the block cache and the bucket cache Specifically monitor the number of evictions Garbage Collection (GC) time cache size and cache hit s/misses Monitor ing with a thirdparty tool Apache HBase supports exporting metrics via Java Management Extensions (JMX ) Most third party monitoring agents can then be configured to collect metrics via JMX For more information refer to Using with JMX Section Configuring HBase to expose metrics via JMX will provide the configurations to export Apache HBase metrics via JMX on an Apache HBase on Amazon S3 cluster This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 8 
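One of the planning activities listed above, setting up the Amazon EMR cluster, ultimately comes down to launching a cluster with HBase configured to use Amazon S3 as its root directory. A minimal sketch with boto3 follows; the release label, instance types and counts, subnet, bucket name, and the default EMR roles are assumptions to replace with values from your own sizing and security planning.

# Illustrative sketch: launch an EMR cluster with HBase storing its data on S3.
# Release label, instance types/counts, subnet, bucket, and roles are placeholders
# and assume the default EMR roles already exist in the account.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="hbase-on-s3-test",
    ReleaseLabel="emr-5.30.0",
    Applications=[{"Name": "HBase"}],
    Configurations=[
        # Tell HBase on EMR to use S3 as its storage mode ...
        {"Classification": "hbase",
         "Properties": {"hbase.emr.storageMode": "s3"}},
        # ... and point hbase.rootdir at the bucket that will hold the tables
        {"Classification": "hbase-site",
         "Properties": {"hbase.rootdir": "s3://my-hbase-bucket/hbase"}},
    ],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 3},
        ],
        "Ec2SubnetId": "subnet-0123456789abcdef0",  # private subnet, per the security guidance below
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    VisibleToAllUsers=True,
)
print("Cluster ID:", response["JobFlowId"])

Treat this as a starting point for the test environment; the monitoring, security, and tuning activities discussed in the following sections determine the configuration you eventually take to production.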
Note that the Apache HBase Web UI allows you access to the available metrics In the UI select a Region Server or the Apache HBase Master and then click the “Metrics Dump” tab This tab provide s all available metrics collected from the JMX bean and exposes the metr ics in JSON format For more details on the metrics expose d by Apache HBase refer to Metr icsRegionServerSourcejava Use the following steps to add monitoring int o your Amazon EMR Cluster: • Create an Amazon EMR bootstrap action to set up the agent of any enterprise monitoring tool used in your environment (For more information and example bootstrap actions refer to Create Bootstrap Actions to Install Additional Software • Create a dashboard in your enterprise monitoring tool with the metri cs to monitor per each Amazon EMR Cluster • Create unique tags for each cluster This tagging avoid s multiple clusters writing to the same dashboard In addition to monitoring the Amazon EMR Cluster at every layer of the stack have the monitoring dashboar d for your application’s API available for use during the performance tests for Apache HBase This dashboard keeps track of the performance of the application APIs that rely on Apache HBase Monitoring Cluster components with Ganglia The Ganglia open sourc e project is a scalable distributed system designed to monitor clusters and grids while minimizing the impact on their performance When you enable Ganglia on your cluster you can generate reports and view the performance of the cluster as a whole as we ll as inspect the performance of individual node instances For more information about the Ganglia open source project refer to http://gangliainfo/ For more information about using Ganglia with Amazon EMR clusters refer to Ganglia in Amazon EMR Documentation Configuring Ganglia is out side the scope of this whitepaper Note that Ga nglia produce s high amounts of data for large clusters Consider this information when sizing your cluster If you choose to use Ganglia to This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 9 monitor your production cluster make sure to thoroughly understand Ganglia functionality and configuration properties Planning f or Security on Amazon EMR and Amazon S3 Many customers in regulated industries such as financial services or healthcare require security and compliance controls around their Amazon EMR clusters and Amazon S3 data storage It is important to consider thes e requirements as part of an overall data strategy that adheres to any regulatory or internal data security requirements in an industry such as PCI or HIPAA This section cover s a variety of security best practices around configuring your Amazon EMR envir onment for HBase on Amazon S3 Encryption There are multiple ways to encrypt data at rest in your Amazon EMR clusters If you are using EMRFS to query data on Amazon S3 you can specify one of the following options: • SSE S3: Amazon S3 manage s encryption keys for you • SSE KMS: An AWS Key Management Service (KMS) customer master key (CMK) encrypt s your data server side on Amazon S3 • CSE KMS/CSE C: The encryption and decryption takes place client side on your Amazon EMR cluster and the encrypted object is store d on Amazon S3 You can use keys provided by AWS KMS (CSE KMS) or use a custom Java class that provides the master key (CSE C) When you consider this encryption mode you should think about the overall 
In the context of HBase on Amazon S3, many customers use SSE-S3 and SSE-KMS as their methods of encryption because CSE encryption requires additional key management.

Although the bulk of the data is stored on Amazon S3, you still need to consider the following options for local disk encryption:
• Amazon EMR Security Configuration: Amazon EMR gives you the ability to encrypt your storage volumes using local disk encryption. It uses a combination of open source HDFS encryption and LUKS encryption. If you want to use this feature, you must specify an AWS KMS key ARN or provide a custom Java class with the encryption artifacts.
• Custom AMI: You can create a custom AMI for Amazon EMR and specify Amazon EBS volume encryption to encrypt both your boot and storage volumes.

Amazon EMR security configurations also allow you to choose a method for encrypting data in transit using Transport Layer Security (TLS). You can choose to:
• Manually create PEM certificates, zip them in a file, and reference the file from Amazon S3, or
• Implement a custom certificate provider in Java and specify the Amazon S3 path to the JAR.
For more information on how these certificates are used with different big data technologies, refer to In-Transit Data Encryption with Amazon EMR.

Note that traffic between Amazon S3 and cluster nodes is encrypted using TLS. This encryption is enabled automatically.
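Local disk encryption and in-transit TLS are enabled through the same security configuration mechanism. The sketch below shows roughly how those two sections fit together, assuming a hypothetical AWS KMS key ARN and a hypothetical zip of PEM certificates on Amazon S3 (both placeholders); verify the exact fields against the current Amazon EMR security configuration documentation before using it.

{
  "EncryptionConfiguration": {
    "EnableAtRestEncryption": true,
    "EnableInTransitEncryption": true,
    "AtRestEncryptionConfiguration": {
      "S3EncryptionConfiguration": {
        "EncryptionMode": "SSE-KMS",
        "AwsKmsKey": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
      },
      "LocalDiskEncryptionConfiguration": {
        "EncryptionKeyProviderType": "AwsKms",
        "AwsKmsKey": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
      }
    },
    "InTransitEncryptionConfiguration": {
      "TLSCertificateConfiguration": {
        "CertificateProviderType": "PEM",
        "S3Object": "s3://example-bucket/certs/example-certs.zip"
      }
    }
  }
}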
Authentication and Authorization
Authentication and authorization are two crucial components that must be considered when controlling access to data. Authentication is the verification of an entity, whereas authorization is checking whether the entity actually has access to the data or resources it is asking for. Another way of looking at it is that authentication is the "are you really who you say you are" check, and authorization is the "do you actually have access to what you're asking for" check. For example, Alice can be authenticated as being Alice, but this does not necessarily mean that Alice has authorization, or access, to look at Bob's bank account.

Authentication on Amazon EMR
Kerberos, a network authentication protocol created by the Massachusetts Institute of Technology (MIT), uses secret key cryptography to provide strong authentication and avoid sensitive information, such as passwords or other credentials, being sent over the network in an unencrypted and exposed format. With Kerberos, you maintain a set of services (known as a realm) and users that need to authenticate (known as principals), and then provide a means for these principals to authenticate. You can also integrate your Kerberos setup with other realms. For example, you can have users authenticate from an Active Directory domain or LDAP namespace and have a cross-realm trust set up, such that these authenticated users can be seamlessly authenticated to access your Amazon EMR clusters.

Amazon EMR installs open source Apache Hadoop ecosystem applications on your cluster, meaning that you can leverage the existing security features in these products. For example, you can enable Kerberos authentication for YARN, giving user-level authentication for applications running on YARN, such as HBase.

You can configure Kerberos on an Amazon EMR cluster (known as Kerberizing) to provide a means of authentication for users who use your clusters. We recommend that you become familiar with Kerberos concepts before configuring Kerberos on Amazon EMR. Refer to Use Kerberos Authentication on the Amazon EMR documentation page. See Further Reading for blog posts that show you how to configure Kerberos on your Amazon EMR cluster.

Authorization on Amazon EMR
Authorization on Amazon EMR falls into three general categories:
• Object-level authorization against objects in Amazon S3.
• Component-specific functionality that is built in (such as Apache HBase authorization).
• Tools that provide an intermediary access layer between users running commands on Apache Hadoop components and the storage layer (such as Apache Ranger). This category is outside the scope of this whitepaper.

Object-level Authorization
Prior to Amazon EMR version 5.10.0, the AWS Identity and Access Management (IAM) role attached to the Amazon EC2 instance profile on Amazon EMR clusters determined data access in Amazon S3. Data access to Amazon S3 could only be granular at the cluster level, making it difficult to have multiple users with potentially different levels of access to data touching the same cluster.

EMRFS fine-grained authorization was introduced with Amazon EMR versions 5.10.0 and later. This authorization allows you to specify the AWS IAM role to assume at the user or group level when EMRFS is accessing Amazon S3. This allows fine-grained access control for Amazon S3 on multi-tenant Amazon EMR clusters, and it also makes it easier to enable cross-account Amazon S3 access to data. For more information on how to configure your security configurations and AWS IAM roles appropriately, refer to Configure AWS IAM Roles for EMRFS Requests to Amazon S3 and Build a Multi-Tenant Amazon EMR Cluster with Kerberos, Microsoft Active Directory Integration, and AWS IAM Roles for EMRFS.
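EMRFS role mappings are also expressed in a security configuration. The fragment below is a rough sketch of what such a mapping can look like, assuming a hypothetical IAM role and Hadoop group name (both placeholders); check the exact JSON layout against Configure AWS IAM Roles for EMRFS Requests to Amazon S3 before relying on it.

{
  "AuthorizationConfiguration": {
    "EmrFsConfiguration": {
      "RoleMappings": [
        {
          "Role": "arn:aws:iam::111122223333:role/example-analyst-role",
          "IdentifierType": "Group",
          "Identifiers": ["example-analyst-group"]
        }
      ]
    }
  }
}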
HBase Authorization
Authorization for Apache HBase on Amazon S3 is feature equivalent to Apache HBase on HDFS, with the ability to set authorization rules at the table, column, and cell level. Note that access to the Apache HBase web UIs is not restricted, even when Kerberos is used.

Network
The network topology is also important when designing for security and privacy. We recommend placing your Amazon EMR clusters in private subnets with only outbound internet access via NAT. Security groups control inbound and outbound access for your individual instances. With Amazon EMR, you can use both Amazon EMR managed security groups and your own to control network access to your instances. By applying the principle of least privilege to your security groups, you can lock down your Amazon EMR cluster to only the applications and/or individuals who need access.

Minimal AWS IAM Policy
By default, the AWS IAM policies associated with Amazon EMR are generally permissive in order to allow customers to easily integrate Amazon EMR with other AWS services. When securing Amazon EMR, a best practice is to start from the minimal set of permissions required for Amazon EMR to function and add permissions as necessary. Since HBase on Amazon S3 depends on Amazon S3 as a storage medium, it is important to ensure that bucket policies are also scoped correctly, such that HBase on Amazon S3 can function while also being secure. Appendix B: AWS IAM Policy Reference at the end of this paper includes three policies that are scoped around what Amazon EMR minimally requires for basic operation. These policies could be further minimized/restricted by removing actions related to Spot pricing and autoscaling.
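As one illustration of scoping the storage side, the bucket policy sketch below grants the cluster's EC2 instance profile role access to a hypothetical HBase root bucket. The bucket name and role ARN are placeholders, and the action list is only an assumption about what an EMRFS-backed HBase root directory typically needs; validate it against the policies in Appendix B and your own testing.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEmrfsAccessToHbaseRootBucket",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/EMR_EC2_DefaultRole"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:AbortMultipartUpload"
      ],
      "Resource": [
        "arn:aws:s3:::example-hbase-root-bucket",
        "arn:aws:s3:::example-hbase-root-bucket/*"
      ]
    }
  ]
}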
Custom AMIs and Applying Security Controls to Harden your AMI
Custom AMIs are another approach you can use to harden and secure your Amazon EMR cluster. Amazon EMR uses an Amazon Linux Amazon Machine Image (AMI) to initialize Amazon EC2 instances when you create and launch a cluster. The AMI contains the Amazon Linux operating system, other software, and the configurations required for each instance to host your cluster applications.

By default, when you create a cluster you don't need to think about the AMI. When the Amazon EC2 instances in your cluster launch, Amazon EMR starts with a default Amazon Linux AMI that Amazon EMR owns, runs any bootstrap actions you specify, and then installs and configures the applications and components that you select. Alternatively, if you use Amazon EMR version 5.7.0 or later, you can specify a custom Amazon Linux AMI when you create a cluster, and customize its root volume size as well. When each Amazon EC2 instance launches, it starts with your custom AMI instead of the Amazon EMR-owned AMI. Specifying a custom AMI is useful for the following cases:
• Encrypt the Amazon EBS root device volumes (boot volumes) of the Amazon EC2 instances in your cluster. For more information, refer to Creating a Custom AMI with an Encrypted Amazon EBS Root Device Volume.
• Preinstall applications and perform other customizations instead of using bootstrap actions, which can improve cluster start time and streamline the startup workflow.
• Implement more sophisticated cluster and node configurations than bootstrap actions allow.

Using a custom AMI, as opposed to a bootstrap action, allows you to have your hardening steps preconfigured into the images you use rather than having to run bootstrap action scripts at instance provision time. You don't have to choose between the two: you can create a custom AMI for the common, less likely to change security characteristics of your cluster and leverage bootstrap actions to pull the latest configurations or scripts that might be cluster specific.

One approach many of our customers take is to apply the Center for Internet Security (CIS) benchmarks to harden their Amazon EMR clusters. For more details, refer to A step-by-step checklist to secure Amazon Linux. It is important to verify each and every control for necessity, and to functionally test against your requirements, when applying these benchmarks to your clusters.

Auditing
The ability to audit compute environments is a key requirement for many customers. There are a variety of ways that you can support this requirement within Amazon EMR:
• For Amazon EMR version 5.14.0 and later, the EMR File System (EMRFS), Amazon EMR's connector for Amazon S3, supports auditing of users who ran queries that accessed data in Amazon S3 through EMRFS. This feature is enabled by default and passes user and group information on to audit logs like AWS CloudTrail, providing you with comprehensive request tracking.
• If it exists, application-specific auditing can be configured and implemented on Amazon EMR.
• You can use tools such as Apache Ranger to implement another layer of auditing and authorization.
• AWS CloudTrail, a service that provides a record of actions taken by a user, role, or AWS service, is integrated with Amazon EMR. AWS CloudTrail captures all API calls for Amazon EMR as events. The calls captured include calls from the Amazon EMR console and code calls to the Amazon EMR API operations. If you create a trail, you can enable continuous delivery of AWS CloudTrail events to an Amazon S3 bucket, including events for Amazon EMR.
• You can also audit the Amazon S3 objects that Amazon EMR is accessing via Amazon S3 access logs. AWS CloudTrail only provides logs for AWS API calls, so if a user runs a job that reads/writes data to Amazon S3, the Amazon S3 data that was accessed by Amazon EMR won't appear in AWS CloudTrail. By using Amazon S3 access logs, you can comprehensively monitor and audit access to your data in Amazon S3 from anywhere, including Amazon EMR.
• Because you have full control over your Amazon EMR cluster, you can always install your own third-party agents or tooling via bootstrap actions or custom AMIs to help support your auditing requirements.

Identifying Apache HBase and EMRFS Tuning Options
Apache HBase on Amazon S3 configuration properties
This section helps you optimize the components that support the read/write path for your application access patterns by identifying the components and properties to configure. Specifically, the goal of tuning is to prepare the initial configurations to influence cluster behavior, storage footprint behavior, and the behavior of the individual components that support the read and write paths.

This whitepaper covers only the HBase tuning properties that were critical to many HBase on Amazon S3 customers during migration. Make sure to test any additional HBase configuration properties that have been tuned on your HDFS-backed cluster but are not included in this section. You also need to tune EMRFS properties to prepare your cluster for scale. This whitepaper should be used together with existing resources or materials on best practices and operational guidelines for HBase.

For a detailed description of the HBase properties mentioned in this document, refer to HBase default configurations and hbase-default.xml (HBase 1.4.6). For more details on the metrics mentioned in this document, refer to MetricsRegionServerSource.java (HBase 1.4.6).

To monitor changes to some of the properties mentioned in this document, you need access to the logs for the master and specific RegionServers. To access the HBase logs during tuning, you can use the HBase Web UI: first select the HBase Master or the specific RegionServer, and then click the "Local Logs" tab. Or, you can SSH to the particular instance that hosts the RegionServer or HBase Master and view the last lines added to the logs under /var/log/hbase.

Next, we identify several settings on HBase, and later on EMRFS, to take into consideration during the tuning stage of the migration.

For some of the HBase properties we propose a starting value or setting; for others, you will need to iterate on a combination of configurations during performance tests to find adequate values. All of the configuration settings that you decide to set can be applied to your Amazon EMR cluster via a configuration object that the Amazon EMR service uses to configure HBase and EMRFS when deploying a new cluster. For more details, see Applying HBase and EMRFS Configurations to the Cluster.

Speeding up the Cluster initialization
Use the properties that follow when you want to speed up the cluster's startup time, region assignments, and region initialization time. These properties are associated with the HBase Master and the HBase RegionServer.

HBase master tuning
hbase.master.handler.count
• This property sets the number of RPC handlers spun up on the HBase Master.
• The default value is 30.
• If your cluster has thousands of regions, you will likely need to increase this value. Monitor the queue size (ipc.queue.size), min and max time in queue, total calls time, and min and max processing time, and then iterate on this value to find the best value for your use case.
• Customers at the 20,000-region scale have configured this property to 4 times the default value.

HBase RegionServer tuning
hbase.regionserver.handler.count
• This property sets the number of RPC handlers created on RegionServers to serve requests. For more information about this configuration setting, refer to hbase.regionserver.handler.count.
• The default value is 30.
• Monitor the number of RPC calls queued, the 99th percentile latency for RPC calls to stay in queue, and RegionServer memory. Iterate on this value to find the best value for your use case.
• Customers at the 20,000-region scale have configured this property to 4 times the default value.

hbase.regionserver.executor.openregion.threads
• This property sets the number of concurrent threads for region opening.
• The default value is 3.
• Increase this value if the number of regions per RegionServer is high.
• For clusters with thousands of regions, it is common to see this value at 10–20 times the default.
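To make this concrete, the following is a minimal sketch of how these startup-related properties could be supplied as an hbase-site configuration classification when creating the cluster. The values shown (4 times the default handler counts and 30 open-region threads) are only illustrative starting points derived from the guidance above, not recommendations for every workload.

[
  {
    "Classification": "hbase-site",
    "Properties": {
      "hbase.master.handler.count": "120",
      "hbase.regionserver.handler.count": "120",
      "hbase.regionserver.executor.openregion.threads": "30"
    }
  }
]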
Preventing initialization loops
The default values of the properties that follow may be too conservative for some use cases. Depending on the number of regions, the number of RegionServers, and the settings you have chosen to control initialization and assignment times, the default values for the master timeouts can prevent your cluster from starting up.

Relevant Master initialization timeouts
hbase.master.initializationmonitor.timeout
• This property sets the amount of time to sleep, in milliseconds, before checking the Master's initialization status.
• The default value is 900000 (15 minutes).
• Monitor masterFinishedInitializationTime and the HBase Master logs for a "master failed to complete initialization" timeout message. Iterate on this value to find the best value for your use case.

hbase.master.namespace.init.timeout
• This property sets the time the master waits for the namespace table to initialize.
• The default value is 300000 (5 minutes).
• Monitor the HBase Master logs for a "waiting for namespace table to be assigned" timeout message. Iterate on this value to find the best value for your use case.

Scaling to a high number of regions
This property allows the HBase Master to handle a high number of regions.
• Set hbase.assignment.usezk to false.
• For detailed information, refer to HBase ZK-less Region Assignment.

Getting a balanced Cluster after initialization
To ensure that the HBase Master only allocates regions when a target number of RegionServers is available, tune the following properties:

hbase.master.wait.on.regionservers.mintostart
hbase.master.wait.on.regionservers.maxtostart
• These properties set the minimum and maximum number of RegionServers the HBase Master will wait for before starting to assign regions.
• By default, hbase.master.wait.on.regionservers.mintostart is set to 1.
• An adequate value for the min and max is around 90% of the total number of RegionServers. A high value for both min and max results in a more uniform distribution of regions across RegionServers.

hbase.master.wait.on.regionservers.timeout
hbase.master.wait.on.regionservers.interval
• The timeout property sets the time the master will wait for RegionServers to check in. The default value is 4500.
• The interval property sets the time period used by the master to decide whether no new RegionServers have checked in. The default value is 1500.
• These properties are especially relevant for a cluster with a large number of regions.
• If your use case requires aggressive initialization times, these properties can be set to lower values so that the condition that depends on them is evaluated earlier.
• Customers at the 20,000-region scale, and with a requirement of low initialization time, have set the timeout to 400 and the interval to 300.
• For more information on the condition used by the master to trigger allocation, refer to HBASE-6389.
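Expressed as a configuration classification, an aggressive initialization setup along the lines described above might look like the sketch below. The wait-on-regionservers values assume a hypothetical 20-node core fleet (so a min/max of 18 is roughly 90%), and the timeout and interval values mirror the aggressive figures quoted above; treat them as starting points to iterate on, not fixed recommendations.

[
  {
    "Classification": "hbase-site",
    "Properties": {
      "hbase.master.initializationmonitor.timeout": "1800000",
      "hbase.master.namespace.init.timeout": "600000",
      "hbase.assignment.usezk": "false",
      "hbase.master.wait.on.regionservers.mintostart": "18",
      "hbase.master.wait.on.regionservers.maxtostart": "18",
      "hbase.master.wait.on.regionservers.timeout": "400",
      "hbase.master.wait.on.regionservers.interval": "300"
    }
  }
]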
Preventing timeouts during Snapshot operations
Tune the following properties to prevent timeouts during snapshot operations:

hbase.snapshot.master.timeout.millis
• This property sets the time the master will wait for a snapshot to conclude. It is especially relevant for tables with a large number of regions.
• The default value is 300000 (5 minutes).
• Monitor the logs for snapshot timeout messages and iterate on this value.
• Customers at the 20,000-region scale have set this property to 1800000 (30 minutes).

hbase.snapshot.thread.pool.max
• This property sets the number of threads used by the snapshot manifest loader operation.
• The default value is 8.
• This value depends on the instance type and the number of regions in your cluster. Monitor snapshot average time, CPU usage, and your application API to ensure the value you choose does not impact application requests.
• Customers at the 20,000-region scale have used 2–8 times the default value for this property.

If you will be performing online snapshots while serving traffic, set the following properties to prevent timeouts during the online snapshot operation:

hbase.snapshot.region.timeout
• Sets the timeout for RegionServers to keep threads in the snapshot request pool waiting.
• The default value is 300000 (5 minutes).
• This property is especially relevant for tables with a large number of regions.
• Monitor memory usage on the RegionServers, monitor the logs for snapshot timeout messages, and iterate on this value. Increasing this value consumes memory on the RegionServers.
• Customers at the 20,000-region scale have used 1800000 (30 minutes) for this property.

hbase.snapshot.region.pool.threads
• Sets the number of threads for snapshotting regions on the RegionServer.
• The default value is 10.
• If you decide to increase the value of this property, consider setting a lower value for hbase.snapshot.region.timeout.
• Monitor snapshot average time, CPU usage, and your application API to ensure the value that you choose does not impact application requests.

Running the balancer for specific periods to minimize the impact of region movements on snapshots
Controlling the operation of the balancer is crucial for smooth operation of the cluster. These properties provide control over the balancer:

hbase.balancer.period
hbase.balancer.max.balancing
• The hbase.balancer.period property configures when the balancer runs, and the hbase.balancer.max.balancing property configures how long the balancer runs.
• These properties are especially relevant if you programmatically take snapshots of the data every few hours, because the snapshot operation will fail if regions are being moved. You can monitor the snapshot average time to gain more insight into the snapshot operation. At a high level, balancing requires flushing data, closing the region, moving the region, and then opening it on a new RegionServer. For this reason, for busy clusters, consider running the balancer every couple of hours and configuring the balancer to run for only one hour.

Tuning the Balancer
Consider the following additional properties when configuring the balancer:
• hbase.master.loadbalancer.class
• hbase.balancer.period
• hbase.balancer.max.balancing
We recommend that you test your current load balancer settings and then iterate on the configurations. The default load balancer is the Stochastic Balancer. If you choose to use the default, refer to StochasticLoadBalancer for more details on the various factors and costs associated with this balancer. Most use cases can use the default values.
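As a sketch, the snapshot and balancer guidance above could be captured in an hbase-site classification like the following. The snapshot timeouts use the 30-minute figure quoted for large clusters, and the balancer period and duration values (run every two hours, balance for at most one hour) are only one way to express the "every couple of hours, for about an hour" suggestion; adjust both based on your own performance tests.

[
  {
    "Classification": "hbase-site",
    "Properties": {
      "hbase.snapshot.master.timeout.millis": "1800000",
      "hbase.snapshot.region.timeout": "1800000",
      "hbase.balancer.period": "7200000",
      "hbase.balancer.max.balancing": "3600000"
    }
  }
]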
Access Pattern considerations and read/write path tuning
This section covers tuning the diverse HBase components that support the read/update/write paths. To properly tune these components, start by identifying the overall access pattern of your application. If the access pattern is read heavy, you can reduce the resources allocated to the write path; the same guideline applies to write-heavy access patterns. For mixed access patterns, you should strive for a balance.

Tuning the Read Path
This section identifies the properties used when tuning the read path. The properties that follow are beneficial for both random read and sequential read access patterns.

Tuning the Size of the BucketCache
The BucketCache is central to HBase on Amazon S3. The properties that follow set the overall size of the BucketCache per instance and allocate a percentage of the total size of the BucketCache to specialized areas, such as the single-access, multi-access, and in-memory areas. For more details, refer to HBASE-18533.

The goal of this section is to configure the BucketCache to support your access pattern. For an access pattern of random reads and sequential reads, it is recommended to cache all data in the BucketCache, which is stored on disk. In other words, each instance allocates part of its disk to the BucketCache so that the total size of the BucketCache across all the instances in the cluster equals the size of the data on Amazon S3. To configure the BucketCache, tune the following properties:

hbase.bucketcache.size
• As a baseline, set the BucketCache to a value equal to the size of data you would like cached.
• This property impacts Amazon S3 traffic: if the data is not in the cache, HBase must retrieve it from Amazon S3.
• If the BucketCache size is smaller than the amount of data being cached, it may cause many cache evictions, which also increases GC overhead and Amazon S3 traffic. Set the BucketCache size to the total size of the dataset required for your application's read access pattern.
• Take into account the available disk resources when setting this property. HBase on Amazon S3 uses HDFS for the write path, so the total disk available for the BucketCache must account for any storage required by Apache Hadoop, the OS, and HDFS. See the Amazon EMR Cluster Setup section for recommendations on sizing the cluster local storage for the BucketCache, choosing the storage type, and its mix (multiple disks versus a single large disk).
• Monitor GC, cache eviction metrics, cache hit ratio, and cache miss ratio (you can use the HBase UI to quickly access these metrics) to support the process of choosing the best value. Also consider the application SLA requirements when increasing or decreasing the BucketCache size. Iterate on this value to find the best value for your use case.

hbase.bucketcache.single.factor
hbase.bucketcache.multi.factor
hbase.bucketcache.memory.factor
• The bucket areas follow the same areas as the LRU cache: a block initially read from Amazon S3 is populated in the single-access area (hbase.bucketcache.single.factor), and consecutive accesses promote that block into the multi-access area (hbase.bucketcache.multi.factor). The in-memory area is reserved for blocks loaded from column families flagged as IN_MEMORY (hbase.bucketcache.memory.factor).
• By default, these areas are sized at 25%, 50%, and 25% of the total BucketCache size, respectively.
• Tune these values according to the access pattern of your application.
• These properties impact Amazon S3 traffic. For example, if single access is more prevalent than multi access, you can reduce the size allocated to the multi-access area. If multi access is common, ensure that the multi-access area is large enough to avoid cache evictions.

hbase.rs.cacheblocksonwrite
• This property forces all blocks that are written to be added to the cache automatically. Set this property to true.
• This property is especially relevant to read-heavy workloads: setting it to true populates the cache and reduces Amazon S3 traffic when a read request for the data is issued. Setting it to false in read-heavy workloads results in reduced read performance and increased Amazon S3 activity.
• HBase on Amazon S3 uses the file-based BucketCache together with the on-heap BlockCache. This setup is commonly referred to as a combined cache. The BucketCache only stores data blocks, and the BlockCache stores bloom filters and indices. The physical location of the file-based BucketCache is the disk, and the location of the BlockCache is the heap.

Prewarming the BucketCache
HBase provides additional properties that control the prefetch of blocks when a region is opening. This is a form of cache pre-warming and is recommended for HBase on Amazon S3, especially for read access patterns. Prewarming the BucketCache reduces Amazon S3 traffic for subsequent requests. Disabling pre-warming in read-heavy workloads results in reduced read performance and increased Amazon S3 activity. To configure HBase to prefetch blocks, set the following properties:

hbase.rs.prefetchblocksonopen
• This property controls whether the server should asynchronously load all of the blocks when a store file is opened (data, meta, and index). Note that enabling this property contributes to the time the RegionServer takes to open a region and, therefore, to initialize.
• Set this value to true to apply the property to all tables. This can also be applied per column family instead of using a global setting; prefer the per-column-family approach over enabling it cluster wide.
• If you set hbase.rs.prefetchblocksonopen to true, the properties that follow increase the number of threads and the size of the queue for the prefetch operation:
o Set hbase.bucketcache.writer.queuelength to 1024 as a starting value. The default value is 64.
o Set hbase.bucketcache.writer.threads to 6 as a starting value. The default value is 3.
o These values should be configured together and should take into account the instance type of the RegionServer and the number of regions per RegionServer. By increasing the number of threads, you may be able to choose a lower value for hbase.bucketcache.writer.queuelength.
o The hbase.rs.prefetchblocksonopen property controls how fast you get data from Amazon S3 during the prefetch.
o Monitor the HBase logs to understand how fast the BucketCache is being initialized, and monitor cluster resources to see the impact of these properties on memory and CPU. Iterate on these values to find the best values for your use case.
o For more details, refer to HBASE-15240.
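A combined read-path sketch of these BucketCache and prefetch settings, expressed as an hbase-site classification, is shown below. The BucketCache size of 102400 (roughly 100 GB per instance, since the property is expressed in MB) is purely a hypothetical figure standing in for the share of the dataset each instance should cache, and the writer queue and thread values are the starting points suggested above.

[
  {
    "Classification": "hbase-site",
    "Properties": {
      "hbase.bucketcache.size": "102400",
      "hbase.rs.cacheblocksonwrite": "true",
      "hbase.rs.prefetchblocksonopen": "true",
      "hbase.bucketcache.writer.queuelength": "1024",
      "hbase.bucketcache.writer.threads": "6"
    }
  }
]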
Modifying the Table Schema to Support Pre-warming
Finally, prefetching can be applied globally or per column family. In addition, the IN_MEMORY region of the BucketCache can be applied per column family. You must change the schema of the tables to set these properties. For each column family, and for your access patterns, you must identify which column families should always be placed in memory and which column families benefit from prefetching. For example, if one column family is never read by the HBase read path (only read by an ETL job), you can save resources on the cluster and avoid using the PREFETCH_BLOCKS_ON_OPEN key or the IN_MEMORY key for that column family. To modify an existing table to use the PREFETCH_BLOCKS_ON_OPEN or IN_MEMORY keys, see the following examples:

hbase shell
hbase(main):001:0> alter 'MyTable', NAME => 'myCF', PREFETCH_BLOCKS_ON_OPEN => 'true'
hbase(main):002:0> alter 'MyTable', NAME => 'myCF2', IN_MEMORY => 'true'

Tuning the Updates/Write Path
This section shows you how to tune and size the MemStore to avoid frequent flushes and small HFiles. As a result, the frequency of compactions and the Amazon S3 traffic are reduced.

hbase.regionserver.global.memstore.size
• This property sets the maximum size of all MemStores in a RegionServer.
• The memory allocated to the MemStores is kept in the main memory of the RegionServers.
• If the value of hbase.regionserver.global.memstore.size is exceeded, updates are blocked and flushes are forced until the total size of all the MemStores in a RegionServer is at or below the value of hbase.regionserver.global.memstore.size.lower.limit.
• The default value is 0.4 (40% of the heap).
• For write-heavy access patterns, you can increase this value to increase the heap area dedicated to all MemStores.
• Consider the number of regions per RegionServer and the number of column families with high write activity when setting this value.
• For read-heavy access patterns, this setting can be decreased to free up resources that support the read path.

hbase.hregion.memstore.flush.size
• This property sets the flush size per MemStore.
• Depending on the SLA of your API, the flush size may need to be higher than the flush size configured on the source cluster.
• This setting impacts the traffic to Amazon S3, the size of HFiles, and the impact of compactions in your cluster. The higher you set the value, the fewer Amazon S3 operations are required and the larger each resulting HFile.
• This value is dependent on the total number of regions per RegionServer and the number of column families with high write activity.
• Use 536870912 bytes (512 MB) as the starting value, then monitor the frequency of flushes, the MemStore flush queue size, and application API response times. If the frequency of flushes and the queue size are high, increase this setting.

hbase.regionserver.global.memstore.size.lower.limit
• When the size of all MemStores exceeds this value, flushes are forced. This property prevents the MemStore from being blocked for updates.
• By default, this property is set to 0.95, that is, 95% of the value set for hbase.regionserver.global.memstore.size.
• This value depends on the number of regions per RegionServer and the write activity in your cluster.
• You might want to decrease this value if, as soon as the MemStores reach this safety threshold, the write activity quickly fills the missing 0.05 and the MemStore is blocked for writes.
• Monitor the frequency of flushes, the MemStore flush queue size, and application API response times. If the frequency and queue size are high, increase this setting.

hbase.hregion.memstore.block.multiplier
• This property is a safety threshold that controls spikes in write traffic.
• Specifically, this property sets the threshold at which writes are blocked: if the MemStore reaches hbase.hregion.memstore.block.multiplier times hbase.hregion.memstore.flush.size bytes, writes are blocked.
• In case of spikes in traffic, this property prevents the MemStore from continuing to grow and, in this way, prevents HFiles with large sizes.
• The default value is 4.
• If your traffic has a constant pattern, consider keeping the default value for this property and tune only hbase.hregion.memstore.flush.size.

hbase.hregion.percolumnfamilyflush.size.lower.bound.min
• For tables that have multiple column families, this property forces HBase to flush only the MemStores of each column family that exceeds hbase.hregion.percolumnfamilyflush.size.lower.bound.min.
• The default value for this property is 16777216 bytes.
• This setting impacts the traffic to Amazon S3: a higher value reduces traffic to Amazon S3.
• For write-heavy access patterns with multiple column families, this property should be changed to a value higher than the default of 16777216 bytes, but less than half of the value of hbase.hregion.memstore.flush.size.

hfile.block.cache.size
• This property sets the percentage of the heap to be allocated to the BlockCache.
• Use the default value of 0.4 for this property.
• By default, the BucketCache stores data blocks and the BlockCache stores bloom filters and indices.
• You will need to allocate enough of the heap to cache indices and bloom filters, if applicable. To measure HFile index and bloom filter sizes, access the web UI of one of the RegionServers.
• Iterate on this value to find the best value for your use case.

hbase.hstore.flusher.count
• This property controls the number of threads available to flush writes from memory to Amazon S3.
• The default value is 2.
• This setting impacts the traffic to Amazon S3: a higher value reduces the MemStore flush queue and speeds up writes to Amazon S3. This setting is valuable for write-heavy environments. The value is dependent on the instance type used by the cluster.
• Test the value and gradually increase it to 10.
• Monitor the MemStore flush queue size, the 99th percentile for flush time, and application API response times. Iterate on this value to find the best value for your use case.

Note: Small clusters with high region density and high write activity should also tune the HDFS properties that allow the HDFS NameNode and DataNode to scale. Specifically, the dfs.datanode.handler.count and dfs.namenode.handler.count properties should be increased to at least 3x their default value of 10.
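Pulling the write-path starting points above together, a configuration sketch could look like the following; it pairs an hbase-site classification with an hdfs-site classification for the NameNode and DataNode handler counts mentioned in the note. All values are the illustrative starting values from this section and should be iterated on during performance testing.

[
  {
    "Classification": "hbase-site",
    "Properties": {
      "hbase.hregion.memstore.flush.size": "536870912",
      "hbase.hregion.memstore.block.multiplier": "4",
      "hbase.hstore.flusher.count": "10",
      "hfile.block.cache.size": "0.4"
    }
  },
  {
    "Classification": "hdfs-site",
    "Properties": {
      "dfs.namenode.handler.count": "30",
      "dfs.datanode.handler.count": "30"
    }
  }
]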
Region size considerations
Since this process is a migration, set the region size to the same region size as on your HDFS-backed cluster. As a reference, on HBase on Amazon S3, customer regions fall into these categories: small regions (1–10 GB), mid-sized regions (10–50 GB), and large regions (50–100 GB).

Controlling the Size of Regions and Region Splits
The hbase.hregion.max.filesize property sets the size of the regions in your cluster. It should be configured together with the hbase.regionserver.region.split.policy property, which is not covered in this whitepaper.
• Use your current cluster's value for hbase.hregion.max.filesize.
o As a starting point, you can use the value in your HDFS-backed cluster.
• Set hbase.regionserver.region.split.policy to the same policy as in your HDFS-backed cluster.
o This property controls when a region should be split.
o The default value is org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy.
• Set hbase.regionserver.regionSplitLimit to the same value as in your HDFS-backed cluster.
o This property acts as a guideline/limit for the RegionServer to stop splitting.
o Its default value is 1000.

Tuning HBase Compactions
This section shows you how to configure the properties that control major compactions, reduce the frequency of minor compactions, and control the size of HFiles to reduce Amazon S3 traffic.

Controlling Major Compactions
In production environments, we recommend that you disable automatic major compaction. However, there should always be a process to run major compactions; some customers opt to have an application that incrementally runs major compactions in the background on a table or RegionServer basis. Set hbase.hregion.majorcompaction to 0 to disable automatically scheduled major compactions.

Reduce the frequency of minor compactions and control the size of HFiles to reduce Amazon S3 traffic
The following properties depend on the MemStore size, the flush size, and the need to control the frequency of minor compactions. They should be set according to your response time needs and the average size of the StoreFiles generated during a MemStore flush. To control the behavior of minor compactions, tune these properties:

hbase.hstore.compaction.min.size
• If a StoreFile is smaller than the value set by this property, the StoreFile will be selected for compaction. If StoreFiles have a size equal to or larger than hbase.hstore.compaction.min.size, hbase.hstore.compaction.ratio is used to determine whether the files are eligible for compaction.
• By default, this value is set to 134217728 bytes.
• This setting depends on the frequency of flushes, the size of the StoreFiles generated by flushes, and hbase.hregion.memstore.flush.size.
• This setting impacts the traffic to Amazon S3: the higher you set the value, the more frequently minor compactions will occur, and therefore the more Amazon S3 activity they will trigger.
• For write-heavy environments with many small StoreFiles, you may want to decrease this value to reduce the frequency of minor compactions, and therefore Amazon S3 activity.
• Monitor the frequency of compactions and the overall StoreFile size, and iterate on this value to find the best value for your use case.

hbase.hstore.compaction.max.size
• If a StoreFile is larger than the value set by this property, the StoreFile is not selected for compaction.
• This value depends on the size of the HFiles generated by flushes and the frequency of minor compactions.
• If you increase this value, you will have fewer, larger StoreFiles and increased Amazon S3 activity. If you decrease this value, you will have more, smaller StoreFiles and reduced Amazon S3 activity.
• Monitor the frequency of compactions, the compaction output size, and the overall StoreFile size, and iterate on this value.

hbase.hstore.compaction.ratio
• Accept the default of 1.0 as a starting value for this property. For more details on this property, refer to hbase-default.xml.

hbase.hstore.compactionThreshold
• If a store reaches hbase.hstore.compactionThreshold, a compaction is run to rewrite the StoreFiles into one.
• A high value results in less frequent minor compactions, larger StoreFiles, longer minor compactions, and less Amazon S3 activity.
• The default value is 3.
• Start with a value of 6; monitor compaction frequency, the average size of StoreFiles, compaction output size, and compaction time, and iterate on this value to get the best value for your use case.

hbase.hstore.blockingStoreFiles
• This property sets the total number of StoreFiles a single store can have before updates are blocked for its region. If this value is exceeded, updates are blocked until a compaction concludes or hbase.hstore.blockingWaitTime is exceeded.
• For write-heavy workloads, use two to three times the default value as a starting value.
• The default value is 16.
• Monitor the frequency of flushes, the blocked request count, and compaction time to set the proper value for this property.

Note that minor and major compactions will flush the BucketCache. For more details, refer to HBASE-1597.
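A compaction-focused hbase-site classification following the guidance in this section might look like the sketch below; the values simply restate the starting points given above (automatic major compactions disabled, a compaction threshold of 6, and a blockingStoreFiles value of roughly three times the default), and they are meant to be iterated on rather than adopted as-is.

[
  {
    "Classification": "hbase-site",
    "Properties": {
      "hbase.hregion.majorcompaction": "0",
      "hbase.hstore.compactionThreshold": "6",
      "hbase.hstore.blockingStoreFiles": "48",
      "hbase.hstore.compaction.min.size": "134217728"
    }
  }
]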
Controlling the storage footprint locally and on Amazon S3
At a high level, with HBase on Amazon S3, WALs are stored on HDFS. When a compaction occurs, previous HFiles are moved to the archive and are only deleted if they are not associated with a snapshot. HBase relies on a cleaner chore that is responsible for deleting unnecessary HFiles and expired WALs.

Ensuring the Cleaner Chore is always running
With HBase 1.4.6 (Amazon EMR version 5.17.0 and later), we recommend that you deploy the cluster with the cleaner enabled. This is the default behavior. The property that sets this behavior is hbase.master.cleaner.interval. We recommend that you use the latest Amazon EMR release. For versions prior to Amazon EMR 5.17.0, see the Operational Considerations section for the HBase shell commands that control the cleaner behavior. To enable the cleaner globally, set hbase.master.cleaner.interval to 1.

Speeding up the Cleaner Chore
HBASE-20555, HBASE-20352, and HBASE-17215 add additional controls to the cleaner chore that deletes expired WALs (HLogCleaner) and expired archived HFiles (HFileCleaner). These controls are available on HBase 1.4.6 (Amazon EMR version 5.17.0) and later. The numbers of threads allocated via the following properties should be configured together, taking into consideration the capacity and instance type of the Amazon EMR master node.

hbase.regionserver.hfilecleaner.large.thread.count
• This property sets the number of threads allocated to clean expired large HFiles.
• hbase.regionserver.thread.hfilecleaner.throttle sets the size that distinguishes between a small and a large file. The default value is 64 MB.
• The value for this property depends on the number of flushes, the write activity in the cluster, and the snapshot deletion frequency.
• The higher the write and snapshot deletion activity, the higher the value should be.
• By default, this property is set to 1.
• Monitor the size of the HBase root directory on Amazon S3 and iterate on this value to find the best value for your use case. Consider the Amazon EMR master CPU resources and the values set for the other configuration properties identified in this section. For more information, see the Enabling Amazon S3 metrics for the HBase on Amazon S3 root directory section.

hbase.regionserver.hfilecleaner.small.thread.count
• This property sets the number of threads allocated to clean expired small HFiles.
• The value for this property depends on the number of flushes, the write activity in the cluster, and the snapshot deletion frequency.
• By default, this property is set to 1.
• The higher the write and snapshot deletion activity, the higher the value should be.
• Monitor the size of the HBase root directory on Amazon S3 and iterate on this value to find the best value for your use case. Consider the Amazon EMR master CPU resources and the values set for the other configuration properties identified in this section.

hbase.cleaner.scan.dir.concurrent.size
• This property sets the number of threads used to scan the oldWALs directories.
• The value for this property depends on the write activity in the cluster.
• By default, this property is set to one quarter of all available cores.
• Monitor HDFS use and iterate on this value to find the best value for your use case. Consider the Amazon EMR master CPU resources and the values set for the other configuration properties identified in this section.

hbase.oldwals.cleaner.thread.size
• This property sets the number of threads used to clean the WALs under the oldWALs directory.
• The value for this property depends on the write activity in the cluster.
• By default, this property is set to 2.
• Monitor HDFS use and iterate on this value to find the best value for your use case. Consider the Amazon EMR master CPU resources and the values set for the other configuration properties identified in this section.

For more details on how to set the configuration properties to clean expired WALs, refer to HBASE-20352.
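For a write-heavy cluster with frequent snapshot deletions, a cleaner-chore configuration could be sketched as follows. The thread counts of 4 are hypothetical illustrations of "higher than the default," chosen only to show the shape of the classification; size them against the master node's CPU capacity as described above.

[
  {
    "Classification": "hbase-site",
    "Properties": {
      "hbase.master.cleaner.interval": "1",
      "hbase.regionserver.hfilecleaner.large.thread.count": "4",
      "hbase.regionserver.hfilecleaner.small.thread.count": "4",
      "hbase.oldwals.cleaner.thread.size": "2"
    }
  }
]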
Porting existing settings to HBase on Amazon S3
Some properties that you have tuned in your on-premises cluster were not included in the Apache HBase tuning section. These configurations include the heap size for HBase and supporting Apache Hadoop components, the Apache HBase split policy, Apache ZooKeeper timeouts, and so on. For these configuration properties, use the value in your HDFS-backed cluster as a starting point, and follow the same process to iterate and determine the best value that supports your use case.

EMRFS Configuration Properties
Starting December 1, 2020, Amazon S3 delivers strong read-after-write consistency automatically for all applications. Therefore, there is no need to enable EMRFS consistent view and other consistent view related configurations, as detailed in Configure Consistent View in the Amazon EMR Management Guide. For more details on Amazon S3 strong read-after-write consistency, see Amazon S3 now delivers strong read-after-write consistency automatically for all applications.

Setting the total number of connections used by EMRFS to read/write data from/to Amazon S3
With HBase on Amazon S3, access to data goes through EMRFS. This means that tasks such as an Apache HBase region opening, MemStore flushes, and compactions all initiate requests to Amazon S3. To support workloads with a large number of regions and large datasets, you must tune the total number of connections to Amazon S3 that EMRFS can make (fs.s3.maxConnections). To tune fs.s3.maxConnections, account for the average size of the HFiles, the number of regions, the frequency of minor compactions, and the overall read and write throughput the cluster is experiencing.

fs.s3.maxConnections
• The default value for HBase on Amazon S3 is 10,000. This value should be set to more than 10,000 for clusters with a large number of regions (2,500+), large datasets (1 TB+), high minor compaction activity, and intense read/write activity.
• Monitor the logs for the ERROR message "Unable to execute HTTP request: Timeout waiting for connection" and iterate on this value. See more details about this error message in the Troubleshooting section.
• Several customers at the 50 TB+ / 20,000-region scale set this property to 50,000.

Testing Apache HBase and EMRFS Configuration Values
Options to approach performance testing
During the testing phase, we recommend that you use the metrics for the relevant HBase subcomponents, together with the overall response times of your application, to gauge the impact of the changes made to HBase properties. We also recommend that you start by testing the HBase configuration settings that contribute to a healthy cluster state at your dataset scale (fast initialization times, a balanced cluster, and so on) and then focus on testing the configuration property values for the read and write/update paths.

We provide guidelines on how to size the cluster compute and local storage resources. The R5/R5d instance types are good candidates for a starting point, as they are memory-optimized instances. As you focus on tuning the read and write/update paths, we recommend that you iterate on the number of regions per RegionServer (cluster size). As a starting value, you can use the same region density as in your HDFS-backed cluster and iterate according to the behavior indicated by the metrics for the RegionServer resources and the HBase read/write path components. For more details, see Sizing Compute Capacity, Selecting an Instance Type. Also consider costs while you iterate on instance size and type; refer to the AWS Simple Monthly Calculator to quickly estimate costs for the different clusters in your test environment.

To test the HBase configuration values you have selected as starting values, use one of the following options.

Traffic Segmentation
If the use case permits and the application traffic can be segmented by API/table, consider creating empty tables pre-partitioned with the same number of regions as the original, and then have the test cluster receive 10–50% of the production traffic. Although this won't be an accurate representation of the production load, you will be able to iterate faster on the configurations for most HBase components. This way, as soon as the HBase configuration values have been identified for the smaller cluster/setup, you can deploy a new cluster, gradually increase the traffic load, and iterate again on the configurations.

Dataset Segmentation
Dataset segmentation is especially relevant for datasets at the terabyte and petabyte scale. If you choose this option and the use case permits, we recommend that you use between 10% and 30% of the overall dataset and iterate to find the HBase configuration values that contribute to a stable cluster and good response times for your application's APIs. Alternatively, you can focus on a few tables at first. As soon as you are satisfied with the performance on a subset of the dataset, or some of the tables, you can deploy a new cluster pointing to the full dataset and iterate again on the configurations. We provide steps on how to migrate and restore the full datasets in the next section.

For both options, when you have identified a set of HBase properties that can be adjusted to improve stability and performance, you can apply the configurations to each node of the cluster with a script and then restart HBase. For more details on the steps to restart HBase, see the Rolling Restart section.

When you are satisfied with the cluster behavior and application response times with segmented traffic and dataset, you can also iterate on the instance size and instance type for both the Amazon EMR master and the Amazon EMR core/task nodes. When you are ready to do so, you can terminate the test cluster, update the Amazon EMR configuration settings, and deploy a new cluster. See the Cluster termination without data loss section to follow the correct cluster termination procedure. Finally, when you are ready to test with the full production traffic and full production dataset, size the cluster accordingly using the metrics from the previous tests as a reference. Then migrate the data and redeploy a new Amazon EMR cluster.

Preparing the Test Environment
Preparing your AWS account for performance testing
To identify the optimal configuration of your HBase on Amazon S3 cluster, you will need to iterate on several configuration values during a testing stage. Not only will you make changes to HBase configurations, but also to the type and family of the cluster's Amazon EC2 instances. To avoid any impact on existing workloads in the account used for testing or production, we recommend that you raise the limits identified in this section according to your testing or production account needs.

Increasing Amazon EC2 and Amazon EBS Limits
To avoid any delays during performance tests, raise the following limits in your AWS account, since you may need to deploy several clusters at the same time (replicas, clusters pointing to different HBase root directories, and so on). If your cluster size is small, the default values may be sufficient. For more details about the current limits applied to your account, refer to Trusted Advisor (login required). If your cluster is expected to have more than 100 instances, open an AWS Support case (login required) to have the following Amazon EC2 and Amazon EBS limits increased:
• R5/R5d family: increase the limit to 200% of your cluster's estimated size for the xlarge, 2xlarge, and 4xlarge sizes.
• Total volume storage of General Purpose SSD (gp2) volumes: increase the limit with additional capacity (4x the total dataset size). For example, if the dataset is 40 TB, the SSD available (instance store or Amazon EBS volumes) must be at least 40 TB. Account for additional storage because you may need to deploy several clusters at the same time (replicas, clusters pointing to different Apache HBase root directories). See the Sizing Local Storage section for more details.

Increasing AWS KMS limits
Amazon S3 encryption works with EMRFS objects read from and written to Amazon S3. If you do not have a security requirement for data at rest, you can skip this section. If your cluster is small, the default values may be sufficient. For additional details about AWS KMS limits, refer to Requests per second limit for each AWS KMS API operation.

Preparing Amazon S3 for your HBase workload
Amazon S3 can scale to support very high request rates for your HBase on Amazon S3 cluster. It's valuable to understand the exact performance characteristics of your HBase workloads when migrating to a new storage layer, especially when moving to an object store such as Amazon S3. Amazon S3 automatically scales to high request rates and currently supports up to 3,500 PUT/POST/DELETE requests per second and 5,500 GET requests per second per prefix in a bucket. If your request rate grows steadily, Amazon S3 automatically scales beyond these rates as needed. If you expect the request rate per prefix to be higher than the preceding rates, or if you expect the request rate to increase rapidly rather than gradually, the Amazon S3 bucket must be prepared to support the workload of your HBase on Amazon S3 cluster. For more details on how to prepare the Amazon S3 bucket, see the Preparing Amazon S3 for production load section. This helps minimize throttling on Amazon S3. To understand how you can recognize that Amazon S3 is throttling the requests from your cluster, see the Troubleshooting section.

Enabling Amazon S3 metrics for the HBase on Amazon S3 root directory
The Amazon CloudWatch request metrics for Amazon S3 enable the collection of Amazon S3 API metrics for a specific bucket. These metrics provide a good understanding of the TPS driven by your HBase cluster, and they can be helpful for identifying operational issues.

Note: Amazon CloudWatch metrics incur a cost. For more information, refer to How Do I Configure Request Metrics for an S3 Bucket? and Monitoring Metrics with Amazon CloudWatch.
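Request metrics are defined per bucket as a metrics configuration, which can be scoped to the HBase root directory with a prefix filter. The sketch below assumes a hypothetical configuration ID and that the HBase root directory lives under an "hbase/" prefix (both placeholders); it is the kind of JSON document you would supply to the Amazon S3 PutBucketMetricsConfiguration API or build through the console workflow referenced above.

{
  "Id": "hbase-root-request-metrics",
  "Filter": {
    "Prefix": "hbase/"
  }
}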
and Monitoring Metrics with Amazon CloudWatch Enabling Amazon S3 lifecycle rules to end and clean up incomplete multipart upl oads HBase on Amazon S3 via EMRFS uses Amazon S3 Multipart API The Multipart upload API enables EMRFS to upload large objects in parts For more details on the Multipart API refer to Multipart Upload Overview Note: After you initiate a multipart upload and upload one or more parts you must either complete or abort the multipart upload to stop storage charges of the uploaded parts Only after you either complete or abort a multipart upload will Amazon S3 free up the parts storage and stop charging you for the parts storage Amazon S3 provides a lifecycle rule that when configured automatically remove s incomplete multipart uploads For complete steps on how to creat e a Bucket Lifecycle Policy and apply it to the HBase root directory bucket refer to Aborting Incomplete Multipart Uploads Usin g a Bucket Lifecycle Policy Alternatively you can use the AWS Console and configure the Lifecycle policy For more details refer to Amazon S3 Lifecycle Management Update – Support for Multipart Uploads and Delete Markers We recommend that you configure the lifecycle policy to end and clean up incomplete multipart uploads after 3 days This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 42 Amazon EMR Cluster Setup Selecting an Am azon EMR Release We strongly recommended that you use the latest release of Amazon EMR when possible Refer to Amazon EMR 5x Release Versions to find the software vers ions available at the latest Amazon EMR release For more details refer to Migrating from Previous HBase Versions We also recommend that you deploy the cluster wi th only the required applications This is especially important in production so you can properly use the full resources of the cluster Applying HBase and EMRFS Configurations to the Cluster Amazon EMR allows the configuration of applications by supplyin g a JSON object with any changes to default values For more information refer to Configuring Applications Applying HBase configurations This section includes guidelines on how to construct the JSON object that can be supplied to the cluster during cluster deployment Most of these properties are configured on the hbasesitexml file Other settings of HBase such as Region and Master server heap size and logging settings have their ow n configuration file and their own classification when setting up the JSON object For an example JSON object to configure the properties written to hbase sitexml see Configure HBase In addition to hbasesite classification you may need to use classification hbaselog4j to change values in HBase's hbaselog4jproperties file and classification hbaseenv to change values in HBase ’s environment Configuring HBase to expose metrics via JMX An example JSON object to configure HBase to expose metrics via JMX can be found below [ { This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 43 "Classification": "hbase env" "Properties": { } "Configurations": [ { "Classification": "export" "Properties": { "HBASE_REGIONSERVER_OPTS": " Dcomsunmanagementjmxremotessl=false Dcomsunmanagementjmxremoteauthenticate=false Dcomsunmanagementjmxremotepor t=10102" "HBASE_MASTER_OPTS": “ 
-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=10101"
        },
        "Configurations": []
      }
    ]
  }
]

Configuring the Log Level for HBase

{
  "Classification": "hbase-log4j",
  "Properties": {
    "log4j.logger.org.apache.hadoop.hbase": "DEBUG"
  }
}

Applying EMRFS configurations

{
  "Classification": "emrfs-site",
  "Properties": {
    "fs.s3.maxConnections": "10000"
  }
}

Sizing the cluster compute and local storage resources

Sizing Compute Capacity

Selecting an Instance Type
When sizing your cluster, you can choose between a large cluster of smaller instances and a small cluster of more powerful instances. We recommend extensive testing to find the instance type that meets your application SLA. As a starting point, you can use the latest generation of memory-optimized instance types (R5/R5d) and the same region density per RegionServer as in your HDFS-backed cluster. R5d instances share the same specifications as R5 instances and also include up to 3.6 TB of local NVMe storage. For more details on these instance types, refer to Now Available: R5, R5d, and z1d Instances. As you progress to tune the read and write paths, first establish a configuration that supports the SLA of your application, then increase the region density by reducing the number of nodes in the cluster.

Sizing Local Storage
The disk requirements of the cluster depend on your application SLA and access patterns. As a rule of thumb, read-intensive applications benefit from caching data in the BucketCache. For this reason, the disk size should be large enough to cover all caching requirements, HDFS requirements (write path), and OS and Apache Hadoop requirements.

Storage options on Amazon EMR
On Amazon EMR, you have the option to choose an Amazon EBS volume or the instance store. The AMI used by your cluster dictates whether the root device volume uses the instance store or an Amazon EBS volume; some AMIs use the Amazon EC2 instance store and some use Amazon EBS. When you configure instance types in Amazon EMR, you can add Amazon EBS volumes, which contribute to the total capacity together with the instance store (if present) and the default Amazon EBS volume. Amazon EBS provides the following volume types: General Purpose (SSD), Provisioned IOPS (SSD), Throughput Optimized (HDD), Cold (HDD), and Magnetic. They differ in performance characteristics and price to support multiple analytic and business needs. For a detailed description of storage options on Amazon EMR, refer to Instance Store and Amazon EBS.

Selecting and Sizing Local Storage for the BucketCache
Most HBase workloads perform well with General Purpose SSD (gp2) volumes. The volume mix per Amazon EMR Core instance can be either two or more large volumes, or multiple small volumes in addition to the root volume. Note that when your instance has multiple volumes, the BucketCache is divided across n-1 volumes; the first volume stores logs and temporary data. See the Tuning the Size of the BucketCache section for details on how to choose a starting value for the size of the BucketCache and the resulting disk requirements for your Amazon EMR Core/Task nodes.
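The configuration objects in this section are supplied when the cluster is created. The following is a minimal sketch of launching an HBase on Amazon S3 test cluster with the AWS CLI. It assumes a local file named configurations.json that contains the classifications shown above, plus an hbase classification setting "hbase.emr.storageMode" to "s3" and an hbase-site classification pointing hbase.rootdir at your Amazon S3 root directory; the cluster name, key pair, subnet, instance types, and instance counts are placeholders to replace with your own values.

# Hypothetical example: launch a test cluster that applies configurations.json
aws emr create-cluster \
  --name "hbase-on-s3-test" \
  --release-label emr-5.17.0 \
  --applications Name=HBase \
  --configurations file://./configurations.json \
  --instance-groups \
    InstanceGroupType=MASTER,InstanceCount=1,InstanceType=r5.2xlarge \
    InstanceGroupType=CORE,InstanceCount=10,InstanceType=r5d.4xlarge \
  --ec2-attributes KeyName=my-key-pair,SubnetId=subnet-0123456789abcdef0 \
  --use-default-roles

Starting from a known-good command such as this makes it easier to iterate on instance type, node count, and configuration values between test runs, as recommended in the Dataset Segmentation section.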
Applying Security Configurations to Amazon EMR and EMRFS
You can use Security Configurations to apply the settings that support at-rest data encryption, in-transit data encryption, and authentication. For more details, see Create a Security Configuration. Depending on the strategy you choose for authorizing access to HBase, HBase configurations can be applied through the same process described in the Applying HBase and EMRFS Configurations to the Cluster section. Because of performance issues reported when block encryption uses 3DES, Transparent Encryption is preferred over encrypting block data transfer. For more details on Transparent Encryption, see the Transparent Encryption Reference section.

Troubleshooting

Error message excerpt: Please reduce your request rate (Service: Amazon S3; Status Code: 503; Error Code: SlowDown …)
Description/Solution: Amazon S3 is throttling requests from your cluster due to an excessive number of transactions per second to specific object prefixes. Find the request rate and prepare the Amazon S3 bucket for that request rate. Use the metrics for the Amazon S3 bucket location of the HBase root directory to review the number of requests for the previous hour (the request rate). See the Preparing Amazon S3 for your HBase workload and Preparing Amazon S3 for Production load sections for details on how to prepare the Amazon S3 bucket location of the HBase root directory for your request rate.

Error message excerpt: Unable to execute HTTP request: Timeout waiting for connection from pool
Description/Solution: Increase the value of the fs.s3.maxConnections property. See the Setting the total number of connections used by EMRFS to read/write data from/to Amazon S3 section for more details on how to tune this property.
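To confirm that throttling is tied to your request rate, you can pull the Amazon S3 request metrics for the root directory bucket, assuming you enabled them as described in the Enabling Amazon S3 metrics section. The sketch below is illustrative: the bucket name my-hbase-root-bucket and the filter ID EntireBucket are placeholders, and you would swap the metric name (GetRequests, PutRequests, and so on) for the API you are investigating.

# Hypothetical example: sum GET requests per minute for the previous hour
aws cloudwatch get-metric-statistics \
  --namespace AWS/S3 \
  --metric-name GetRequests \
  --dimensions Name=BucketName,Value=my-hbase-root-bucket Name=FilterId,Value=EntireBucket \
  --start-time 2018-03-29T15:00:00Z \
  --end-time 2018-03-29T16:00:00Z \
  --period 60 \
  --statistics Sum

Dividing each per-minute Sum by 60 approximates the requests per second driven against the prefix, which you can compare against the 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix noted earlier.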
Migrating and Restoring Apache HBase Tables on Apache HBase on Amazon S3

Data Migration
This paper covers using the ExportSnapshot tool to migrate the data. For additional options, see Tips for Migrating to Apache HBase on Amazon S3 from HDFS.

Creating a Snapshot
To create a snapshot, run the following commands from the HBase shell:

hbase shell
hbase(main):001:0> disable 'table_name'
hbase(main):002:0> snapshot 'table_name', 'table_name_snapshot_date'
hbase(main):003:0> enable 'table_name'

If you are taking the snapshot from a production HBase cluster and cannot afford service disruption, you do not need to disable the table to take a snapshot. There is minimal performance degradation if you keep the table active; however, there may be some inconsistencies between the state of the table at the end of the snapshot operation and the snapshot contents. If you can afford service disruption in your production HBase cluster, disabling the table guarantees that the snapshot is fully consistent with the state of the disabled table.

Validating the Snapshot
As soon as the snapshot has completed, use the following command to check that the snapshot was successful:

hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -stats -snapshot table_name_snapshot_date

Snapshot Info
  Name: table_name_snapshot_date
  Type: FLUSH
  Table: table_name
  Format: 2
  Created: 2018-03-29T16:02:06
  Owner:
10 HFiles (0 in archive), total size 488 K (100.00% 488 K shared with the source table)
0 Logs, total size 0

Exporting a Snapshot to Amazon S3
Next, use org.apache.hadoop.hbase.snapshot.ExportSnapshot to copy the data over to the Apache HBase root directory on Amazon S3:

hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot <snapshot_name> -copy-to s3://<HBase_on_S3_root_dir>/

As an example, the export of 40 TB of data over 4x10 Gbps AWS Direct Connect connections takes approximately four to five hours.
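ExportSnapshot runs as a MapReduce job, so the copy can be parallelized and rate limited. The following variant is a sketch only; the mapper count and per-mapper bandwidth cap (in MB/s) are illustrative values that you would tune to your cluster size and your Direct Connect or network capacity.

hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot table_name_snapshot_date \
  -copy-to s3://<HBase_on_S3_root_dir>/ \
  -mappers 48 \
  -bandwidth 200

Raising the mapper count shortens the copy for large tables, while the bandwidth cap keeps the export from saturating a link shared with production traffic.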
Data Restore

Creating an empty table
If you are restoring data from a snapshot, first create an empty table and then issue a snapshot restore instead of a snapshot clone. A snapshot clone (clone_snapshot) produces an actual copy of the files; a snapshot restore (restore_snapshot) creates links to the files copied to the Amazon S3 root directory.

hbase shell
hbase(main):001:0> create 'table_name', 'cf1'
hbase(main):002:0> disable 'table_name'

Restoring the snapshot from the HBase shell
After creating an empty table, you can restore the snapshot:

hbase(main):004:0> restore_snapshot 'table_name_snapshot'
hbase(main):005:0> enable 'table_name'

Deploying into Production
After you complete the steps in this section, you are ready to migrate the full dataset from your HDFS-backed cluster to HBase on Amazon S3 and restore it to an HBase on Amazon S3 cluster running in your AWS production account.

Preparing Amazon S3 for Production load
Analyze the Amazon CloudWatch metrics for Amazon S3 captured for the HBase root directory in the development account and confirm the number of requests per Amazon S3 API, as noted in the Preparing the Test Environment section. If you expect the request rate for the HBase on Amazon S3 root directory bucket in the production account to increase rapidly beyond the rates in the Preparing the Test Environment section, open a support case to prepare for the workload and to avoid any temporary limits on your request rate. You do not need to open a support case for request rates lower than those in the Preparing the Test Environment section.

Preparing the Production environment
Follow all the steps in the Preparing the Test Environment section to prepare your production environment with the configuration settings you found during the testing phase. To migrate and restore the full dataset into the production environment, follow the steps in the Migrating and Restoring HBase Tables on HBase on Amazon S3 section.

Managing the Production Environment

Operationalization tasks

Node Decommissioning
When a node is gracefully decommissioned by the YARN Resource Manager (during a user-initiated shrink operation, or node failures such as a bad disk), the regions are first closed and then the RegionServer is shut down. You can also gracefully decommission a RegionServer on any active node by stopping the daemon manually; this step may be required while troubleshooting a particular RegionServer in the cluster.

sudo stop hbase-regionserver

During shutdown, the RegionServer's znode expires. The HMaster notices this event and considers that RegionServer a crashed server. The HMaster then reassigns the regions the RegionServer used to serve to other online RegionServers. Depending on the prefetch settings, the RegionServer that is now assigned to serve each region warms its cache.

Rolling Restart
A rolling restart restarts the HMaster process on the master node and the HRegionServer process on all the core nodes. Check for any inconsistencies, and make sure that the HBase balancer is turned off so that the load balancer does not interfere with region deployments. Use the shell to disable the HBase balancer:

hbase(main):001:0> balance_switch false
true
0 row(s) in 0.2970 seconds

The following is a sample script that performs a rolling restart on an Apache HBase cluster. This script should be executed on the Amazon EMR Master node that has the Amazon EC2 key pair (.pem extension) file used to log in to the Amazon EMR Core nodes.

#!/bin/bash
sudo stop hbase-master; sudo start hbase-master
for node in $(yarn node -list | grep -i ip | cut -f2 -d: | cut -f2 -d'G' | xargs); do
  ssh -i ~/hadoop.pem -t -o "StrictHostKeyChecking no" hadoop@$node "sudo stop hbase-regionserver; sudo start hbase-regionserver"
done
# Restart HMaster again to clear out the dead servers list and re-enable the balancer
sudo stop hbase-master; sudo start hbase-master
# Run the hbck utility to make sure HBase is consistent
hbase hbck

Cluster resize
Nodes can be added to or removed from HBase on Amazon S3 clusters by performing a resize operation on the cluster. If an automatic scaling policy was set based on a specific CloudWatch metric (such as IsIdle), the resize operation happens based on that policy. All of these operations are performed gracefully.

Backup and Restore
With HBase on Amazon S3, you can still consider taking snapshots of your tables every few hours (and deleting them after some days) so that you have a point-in-time recovery option available to you. See also the Running the balancer for specific periods to minimize the impact of region movements on snapshots section.
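If you adopt periodic snapshots as your point-in-time recovery mechanism, they can be scripted. The loop below is a sketch only and is not part of the original procedure: it assumes it runs as the hadoop user on the master node (for example, from cron), and the date-stamped snapshot naming is an assumed convention.

#!/bin/bash
# Take a date-stamped snapshot of every table returned by the HBase shell 'list' command
ts=$(date +%Y%m%d%H%M)
tables=$(echo "list" | hbase shell | tail -1 | tr ',' '\n' | tr -d '[]" ')
for t in $tables; do
  echo "snapshot '$t', '${t}_snapshot_${ts}'" | hbase shell
done

Pair this with a matching cleanup job that calls delete_snapshot for snapshots older than your retention window.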
try again" fi sleep 5 done #cleanup temporary files rm tableListSummarytxt tableListtxt booltxt The preceding script can be place on a file and named disable_and_terminatesh Note tha t the script does not exist on the instance You can add an Amazon EMR step to first copy the script to the instance and then run the step to disable and terminate the cluster To run the script you can use the following Amazon EMR Step properties Name="Disable all tables"Jar="command runnerjar"Args=["/bin/bash""/home/hadoop/disable_and_terminat esh"] This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 52 OS and Apache HBase patching Similar to AMI upgrades on Amazon EC2 the Amazon EMR service team plans for application upgrades with every new Amazon EMR version release This removes any OS and Apache HBase patching activities from your team The latest version of Amazon EMR (51 70 as of this paper ) runs Apache HBase version 146 Details of each Amazon EMR version release can be found on Amazon EMR 5x Release Versions Conclusion This paper includes steps to help you migrate from HBase on HDFS to HBase on Amazon S3 The migration plan provided detailed steps and HBase properties to configure when migrating to HBase on Amazon S3 Using the various best practices and recommendations highlighted in this whitepaper we encourage you to test several values for HBase configuration properties so your HBase on Amazon S3 cluster supports the performance requirements of your application and use case Contributors The following individuals contributed to th e first version of this document: • Francisco Oliveira Senior Big Data Consultant Amazon Web Services • Tony Nguyen Senior Big Data Consultant Amazon Web Services • Veena Vasudevan Big Data Support Engineer Amazon Web Services Further Reading For additional information see the following: • HBase on Amazon S3 Documentation • Tips for Mig rating to Apache HBase on Amazon S3 from HDFS • Low Latency Access on Trillions of Records: FINRA’s Architecture Using Apache HBase on Amazon EMR with Amazon S3 • Setting up Read Replica Clusters with HBase on Amazon S3 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 53 • Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory Document Revisions Date Description May 2021 Revie wed for technical accuracy January 2021 Removed information addressing EMRFS Consistent View because Amazon S3 now delivers strong read afterwrite consistency automatically for all applications October 2018 First publication This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 54 Appendix A: Command Reference Restart HBase Commands to run on the master: sudo stop hbase master sudo stop hbase rest sudo stop hbase thrift sudo stop zookeeper server sudo start hbase master sudo start hbase rest sudo start hbase thrift sudo start zookeeper server Commands to run in all core nodes sudo stop hbase regionserver sudo start hbase regionserver This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – 
Migrating to HBase on Amazon S3 Page 55 Appendix B: AWS IAM Policy Reference The policies that follow are annotated with comments remove the comments prior to use Minimal Amazon EMR Service Role Policy { "Version": "2012 1017" "Statement": [ { "Effect": "Allow" "Resource": "*" "Action": [ "ec2:AuthorizeSecurityGroupEgress" "ec2:AuthorizeSe curityGroupIngress" "ec2:CancelSpotInstanceRequests" "ec2:CreateNetworkInterface" "ec2:CreateSecurityGroup" "ec2:CreateTags" "ec2:DeleteNetworkInterface" // This is only needed if you are launching clusters in a private subnet "ec2:DeleteTags" "ec2:DeleteSecurityGroup" // This is only needed if you are using Amazon managed security groups for private subnets You can omit this action if you are using custom security groups "ec2:DescribeAvailabilityZones" "ec2:DescribeAccountAttributes" "ec2:DescribeDhcpOptions" "ec2:DescribeImages" "ec2:DescribeInstanceSt atus" "ec2:DescribeInstances" "ec2:DescribeKeyPairs" "ec2:DescribeNetworkAcls" "ec2:DescribeNetworkInterfaces" "ec2:DescribePrefixLists" "ec2:DescribeRout eTables" "ec2:DescribeSecurityGroups" "ec2:DescribeSpotInstanceRequests" This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 56 "ec2:DescribeSpotPriceHistory" "ec2:DescribeSubnets" "ec2:DescribeTags" "ec2:Desc ribeVpcAttribute" "ec2:DescribeVpcEndpoints" "ec2:DescribeVpcEndpointServices" "ec2:DescribeVpcs" "ec2:DetachNetworkInterface" "ec2:ModifyImageAttribute" "ec2:ModifyInstanceAttribute" "ec2:RequestSpotInstances" "ec2:RevokeSecurityGroupEgress" "ec2:RunInstances" "ec2:TerminateInstances" "ec2:DeleteVolume" "ec2:DescribeVolumeStatus" "ec2:DescribeVolumes" "ec2:DetachVolume" "iam:GetRole" "iam:GetRolePolicy" "iam:ListInstanceProfiles" "iam:ListRolePolicies" "s3:CreateBucket" "sdb:BatchPutAttributes" "sdb:Select" "cloudwatch:PutMetricAlarm" "cloudwatch:DescribeAlarms" "cloudwatch:DeleteAlarms" "application autoscaling:RegisterScalableTarget" "application autoscaling:DeregisterScalableTarget" "application autoscaling:PutScalingPolicy" "application autoscaling:DeleteScalingPolicy" "application autoscaling:Describe*" ] } { "Effect": "Allow" "Resource": ["arn:aws:s3:::examplebucket/*""arn:aws:s3:::examplebucket2/*"] // Here you This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 57 can specify the list of buckets which are going to be storing cluster logs bootstrap action script custom JAR files input & output paths for EMR steps "Action": [ "s3:GetBucketLocation" "s3:GetBucketCORS" "s3:GetObjectVersionForReplication" "s3:GetObject" "s3:GetBucketTagging" "s3:GetObjectVersion" "s3:GetObjectTagging" "s3:ListMultipartUploadParts" "s3:ListBucketByTags" "s3:ListBucket" "s3:ListObjects" "s3:ListBucketMultipartUploads" ] } { "Effect": "Allow" "Resource": "arn:aws:sqs:*:123456789012:AWS ElasticMapReduce *" // This will allow EMR to only perform actions (Creating queue receiving messages deleting queue etc) on SQS queues whose names are prefixed with the literal string AWS ElasticMapReduce "Action": [ "sqs:CreateQueue" "sqs:DeleteQu eue" "sqs:DeleteMessage" "sqs:DeleteMessageBatch" "sqs:GetQueueAttributes" "sqs:GetQueueUrl" "sqs:PurgeQueue" "sqs:ReceiveMessage" ] } { "Effect": "Allow" "Action": "iam:CreateServiceLinkedRole" // EMR needs permissions to create this 
service linked role for launching EC2 spot instances "Resource": "arn:aws:iam::*:role/aws service This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 58 role/spotamazonawscom/AWSServiceRoleForEC2Spot*" "Condition": { "StringLike": { "iam:AWSServiceName": "spotamazonawscom" } } } { "Effect": "Allow" "Action": "iam:PassRole" // We are passing the custom EC2 instance profile (defined below) which has bare minimum permissions "Resource": [ "arn:aws:iam::*:role/Custom_EMR_EC2_role" "arn:aws:iam::*:role/ EMR_AutoScaling_DefaultRole" ] } ] } Minimal Amazon EMR Role for Amazon EC2 (Instance Profile) Policy { "Version": "2012 1017" "Statement": [ { "Effect": "Allow" "Resource": "*" "Action": [ "ec2:Describe*" "elasticmapreduce:Describe*" "elasticmapreduce:ListBootstrapActions" "elasticmapreduce:ListClusters" "elasticmapreduce:ListInstanceGroups" "elasticmapreduce:ListInstances" "elasticmapreduce:ListSteps" ] } { This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 59 "Effect": "Allow" "Resource": [ // Here you can specify the list of buckets which are going to be ac cessed by applications (Spark Hive etc) running on the nodes of the cluster "arn:aws:s3:::examplebucket1/*" "arn:aws:s3:::examplebucket1*" "arn:aws:s3:::examplebucket2/*" "arn:aws:s3:::ex amplebucket2*" ] "Action": [ "s3:GetBucketLocation" "s3:GetBucketCORS" "s3:GetObjectVersionForReplication" "s3:GetObject" "s3:GetBucketTagging" "s3:GetObjectVersion" "s3:GetObjectTagging" "s3:ListMultipartUploadParts" "s3:ListBucketByTags" "s3:ListBucket" "s3:ListObjects" "s3:ListBu cketMultipartUploads" "s3:PutObject" "s3:PutObjectTagging" "s3:HeadBucket" "s3:DeleteObject" ] } { "Effect": "Allow" "Resource": "arn: aws:sqs:*:123456789012:AWS ElasticMapReduce *" // This will allow EMR to only perform actions (Creating queue receiving messages deleting queue etc) on SQS queues whose names are prefixed with the literal string AWS ElasticMapReduce "Action": [ "sqs:CreateQueue" "sqs:DeleteQueue" "sqs:DeleteMessage" "sqs:DeleteMessageBatch" This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 60 "sqs:GetQueueAttributes" "sqs:GetQueueUrl" "sqs:PurgeQueue" "sqs:ReceiveMessage" ] } ] } Minimal Role Policy for User Launching Amazon EMR Clusters // This policy can be attached to an AWS IAM user who will be launching EMR clusters It provides minimum access to the user to launch monitor and terminate EMR clusters { "Version": "2012 1017" "Statement": [ { "Sid": "Statement1" "Effect": "Allow" "Action": "iam:CreateServiceLinkedRole" "Resource": "*" "Condition": { "StringLike": { "iam:AWSServiceName": [ "elasticmapreduceamazonawscom" "elasticmapreduceamazonawscomcn" ] } } } { "Sid": "Statement2" "Effect": "Allow" "Action": [ "iam:GetPolicyVersion" "ec2:AuthorizeSecurityGroupIngress" This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 61 "ec2:Describe Instances" "ec2:RequestSpotInstances" "ec2:DeleteTags" "ec2:DescribeSpotInstanceRequests" "ec2:ModifyImageAttribute" 
"cloudwatch:GetMetricData" "cloudwatc h:GetMetricStatistics" "cloudwatch:ListMetrics" "ec2:DescribeVpcAttribute" "ec2:DescribeSpotPriceHistory" "ec2:DescribeAvailabilityZones" "ec2:CreateRoute" "ec2:RevokeSecurityGroupEgress" "ec2:CreateSecurityGroup" "ec2:DescribeAccountAttributes" "ec2:ModifyInstanceAttribute" "ec2:DescribeKeyPairs" "ec2:DescribeNetworkAcls" "ec2:DescribeRouteTables" "ec2:AuthorizeSecurityGroupEgress" "ec2:TerminateInstances" //This action can be scoped in similar manner like it has been done below for "elasticmapreduce:TerminateJobFlows" "iam:GetPolicy" "ec2:CreateTags" "ec2:DeleteRoute" "iam:ListRoles" "ec2:RunInstances" "ec2:DescribeSecurityGroups" "ec2:CancelSpotInstanceReq uests" "ec2:CreateVpcEndpoint" "ec2:DescribeVpcs" "ec2:DescribeSubnets" "elasticmapreduce:*" ] "Resource": "*" } { "Sid": "Statement3" "Effect": "Allow" This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 62 "Action": [ "elasticmapreduce:TerminateJobFlows" ] "Resource":"*" "Condition": { "StringEquals": { "elasticmapreduce:Resour ceTag/custom_key": "custom_value" // Here you can specify the key value pair of your custom tag so that this IAM user can only delete the clusters which are appropriately tagged by the user } } } { "Sid": "Statement4" "Effect": "Allow" "Action": "iam:PassRole" "Resource": [ "arn:aws:iam::*:role/Custom_EMR_Role" "arn:aws:iam::*:role/Custom_EMR_EC2_role" "arn:aws:iam::*:role/EMR_AutoScaling_DefaultRole" ] } ] } This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 63 Appendix C: Transparent Encryption Reference To configure Transparent Encryption use the following Amazon EMR Configuration JSON: [{"classification":"hdfs encryption zones""propert ies":{"/user/hbase":"hbase key"}}] In addition to the preceding classification you must disable HDFS Opensource Security By default Amazon EMR Security Configurations for a trest Encryption for Local Disks tie Open source HDFS Encryption with LUKs encry ption If you need to configure Transparent Encryption and your application is latency sensitive do not enable at rest encryption via Amazon EMR Security Configuration You can configure LUKS via a bootstrap action To check that WALs are being encrypte d use the following commands: sudo –u hdfs hdfs dfs ls /user/HBase/WAL/ip xxxxx xxec2internal160201520373175110 sudo –u hdfs hdfs crypto getFileEncryptionInfo path /user/HBase/WAL/WALs/ip xxxxx xxec2internal160201520373175110/ip xxxxx xxec2internal%2C16020%2C15203731751101520373184129 To verify that the oldWALs are being encrypted the output to the last command should be the following: {cipherSuite: {name: AES/CTR/NoPadding algorithmBlockSize: 16} cryptoProtocolVersion: CryptoProto colVersion{description='Encryption zones' version=2 unknownValue=null} edek: 7c3c2fcf8337f14bbf815697686de5a696c6670c0f41eb71678b53ee5326c33e This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Migrating to HBase on Amazon S3 Page 64 iv: eac6cf91bdd2eee8496f1ddb19b4fcf8 keyName: HBase key ezKeyVersionName: hbase key@0} Note: The default configurations grant access to the DECRYPT_EEK operation on all keys (/etc/hadoop kms/conf/kms aclsxml) For more details 
see Transparent Encryption in HDFS on Amazon EMR and Transparent Encryption in HDFS
|
General
|
consultant
|
Best Practices
|
Migrating_Your_Databases_to_Amazon_Aurora
|
This paper has been archived For the latest technical content refer t o the HTML version: https://docsawsamazoncom/whitepapers/latest/ migratingdatabasestoamazonaurora/migrating databasestoamazonaurorahtml Migratin g Your Databases to Amazon Aurora First Published June 10 2016 Updated July 28 2021 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction to Amazon Aurora 1 Database migration considerations 3 Migration phases 3 Application consid erations 3 Sharding and read replica considerations 4 Reliability considerations 5 Cost and licensing considerations 6 Other migration considerations 6 Planning your database migration process 7 Homogeneous migration 7 Heterogeneous migration 9 Migrating large databases to Amazon Aurora 10 Partition and shard consolidation on Amazon Aurora 11 Migration options at a glance 12 RDS snapshot migration 13 Migration using Aurora Read Replica 18 Migrating the database schema 21 Homogeneous schema migration 22 Heterogeneous schema migration 23 Schema migration using the AWS Schema Conversion Tool 24 Migrating data 32 Introduction and general approach to AWS DMS 32 Migration methods 33 Migration procedure 34 Testing and cutover 43 Migration testing 44 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Cutover 44 Conclusion 46 Contributors 46 Further reading 46 Document history 47 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract Amazon Aurora is a MySQL and PostgreSQL compatible enterprise grade relational database engine Amazon Aurora is a cloud native database that overcomes many of the limitation s of traditional relational database engines The goal of this whitepaper is to highlight best practices of migrating your existing databases to Amazon Aurora It presents migration considerations and the step bystep process of migrating open source and c ommercial databases to Amazon Aurora with minimum disruption to the applications This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 1 Introduction to Amazon Aurora For decades traditional relational databases have been the primary choice for data storage and persistence These datab ase systems continue to rely on monolithic architectures and were not designed to take 
advantage of cloud infrastructure These monolithic architectures present many challenges particularly in areas such as cost flexibility and availability In order to address these challenges AWS redesigned relational database for the cloud infrastructure and introduced Amazon Aurora Amazon Aurora is a MySQL and PostgreSQL compatible relational database engine that combines the speed availability and security of high end commercial databases with the simplicity and cost effectiveness of open source databases Aurora provides up to five times better performance than MySQL three times better performance than PostgreSQL and comparable performance of high end commercial databases Amazon Aurora is priced at 1/10th the cost of commercial engines Amazon Aurora is available through the Amazon Relational Database Service (Amazo n RDS) platform Like other Amazon RDS databases Aurora is a fully managed database service With the Amazon RDS platform most database management tasks such as hardware provisioning software patching setup configuration monitoring and backup are co mpletely automated Amazon Aurora is built for mission critical workloads and is highly available by default An Aurora database cluster spans multiple Availability Zones in a Region providing out ofthebox durability and fault tolerance to your data acr oss physical data centers An Availability Zone is composed of one or more highly available data centers operated by Amazon Availability Zones are isolated from each other and are connected through lowlatency links Each segment of your database volume i s replicated six times across these Availability Zones Amazon Aurora enables dynamic resizing for database storage space Aurora cluster volumes automatically grow as the amount of data in your database increases with no performance or availability impac t—so there is no need for estimating and provisioning large amount of database storage ahead of time The storage space allocated to your Amazon Aurora database cluster will automatically increase up to a maximum size of 128 tebibytes (TiB) and will automa tically decrease when data is deleted Aurora's automated backup capability supports point intime recovery of your data enabling you to restore your database to any second during your retention period up to the last five minutes Automated backups are stored in Amazon Simple Storage Service This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Auro ra 2 (Amazon S3 ) which is designed for 99999999999% durability Amazon Aurora backups are automatic incremental and continuous and have no impact on database performance For applications that need read only replicas you can create up to 15 Aurora Replicas per Aurora database with very low replica lag These replicas share the same underlying storage as the source instance lowering costs and avoiding the need to perform writes at the replica nodes Optionally Aurora Global Database can be used for high read throughputs across six Regions up to 90 read replicas Amazon Aurora is highly secure and allows you to encrypt your databases using keys that you create and control through AWS Key Management Service ( AWS KMS) On a database instance running with Amazon Aurora encryption data stored at rest in the underlying storage is encrypted as are the automated backups snapshots and replicas in the same cluster Amazon Aurora uses SSL (AES 256) to secure data in tra 
nsit For a complete list of Aurora features see the Amazon Aurora product page Given the rich feature set and cost effectiveness of Amazon Aurora it is increasingly viewed as the go to database for mi ssion critical applications Amazon Aurora Serverless v2 (Preview) is the new version of Aurora Serverless an on demand auto matic scaling configuration of Amazon Aurora that automatically starts up shuts down and scales capacity up or down based on yo ur application's needs It scales instantly from hundreds to hundreds ofthousands of transactions in a fraction of a second As it scales it adjusts capacity in fine grained increments to provide just the right amount of database resources that the appli cation needs There is no database capacity for you to manage you pay only for the capacity your application consumes and you can save up to 90% of your database cost compared to the cost of provisioning capacity for peak Aurora Serverless v2 is a simpl e and cost effective option for any customer who cannot easily allocate capacity because they have variable and infrequent workloads or have a large number of databases If you can predict your application’s requirements and prefer the cost certainty of fi xedsize instances then you may want to continue using fixed size instances Amazon Aurora capabilities discussed in this whitepaper apply to both MySQL and PostgreSQL database engine s unless otherwise specified However the migration practices discusse d in this paper are specific to Aurora MySQL database engine For more information about Aurora best practices specific to PostgreSQL database engine see Working with Amazon Aurora PostgreSQL in the Amazon Aurora user guide This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 3 Database migration considerations A database represents a critical component in the architecture of most applications Migrating the database to a new platform is a significant event in an application’s lifecycle and may have an impact on application functionality performance and reliabi lity You should take a few important considerations into account before embarking on your first migration project to Amazon Aurora Migration phases Because database migrations tend to be complex we advocate taking a phased iterative approach Figure 1 — Migration phases Application considerations Evaluate Aurora features Although most applications can be architected to work with many relational database engines you should make sure that your application work s with Ama zon Aurora Amazon Aurora is designed to be wire compatible with MySQL 56 and 57 Therefore most of the code applications drivers and tools that are used today with MySQL databases can be used with Aurora with little or no change However certain My SQL features like the MyISAM storage engine are not available with Amazon Aurora Also due to the managed nature of the Aurora service SSH access to database nodes is restricted which may affect your ability to install thirdparty tools or plugins on the database host Performance considerations Database per formance is a key consideration when migrating a database to a new platform Therefore many successful database migration projects start with performance evaluations of the new database platform Although the Amazon Aurora Performance Assessment paper gives you a decent idea of overall database performance these benchmarks do not emulate the 
data access patterns of your This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 4 applications For more useful results test the database performance for time sensitive workloads by running your queries (or s ubset of your queries) on the new platform directly Consider these s trategies: • If your current database is MySQL migrate to Amazon Aurora with downtime and performance test your database with a test or staging version of your application or by replaying the production workload • If you are on a non MySQL compliant engine you can selectively copy the busiest tables to Amazon Aurora and test your queries for those tables This gives you a good starting point Of course testing after complete data migrati on will provide a full picture of real world performance of your application on the new platform Amazon Aurora delivers comparable performance with commercial engines and significant improvement over MySQL performance It does this by tightly integrating the database engine with an SSD based virtualized storage layer designed for database workloads This reduc es writes to the storage system minimiz es lock contention and eliminat es delays created by database process threads Our tests with SysBench on r 516xlarge instances show that Amazon Aurora delivers close to 800000 reads per second and 200 000 writes per second five times higher than MySQL running the same benchmark on the same hardware One area where Amazon Aurora significantly improves upon traditional MySQL is highly concurrent workloads In order to maximize your workload’s throughput on Amazon Aurora we recommend architecting your applications to drive a large number of concurrent queries Sharding and read replica considerations If your cu rrent database is sharded across multiple nodes you may have an opportunity to combine these shards into a single Aurora database during migration A single Amazon Aurora instance can scale up to 128 TB supports thousands of tables and supports a signif icantly higher number of reads and writes than a standard MySQL database If your application is read/write heavy consider using Aurora read replicas for offloading readonly workload from the primary database node Doing this can improve concurrency of your primary database for write s and will improve overall read and write This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 5 performance Using read replicas can also lower your costs in a Multi AZ configuration since you may be able to use smaller insta nces for your primary instance while adding failover capabilities in your database cluster Aurora read replicas offer near zero replication lag and you can create up to 15 read replicas Reliability considerations An important consideration with database s is high availability and disaster recovery Determine the RTO ( recovery time objective) and RPO ( recovery point objective) requirements of your application With Amazon Aurora you can significantly improve both these factors Amazon Aurora reduces data base restart times to less than 60 seconds in most database crash scenarios Aurora also moves the buffer cache out of the database process and makes it available immediately at restart time In rare scenarios of hardware and Availability Zone 
failures re covery is automatically handled by the database platform Aurora is designed to provide you zero RPO recovery within an AWS Region which is a major improvement over on premises database systems Aurora maintains six copies of your data across three Availa bility Zones and automatically attempts to recover your database in a healthy AZ with no data loss In the unlikely event that your data is unavailable within Amazon Aurora storage you can restore from a DB snapshot or perform a point intime restore oper ation to a new instance For cross Region DR Amazon Aurora also offers a global database feature designed for globally distributed transactions applications allowing a single Amazon Aurora database to span multiple AWS Regions Aurora uses storage base d replication to replicate your data to other Regions with typical latency of less than one second and without impacting database performance This enables fast local reads with low latency in each Region and provides disaster recovery from Region wide ou tages You can promote the secondary AWS Region for read write workloads in case of an outage or disaster in less than one minute You also have the option to create an Aurora Read Replica of an Aurora MySQL DB cluster in a different AWS Region by using MySQL binary log (binlog) replication Each cluster can have up to five Read Replicas created this way each in a different Region This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 6 Cost and licensing considerations Owning and running databases come with associated costs Before planning a database migration an analysis of the total cost of ownership (TCO) of the new database platform is imperative Migration to a new database platform should ideally lower the total cost of ownership while providing your applications with similar or better features If you are running an open source database engine (MySQL Postgres) your costs are largely related to hardware server management and database management activities However if you are runni ng a commer cial database engine (Oracle SQL Server DB2 and so on ) a significant portion of your cost is database licensing Since Aurora is available at one tenth of the cost of commercial engines many applications moving to Aurora are able to significantly reduce their TCO Even if you are running on an open source engine like MySQL or Postgres with Aurora’s high performance and dual purpose read replicas you can realize meaningful savings by moving to Amazon Aurora See th e Amazon Aurora Pricing page for more information Other migration considerations Once you have considered application suitability performance TCO and reliability factors you should think about what it would take to migrate to th e new platform Estimate code change effort It is important to estimate the amount of code and schema changes that you need to perform while migrating your database to Amazon Aurora When migrating from MySQL compatible databases negligible code changes are required However when migrating from non MySQL engines you may be required to make schema and code changes The AWS Schema Conversion Tool can help to estimate that effort (see the Schema migration using th e AWS Schema Conversion Tool section in this document) Application availability during migration You have options of migrating to Amazon Aurora by taking a predictable downtime approach with your application or 
by taking a near zero downtime approach The approach you choose depend s on the size of your database and the availability requirements of your applications Whatever the case it’s a good idea to consider the impact of the migration process on your application and business before start ing with a database migration The next few sections explain both approaches in detail This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 7 Modify connection string during migration You need a way to point the applications to your new database One option is to modify the connection strings for all of the applications Another common option is to use DNS In this case you don’t use the actual host name of your database instance in your connection string Instead consider creating a canonical name (CNAME) record that points to the host name of your database instance Doing this allows you to change the endpoint to which your application points in a single location rather than tracking and modifying multiple connection string settings If you choose to use this pattern be sure to pay close attention to the time to live (TTL) setting for your CNAME record If this value is set too high then the host name pointed to by this CNAME might be cached longer than desired If this value is set too low additional overhead might be placed on your c lient applications by having to resolve this CNAME repeatedly Though use cases differ a TTL of 5 seconds is usually a good place to start Planning your database migration process The previous section discussed some of the key considerations to take int o account while migrating databases to Amazon Aurora Once you have determined that Aurora is the right fit for your application the next step is to decide on a preliminary migration approach and create a database migration plan Homogen eous migration If your source database is a MySQL 56 or 57 compliant database (MySQL MariaDB Percona and so on ) then migration to Aurora is quite straightforward Homogen eous migration with downtime If your application can accommodate a predictable length of downtime during off peak hours migration with the downtime is the simplest option and is a highly recommended approach Most database migration projects fall into this category as most applications already have a well defined maintenance window You have the foll owing options to migrate your database with downtime • RDS snapshot migration − If your source database is running on Amazon RDS MySQL 56 or 57 you can simply migrate a snapshot of that database to Amazon Aurora For migrations with downtime you either have to stop your application or stop writing to the database while snapshot a nd migration is in progress The time to migrate primarily depends upon the size of the database This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 8 and can be determined ahead of the production migration by running a test migration Snapshot migration option is explained in the RDS Snapshot Migration section • Migration using native MySQL tools — You may use native MySQL tools to migrate your data and schema to Aurora This is a great option when you need more control over the database migration process you are mo re comfortable using native MySQL tools and other 
migration methods are not performing as well for your use case You can create a dump of your data using the mysqldump utility and then import that data into an existing Amazon Aurora MySQL DB cluster Fo r more information see Migrating from MySQL to Amazon Aurora by using mysqldump You can copy th e full and incremental backup files from your database to an Amazon S3 bucket and then restore an Amazon Aurora MySQL DB cluster from those files This option can be considerably faster than migrating data using mysqldump For more information see Migrating data from MySQL by using an Amazon S3 bucket • Migration using AWS Database Migration Service (AWS DM S) — Onetime migration using AWS DMS is another tool for moving your source database to Amazon Aurora Before you can use AWS DMS to move the data you need to copy the database schema from source to target using native MySQL tools For the step bystep p rocess see the Migrating Data section Using AWS DMS is a great option when you don’t have experience using native MySQL tools Homogen eous migration with nearzero downtime In some scenarios you might want to m igrate your database to Aurora with minimal downtime Here are two e xamples: • When your database is relatively large and the migration time using downtime options is longer than your application maintenance window • When you want to run source and target data bases in parallel for testing purposes In such cases you can replicate changes from your source MySQL database to Aurora in real time using replication You have a couple of options to choose from: • Near zero downtime migration using MySQL binlog replication — Amazon Aurora supports traditional MySQL binlog replication If you are running MySQL database chances are that you are already familiar with classic binlog replication setup If that’s the case and you want more control over the migration process This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 9 onetime database load using native tools coupled wi th binlog replication gives you a familiar migration path to Aurora • Near zero downtime migration using AWS Database Migration Service (AWS DMS) — In addition to supporting one time migration AWS DMS also supports real time data replication using change d ata capture (CDC) from source to target AWS DMS takes care of the complexities related to initial data copy setting up replication instances and monitoring replication After the initial database migration is complete the target database remains synchr onized with the source for as long as you choose If you are not familiar with binlog replication AWS DMS is the next best option for homogenous near zero downtime migrations to Amazon Aurora See the section Introduction and General Approach to AWS DMS • Near zero downtime migration using Aurora Read Replica — If your source database is running on Amazon RDS MySQL 56 or 57 you can migrate from a MySQL DB instance to an Aurora MySQL DB cluster by creating an A urora read replica of your source MySQL DB instance When the replica lag between the MySQL DB instance and the Aurora Read Replica is zero you can direct your client applications to the Aurora read replica This migration option is explained in the Migrate using Aurora Read Replica section Heterogeneous migration If you are looking to migrate a non MySQL compliant database (Oracle SQL Server PostgresSQL and so on ) to Amazon Aurora 
several options can help you accomplish this migration quickly and easily Schema migration Schema migration from a non MySQL compliant database to Amazon Aurora can be achieved using the AWS Schema Conversion Tool This tool is a desktop application that helps you convert your datab ase schema from an Oracle Microsoft SQL Server or PostgreSQL database to an Amazon RDS MySQL DB instance or an Amazon Aurora DB cluster In cases where the schema from your source database cannot be automatically and completely converted the AWS Schema Conversion Tool provides guidance on how you can create the equivalent schema in your target Amazon RDS database For details s ee the Migrating the Database Schema section This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 10 Data migration While supporting homo genous migrations with near zero downtime AWS Database Migration Service ( AWS DMS) also supports continuous replication across heterogeneous databases and is a preferred option to move your source database to your target database for both migrations with downtime and migrations with near zero downtime Once the migration has started AWS DMS manages all the complexities of the migration process like data type transformation compression and parallel transfer (for faster data transfer) while ensuring that data changes to the source database that occur during the migration process are automatically replicated to the target Besides using AWS DMS you can use various third party tools like Attunity Replicate Tungsten Replicator Oracle Golden Gate etc to migrate your data to Amazon Aurora Whatever tool you choose take performance and licensing costs into consideration before finalizing your toolset for migration Migrating large databases to A mazon Aurora Migration of large datasets presents unique challenges in every database migration project Many successful large database migration projects use a combination of the following strategies: • Migration with continuous replication — Large database s typically have extended downtime requirements while moving data from source to target To reduce the downtime you can first load baseline data from source to target and then enable replication (using MySQL native tools AWS DMS or third party tools) fo r changes to catch up • Copy static tables first — If your database relies on large static tables with reference data you may migrate these large tables to the target database before migrating your active dataset You can leverage AWS DMS to copy tables selectively or export and import these tables manually • Multiphase migration — Migration of large database with thousands of tables can be broken down into multiple phases For example you may move a set of tables with no cross joins queries every weekend until the source database is fully migrated to the target database Note that in order to achieve this you need to make changes in your application to connect to two databases simultaneously while your dataset is on two distinct nodes Although this is no t a common migration pattern this is an option nonetheless This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 11 • Database cleanup — Many large databases contain data and tables that remain unused In many cases developers 
and DBAs keep backup copies of tables in the same database or they just simply for get to drop unused tables Whatever the reason a database migration project provides an opportunity to clean up the existing database before the migration If some tables are not being used you might either drop them or archive them to another database You might also delete old data from large tables or archive that data to flat files Partition and shard consolidation on Amazon Aurora If you are running multiple shards or functional partitions of your database to achieve high performance you have an o pportunity to consolidate these partitions or shards on a single Aurora database A single Amazon Aurora instance can scale up to 128 TB supports thousands of tables and supports a significantly higher number of reads and writes than a standard MySQL dat abase Consolidating these partitions on a single Aurora instance not only reduce s the total cost of ownership and simplify database management but it also significantly improve s performance of cross partition queries • Functional partitions — Functional partitioning means dedicating different nodes to different tasks For example in an ecommerce application you might have one database node serving product catalog data and another database node capturing and processing orders As a result these partiti ons usually have distinct nonoverlapping schemas • Consolidation strategy — Migrate each functional partition as a distinct schema to your target Aurora instance If your source database is MySQL compliant use native MySQL tools to migrate the schema and then use AWS DMS to migrate the data either one time or continuously using replication If your source database is non MySQL complaint use AWS Schema Conversion Tool to migrate the schemas to Aurora and use AWS DMS for one time load or continuous replic ation • Data shards — If you have the same schema with distinct sets of data across multiple nodes you are leveraging database sharding For example a high traffic blogging service may shard user activity and data across multiple database shards while kee ping the same table schema • Consolidation strategy — Since all shards share the same database schema you only need to create the target schema once If you are using a MySQL compliant database use native tools to migrate the database schema to Aurora If you are using a non MySQL database use AWS Schema Conversion Tool to This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 12 migrate the database schema to Aurora Once the database schema has been migrated it is best to stop writes to the database shards and use native tools or an AWS DMS one time data load to migrate an individual shard to Aurora If writes to the application cannot be stopped for an extended period you might still use AWS DMS with replication but only after proper planning and testing Migration options at a glance Table 1 — Migration options Source database type Migration with downtime Near zero downtime migration Amazon RDS MySQL Option 1: RDS snapshot migration Option 2 : Manual migration using native tools* Option 3 : Schema migration using native tools and data load using AWS DMS Option 1 : Migration using native tools + bin log replication Option 2: Migrate using Aurora Read Replica Option 3 : Schema migration using native tools + AWS DMS for data movement MySQL Amazon EC2 or on premises Option 1 : 
Migration using native tools Option 2 : Schema migration with native tools + AWS DMS for data load Option 1 : Migration using native tools + binlog replication Option 2: Schema migration using native tools + AWS DMS to move data Oracle/SQL server Option 1: AWS Schema Conversion Tool + AWS DMS (recommended) Option 2: Manual or third party tool for schema conversion + manual or third party data load in target Option 1: AWS Schema Conversion Tool + AWS DMS (recommended) Option 2: Manual or third party tool for schema conversion + manual or third party data load in target + thirdparty tool for replication Other non MySQL databases Option: Manual or third party tool for schema conversion + manual or third party data load in target Option: Manual or third party tool for schema conversion + manual or third party data load in target + thirdparty tool for replication (GoldenGate etc) This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 13 *MySQL Native tools: mysqldump SELECT INTO OUTFILE third party tools like mydumper/myloader RDS snapshot migration To use RDS snapshot migration to move to Aurora your MySQL database must be running on Amazon RDS MySQL 56 or 57 and you must make an RDS snapshot of the database This migration method does not work with on premise s databases or databases ru nning on Amazon Elastic Compute Cloud (Amazon EC2) Also if you are running your Amazon RDS MySQL database on a version earlier than 56 you would need to upgrade it to 56 as a prerequisite The biggest advantage to this migration method is that it is t he simplest and requires the fewest number of steps In particular it migrate s over all schema objects secondary indexes and stored procedures along with all of the database data During snapshot migration without binlog replication your source databas e must either be offline or in a read only mode (so that no changes are being made to the source database during migration) To estimate downtime you can simply use the existing snapshot of your database to do a test migration If the migration time fits within your downtime requirements then this may be the best method for you Note that i n some cases migration using AWS DMS or native migration tools can be faster than using snapshot migration If you can’t tolerate extended downtime you can achieve n earzero downtime by creating an Aurora Read Replica from a source RDS MySQL This migration option is explained in Migrating using Aurora Read Replica section in this document You can migrate either a manual or an automated DB snapshot The general steps you must take are as follows: 1 Determine the amount of space that is required to migrate your Amazon RDS MySQL instance to an Aurora DB cluster For more information see the next section 2 Use the Amazon RDS console to create the snapshot in the Region where the Amazon RDS MySQL instance is located 3 Use the Migrate Databas e feature on the console to create an Amazon Aurora DB cluster that will be populated using the DB snapshot from the original DB instance of MySQL This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 14 Note: Some MyISAM tables m ight not convert without errors and may require manual changes For instance the InnoDB engine does not permit an autoincrement 
field to be part of a composite key Also spatial indexes are not currently supported Estimating space requirements for snapshot migration When you migrate a snapshot of a MySQL DB instance to an Aurora DB cluster Aurora uses an Amazon Elastic Block Store (Amazon EBS) volume to format the data from the snapshot before migrating it There are some cases where additional space is needed to for mat the data for migration The two features that can potentially cause space issues during migration are MyISAM tables and using the ROW_FORMAT=COMPRESSED option If you are not using either of these features in your source database then you can skip thi s section because you should not have space issues During migration MyISAM tables are converted to InnoDB and any compressed tables are uncompressed Consequently there must be adequate room for the additional copies of any such tables The size of the migration volume is based on the allocated size of the source MySQL database that the snapshot was made from Therefore if you have MyISAM or compressed tables that make up a small percentage of the overall database size and there is available space in th e original database then migration should succeed without encountering any space issues However if the original database would not have enough room to store a copy of converted MyISAM tables as well as another (uncompressed) copy of compressed tables t hen the migration volume will not be big enough In this situation you would need to modify the source Amazon RDS MySQL database to increase the database size allocation to make room for the additional copies of these tables take a new snapshot of the da tabase and then migrate the new snapshot When migrating data into your DB cluster observe the following guidelines and limitations: • Although Amazon Aurora supports up to 128 TB of storage the process of migrating a snapshot into an Aurora DB cluster is limited by the size of the Amazon EBS volume of the snapshot and therefore is limited to a maximum size of 16 TB This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 15 • NonMyISAM tables in the source database can be up to 16 TB in size However due to additional space requirements during conversion make s ure that none of the MyISAM and compressed tables being migrated from your MySQL DB instance exceed 8 TB in size You might want to modify your database schema (convert MyISAM tables to InnoDB and remove ROW_FORMAT=COMPRESSED ) prior to migrating it into Am azon Aurora This can be helpful in the following cases: • You want to speed up the migration process • You are unsure of how much space you need to provision • You have attempted to migrate your data and the migration has failed due to a lack of provisioned space Make sure that you are not making these changes in your production Amazon RDS MySQL database but rather on a database instance that was restored from your production snapshot For more details on doing this see Reducing the Amount of Space Required to Migrate Data into Amazon Aurora in the Amazon R elational Database Service User Guide Migrating a DB snapshot using the console You can migrate a DB snapshot of an Amazon RDS MySQL DB instance to create an Aurora DB cluster The new DB cluster is populated with the data from the original Amazon RDS MySQL DB instance The DB snapshot must have been made from an RDS DB instance runni ng MySQL 56 or 57 For 
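Because MyISAM and compressed tables are the ones that need extra space and conversion during snapshot migration, it can be useful to inventory them on the source before you migrate. The following sketch is one possible way to do this with Python and the PyMySQL driver; the connection details are placeholders, and the query is a generic information_schema lookup rather than anything prescribed by this paper.

import pymysql

# Connection details are illustrative placeholders.
conn = pymysql.connect(
    host="source-mysql.example.com",
    user="admin",
    password="secret",
    database="information_schema",
)

# List tables that are MyISAM or use ROW_FORMAT=COMPRESSED, with their approximate size,
# since these tables require additional space (and conversion) during snapshot migration.
QUERY = """
    SELECT table_schema,
           table_name,
           engine,
           row_format,
           ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
    FROM information_schema.tables
    WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
      AND (engine = 'MyISAM' OR row_format = 'Compressed')
    ORDER BY size_gb DESC
"""

with conn.cursor() as cur:
    cur.execute(QUERY)
    for schema, table, engine, row_format, size_gb in cur.fetchall():
        print(f"{schema}.{table}: engine={engine}, row_format={row_format}, ~{size_gb} GB")

conn.close()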
information about creating a DB snapshot see Creating a DB snapshot in the Amazon RDS User Guide If the DB snapshot is not in the Region where you want to locate your Aurora DB cluster use the Amazon RDS console to copy the DB snapshot to that Region For information about copying a DB snapshot see Copying a snapshot in Amazon RDS User Guide To migrate a MySQL DB snapshot by using the console do the following: 1 Sign in to the AWS Management Console and open the Amazon RDS console (sign in requ ired) 2 Choose Snapshots 3 On the Snapshots page choose the Amazon RDS MySQL snapshot that you want to migrate into an Aurora DB cluster This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 16 4 Choose Migrate Database 5 On the Migrate Database page specify the values that match your environment and processing requirements as shown in the following illustration For descriptions of these options see Migrating an RDS for MySQL snapshot to Aurora in the Amazon RDS User Guide This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 17 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrati ng Your Databases to Amazon Aurora 18 Figure 2 — Snapshot migration 6 Choose Migrate to migrate your DB snapshot In the list of instances c hoose the appropriate arrow icon to show the DB cluster details and monitor the progress of the migration This details pa nel displays the cluster endpoint used to connect to the prima ry instance of the DB cluster For more information on connecting to an Amazon Aurora DB cluster see Connecting to an Amazon Aurora DB Cluster in the Amazon R elational Database Service User Guide Migration using Aurora Read Replica Aurora uses MySQL DB engines binary log replication functionality to create a special type of DB cluster called an Aurora read replica for a source MySQL DB instance Updates made to the source in stance are asynchronously replicated to Aurora Read Replica This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 19 We recommend creating an Aurora read replica of your source MySQL DB instance to migrate to an Aurora MySQL DB cluster with near zero downtime The migration process begins by creating a DB snaps hot of the existing DB Instance as the basis for a fresh Aurora Read Replica After the replica is set up replication is used to bring it up to date with respect to the source Once the replication lag drops to zero the replication is complete At this point you can promote the Aurora Read Replica into a standalone Aurora DB cluster and point your client applications to it Migration will take a while roughly several hours per tebibyte (TiB) of data Replication runs somewhat faster for InnoDB tables t han it does for MyISAM tables and also benefits from the presence of uncompressed tables If migration speed is a factor you can improve it by moving your MyISAM tables to InnoDB tables and uncompressing any compressed tables For further details refer to Migrating from a MySQL DB instance to Aurora MySQL using Read Replica in 
the Amazon RDS User Guide To use Aurora Read Replica to migrate from RDS MySQL your MySQL database must be running on Amazon RDS MySQL 56 or 57 This migration method does not work with on premises databases or databases running on Amazon Elastic C ompute Cloud (Amazon EC2) Also if you are running your Amazon RDS MySQL database on a version earlier than 56 you would need to upgrade it to 56 as a prerequisite Create a read replica using the Console 1 To migrate an existing RDS MySQL DB Instance s imply select the instance in the AWS Management RDS Console (sign in required) choose Instance Actions and choose Create Aurora read replica : This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 20 2 Specify the Values for the Aurora cluster See Replication with Amazon Aurora Monitor the progress of the migration in the console You can also look at the sequence of e vents in RDS events console 3 After the migration is complete wait for the Replica lag to reach zero on the new Aurora read replica to indicate that the replica is in sync with the source 4 Stop the flow of new transactions to the source MySQL DB instance 5 Promote the Aurora read replica to a standalone DB cluster This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Da tabases to Amazon Aurora 21 6 To see if the process is complete you can check Recent events for the new Aurora cluster: Now you can point your application to use the Aurora cluster’s reader and writer endpoints Migrating the database schema RDS DB s napshot migration migrates both the full schema and data to the new Aurora instance However if your source database location or application uptime requirements do not allow the use of RDS snapshot migratio n then you first need to migrate the database schema from the source database to the target database before you can move the actual data A database schema is a skeleton structure that represents the logical view of the entire database and typically incl udes the following : • Database storage objects — tables columns constraints indexes sequences userdefined types and data types • Database code objects — functions procedures packages triggers views materialized views events SQL scalar functions SQL inline functions SQL table functions attributes variables constants table types public types private types cursors exceptions parameters and other objects In most situations the database schema remains relatively static and therefore you don’t need downtime during the database schema migration step The schema from your source database can be extracted while your source database is up and ru nning without affecting the performance If your application or developers do make frequent changes to the database schema make sure that these changes are either paused while the migration is in process or are accounted for during the schema migration process This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 22 Depending on the type of your source database you can use the techniques discussed in the next sections to migrate the database schema As a prerequisite to schema migration you must have a target Aurora database 
created and available.
Homogeneous schema migration
If your source database is MySQL 5.6 compatible and is running on Amazon RDS, Amazon EC2, or outside AWS, you can use native MySQL tools to export and import the schema.
• Exporting database schema — You can use the mysqldump client utility to export the database schema. To run this utility, connect to your source database and redirect the output of the mysqldump command to a flat file. The --no-data option ensures that only the database schema is exported, without any actual table data. For the complete mysqldump command reference, see mysqldump — A Database Backup Program.
mysqldump -u source_db_username -p --no-data --routines --triggers --databases source_db_name > DBSchema.sql
• Importing database schema into Aurora — To import the schema to your Aurora instance, connect to your Aurora database from a MySQL command line client (or a corresponding Windows client) and direct the contents of the export file into MySQL.
mysql -h aurora_cluster_endpoint -u username -p < DBSchema.sql
Note the following:
• If your source database contains stored procedures, triggers, and views, you need to remove the DEFINER syntax from your dump file. A simple Perl command to do that is given below. Doing this creates all triggers, views, and stored procedures with the currently connected user as DEFINER. Be sure to evaluate any security implications this might have.
perl -pe 's/\sDEFINER=`[^`]+`@`[^`]+`//' < DBSchema.sql > DBSchemaWithoutDEFINER.sql
• Amazon Aurora supports InnoDB tables only. If you have MyISAM tables in your source database, Aurora automatically changes the engine to InnoDB when the CREATE TABLE command is run.
• Amazon Aurora does not support compressed tables (that is, tables created with ROW_FORMAT=COMPRESSED). If you have compressed tables in your source database, Aurora automatically changes ROW_FORMAT to COMPACT when the CREATE TABLE command is run.
Once you have successfully imported the schema into Amazon Aurora from your MySQL 5.6 compatible source database, the next step is to copy the actual data from the source to the target. For more information, see the Introduction and General Approach to AWS DMS later in this paper.
Heterogeneous schema migration
If your source database isn't MySQL compatible, you must convert your schema to a format compatible with Amazon Aurora. Schema conversion from one database engine to another is a nontrivial task and may involve rewriting certain parts of your database and application code. You have two main options for converting and migrating your schema to Amazon Aurora:
• AWS Schema Conversion Tool — The AWS Schema Conversion Tool makes heterogeneous database migrations easier by automatically converting the source database schema and a majority of the custom code, including views, stored procedures, and functions, to a format compatible with the target database. Any code that cannot be automatically converted is clearly marked so that it can be converted manually. You can use this tool to convert source databases running on either Oracle or Microsoft SQL Server to an Amazon Aurora MySQL or PostgreSQL target database in either Amazon RDS or Amazon EC2. Using the AWS Schema Conversion Tool to convert your Oracle, SQL Server, or PostgreSQL schema to an Aurora-compatible format is the preferred method.
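Both schema migration paths above assume that the target Aurora DB cluster is already created and available. Provisioning it can be scripted as well as done through the console; the sketch below is a minimal illustration using boto3, with the Region, identifiers, credentials, instance class, subnet group, and security group chosen as placeholders rather than taken from this paper.

import boto3

rds = boto3.client("rds", region_name="us-east-1")  # Region is an assumption

# Create the Aurora MySQL cluster that will serve as the migration target.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-migration-target",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="ChangeMe12345",            # placeholder; prefer AWS Secrets Manager
    DBSubnetGroupName="my-db-subnet-group",        # placeholder
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder
)

# Add a writer instance to the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="aurora-migration-target-writer",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="aurora-migration-target",
)

# Wait for the instance, then print the endpoints that mysql, mysqldump, or AWS DMS will use.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="aurora-migration-target-writer")
cluster = rds.describe_db_clusters(DBClusterIdentifier="aurora-migration-target")["DBClusters"][0]
print("Writer endpoint:", cluster["Endpoint"])
print("Reader endpoint:", cluster["ReaderEndpoint"])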
For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 24 • Manual schema migration and third party tools — If your source database is not O racle SQL Server or PostgreSQL you can either manually migrate your source database schema to Aurora or use third party tools to migrate schema to a format that is compatible with MySQL 56 Manual schema migration can be a fairly involved process depen ding on the size and complexity of your source schema In most cases however manual schema conversion is worth the effort given the cost savings performance benefits and other improvements that come with Amazon Aurora Schema migration using the AWS Sc hema Conversion Tool The AWS Schema Conversion Tool provides a project based user interface to automatically convert the database schema of your source database into a format that is compatible with Amazon Aurora It is highly recommended that you use AWS Schema Conversion Tool to evaluate the database migration effort and for pilot migration before the actual production migration The following description walks you through the high level steps of using AWS the Schema Conversion Tool For detailed instruc tions see the AWS Schema Conversion Tool User Guide 1 First install the t ool The AWS Schema Conversion Tool is available for th e Microsoft Windows macOS X Ubuntu Linux and Fedora Linux Detailed download and installation instructions can be found in the installation and update section of the user guide Where you install AWS Schema Conversion Tool is important The tool need s to connect to both source and target databases directly in order to convert and apply schema Make sure that the desktop where you install AWS Schema Conv ersion Tool has network connectivity with the source and target databases 2 Install JDBC drivers The AWS Schema Conversion Tool uses JDBC drivers to connect to the source and target databases In order to use this tool you must download these JDBC drivers to your local desktop Instructions for driver download can be found at Installing the required database drivers in the AWS Schema Conversion Tool User Guide Also check the AWS forum for AWS Schema Conversion Tool for instructions on setting up JDBC drivers for different database engines This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 25 3 Create a target database Create an Amazon Aurora target database For instructions on creating an Amazon Aurora database see Creating an Amazon Aurora DB Cluster in the Amazon RDS User Guide 4 Open the AWS Schema Conversion Tool and start the Ne w Project Wizard Figure 3 — Create a new AWS Schema Conversion Tool project 5 Configure the source database and test connectivity between AWS Schema Conversion Tool and the source database Your source database must be reachable from your desktop for this to work so make sure that you have the appropriate network and firewall setti ngs in place This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 26 Figure 4 — Create New Database Migration Project wizard 6 In the next screen select the schema of your source database that you want to convert to Amazon Aurora Figure 5 — Select Schema step of 
the migration wizard This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Au rora 27 7 Run the d atabase migration assessment report This report provides important information regarding the conversion of the schema from your source database to your target Amazon Aurora instance It summarizes all of the sc hema conversion tasks and details the action items for parts of the schema that cannot be automatically converted to Aurora The report also includes estimates of the amount of effort that it will take to write the equivalent code in your target database t hat could not be automatically converted 8 Choose Next to configure the target database You can view this migration report again later Figure 6 — Migration report 9 Configure the target Amazon Aurora database and test connectivi ty between the AWS Schema Conversion Tool and the source database Your target database must be reachable from your desktop for this to work so make sure that you have appropriate network and firewall settings in place 10 Choose Finish to go to the project window 11 Once you are at the project window you have already established a connection to the source and target database and are now ready to evaluate the detailed assessment report and migrate the schema This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 28 12 In the left panel that displays the schema from y our source database choose a schema object to create an assessment report for Right click the object and choose Create Report Figure 7 — Create migration report The Summary tab displays the summary information from the database migration assessment report It shows items that were automatically converted and items that could not be automatically converted For schema items that could not be automatically converted to the tar get database engine the summary includes an estimate of the effort that it would take to create a schema that is equivalent to your source database in your target DB instance The report categorizes the estimated time to convert these schema items as foll ows: • Simple – Actions that can be completed in less than one hour • Medium – Actions that are more complex and can be completed in one to four hours • Significant – Actions that are very complex and will take more than four hours to complete This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 29 Figure 8 — Migration report Important: If you are evaluating the effort required for your database migration project this assessment report is an important artifact to consider Study the assessment report in details to determine what code changes are required in the database schema and what impact the changes might have on your application functionality and design 13 The next step is to convert the schema The converted schema is not immediately applied to the target database Inst ead it is stored locally until you explicitly apply the converted schema to the target database To convert the schema from your source database choose a schema object to convert from the left panel of your project Right click the object and choose Conv ert schema as shown in 
the following illustration This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 30 Figure 9 — Convert schema This action add s converted schema to the right panel of the project window and show s objects that were automatically converted by the AWS Schema Conversion Tool You can respond to the action items in the assessment report in different ways: • Add equivalent schema manually — You can write the portion of the schema that can be automatically converted to your target DB instance by choosing Apply to database in the right panel of your project The schema that is written to your target DB instance won't contain the items that couldn't be automatically converted Those items are listed in your d atabase migration assessment report After applying the schema to your target DB instance you can then manually create the schema in your target DB instance for the items that could not be automatically converted In some cases you may not be able to cre ate an equivalent schema in your target DB instance You might need to redesign a portion of your application and database to use the functionality that is available from the DB engine for your target DB instance In other cases you can simply ignore the schema that can't be automatically converted Caution: If you manually create the schema in your target DB instance do not choose Apply to database until after you have saved a copy of any manual work that you have done Applying the schema from your project to your target DB instance overwrites schema of the same name in the target DB instance and you lose any updates that you added manually This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 31 • Modify your source database schema and refresh the schema in your project — For some items you might be best ser ved to modify the database schema in your source database to the schema that is compatible with your application architecture and that can also be automatically converted to the DB engine of your target DB instance After updating the schema in your source database and verifying that the updates are compatible with your application choose Refresh from Database in the left panel of your project to update the schema from your source database You can then convert your updated schema and generate the database migration assessment report again The action item for your updated schema no longer appears 14 When you are ready to apply your converted schema to your target Aurora instance choose the schema element from the right panel of your project Right click the schema element and choose Apply to database as shown in the following figure Figure 10 — Apply schema to database This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 32 Note: The first time that you apply your converted schema to your target DB instance the AWS Schema Conversion Tool adds an additional schema (AWS_ORACLE_EXT or AWS_SQLSERVER_EXT ) to your target DB instance This schema implements system functions of the sourc e database that are required when writing your converted schema to your target DB instance Do not modify this schema or you 
might encounter unexpected results in the converted schema that is written to your target DB instance When your schema is fully m igrated to your target DB instance and you no longer need the AWS Schema Conversion Tool you can delete the AWS_ORACLE_EXT or AWS_SQLSERVER_EXT schema The AWS Schema Conversion Tool is an easy touse addition to your migration toolkit For additional be st practices related to AWS Schema Conversion Tool see the Best practices for the AWS SCT topic in the AWS Schema Conversion Tool User Guide Migrat ing data After the database schema has been copied from the source database to the target Aurora database the next step is to migrate actual data from source to target While data migration can be accomplished using different tools we recommend moving data using the AWS Database Migration Service (AWS DMS) as it provides both the simplicity and the features needed for the task at hand Introduction and general approach to AWS DMS The AWS Database Migration Service ( AWS DMS) makes it easy for customers to migrate production databases to AWS with minimal downtime You can keep your applications running while you are migrati ng your database In addition the AWS Database Migration Service ensures that data changes to the source database that occur during and after the migration are continuously replicated to the target Migration tasks can be set up in minutes in the AWS Mana gement Console The AWS Database Migration Service can migrate your data to and from widely used database platforms such as Oracle SQL Server MySQL PostgreSQL Amazon Aurora MariaDB and Amazon Redshift The service supports homogenous migrations such as Oracle to Oracle as well as heterogeneous migrations between different database platforms such as Oracle to Amazon Aurora or SQL Server to MySQL You can perform one time migrations or you This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 33 can maintain continuous replication between databases without a customer having to install or configure any complex software AWS DMS works with databases that are on premise s running on Amazon EC2 or running on Amazon RDS However AWS DMS does not work in situation s where both the source database and the target database are on premise s; one endpoint must be in AWS AWS DMS supports specific versions of Oracle SQL Server Amazon Aurora MySQL and PostgreSQL For currently supported versions see the Sources for data migration However this whitepaper is just focusing on Amazon Aurora as a migration target Migration methods AWS DMS provides three methods for migrating data: • Migrate existing data — This method creates the tables in the target database automatically defines the metadata that is required at the target and populates the tables with data from the source database (also referred to as a “ full load”) The data from the tables is loaded in parallel for improved efficiency Tables are only created in case of homogenous migrations and secondary indexes aren’t created automatically by AWS DMS Read further for details • Migrate existing data and replicate ongoing change s — This method does a full load as described above and in addition captures any ongoing changes being made to the source database during the full load and stores them on the replication instance Once the full load is complete the stored changes are applied to the destination database until it has been 
brought up to date with the source database Additionally any ongoing changes being made to the source database continue to be replicated to the destinatio n database to keep them in sync This migration method is very useful when you want to perform a database migration with very little downtime • Replicate data changes only — This method just reads changes from the recovery log file of the source database and applies these changes to the target database o n an ongoing basis If the target database is unavailable these changes are buffered on the replication instance until the target becomes available This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 34 • When AWS DMS is performing a full load migration the processing put s a load on the tables in the source d atabase which could affect the performance of applications that are hitting this database at the same time If this is an issue and you cannot shut down your applications during the migration you can consider the following approaches: o Running the migrat ion at a time when the application load on the database is at its lowest point o Creating a read replica of your source database and then performing the AWS DMS migration from the read replica Migration procedure The general outline for using AWS DMS is as follows: 1 Create a target database 2 Copy the schema 3 Create an AWS DMS replication instance 4 Define the database source and target endpoints 5 Create and run a migration task Create target database Create your target Amazon Aurora database cluster using th e procedure outlined in Creating an Amazon Aurora DB Cluster You should create the target database in the Region and with an instance type that matches your business requirements Also to improve the performance of the migration verify that your target database does not have Multi AZ deployment enabled ; you can enable that once the load has finish ed Copy schema Additionally you should create the schema in this target database AWS DMS supports basic schema migration including the creation of tables and primary keys However AWS DMS doesn't automatically create secondary indexes foreign keys stored proced ures user accounts and so on in the target database For full schema migration details s ee the Migrating the Database Schema section This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 35 Create an AWS DMS replication instance In order to use the AWS DMS servi ce you must create a n AWS DMS replication instance which runs in your VPC This instance read s the data from the source database perform s the specified table mappings and write s the data to the target database In general using a larger replication in stance size speed s up the database migration (although the migration can also be gated by other factors such as the capacity of the source and target databases connection latency and so on ) Also your replication instance can be stopped once your datab ase migration is complete Figure 11 — AWS Database Migration Service AWS DMS currently supports burstable compute and memory optimized instance classes for replication instances The burstable instance classes are low cost standard instances designed to provide a baseline level of CPU performance with the ability to burst 
above the baseline They are suitable for developing con figuring and testing your database migration process as well as for periodic data migration tasks that can benefit from the CPU burst capability The compute optimized instance classes are designed to deliver the highest level of processor performance an d achieve significantly higher packet per second (PPS) performance lower network jitter and lower network latency You should use this instance class if you are performing large heterogeneous migrations and want to minimize the migration time The memor yoptimized instance classes are designed for migrations or replications of highthroughput transaction systems which can consume large amounts of CPU and memory AWS DMS Storage is primarily consumed by log files and cached transactions Normally doing a full load does not require a significant amount of instance storage on your AWS DMS replication instance However if you are doing replication along with your full load then the changes to the source database are stored on the AWS DMS replication insta nce while the full load is taking place If you are migrating a very large This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 36 source database that is also receiving a lot of updates while the migration is in progress then a significant amount of instance storage could be consumed The instances come with 50 GB of instance storage but can be scaled up as appropriate Normally this amount of storage should be more than adequate for most migration scenarios However it's always a good idea to pay attention to storage related metrics Make sure to scale up your storage if you find you are consuming more than the default allocation Also in some extreme cases where very large databases with very high transaction rates are being migrated with replication enabled it is possible that the AWS DMS replication ma y not be able to catch up in time If you encounter this situation it may be necessary to stop the changes to the source database for some number of minutes in order for the replication to catch up before you repoint your application to the target Aurora DB This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 37 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 38 Figure 12 — Create replication instance page in the AWS DMS console Define database source and target endpoints A database endpoint is used by the replication instance to connect to a database To perform a database migrati on you must create both a source database endpoint and a target database endpoint The specified database endpoints can be on premise s running on Amazon EC2 or running on Amazon RDS but the source and target cannot both be on premise s We highly recommended that you test your database endpoint connection after you define it The same page used to create a database endpoint can also be used to test it as explained later in this paper Note: If you have foreign key constraints in your sourc e schema when creating your target endpoint you need to enter the following for Extra connection attributes in the Advanced section: 
initstmt=SET FOREIGN_KEY_CHECKS=0 This disables the foreign key checks while the target tables are being loaded This in t urn prevents the load from being interrupted by failed foreign key checks on partially loaded tables This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 39 Figure 13 — Create database endpoint page in the AWS DMS console This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 40 Create and run a migration task Now that you have created and tested your source database endpoint and your target database endpoint you can create a task to do the data migration When you create a task you specify the replication instance that you have created the database migration method type (discussed earlie r) the source database endpoint and your target database endpoint for your Amazon Aurora database cluster Also under Task Settings if you have already created the full schema in the target database then you should change the Target table preparation mode to Do nothing rather than using the default value of Drop tables on target The latter can cause you to lose aspects of your schema definition like foreign key constraints when it drops and recreates tables When creating a task you can create table mappings that specify the source schema along with the corresponding tables to be migrated to the target endpoint The default mapping method migrate s all source tables to target tables of the same name if they exist Otherwise i t create s the source table(s) on the target (depending on your task settings) Additionally you can create custom mappings (using a JSON file) if you want to migrate only certain tables or if you want to have more control over the field and table mapping process You can also choose to migrate only one schema or all schemas from your source endpoint This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 41 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 42 Figure 14 — Create task page in the AWS DMS console This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migra ting Your Databases to Amazon Aurora 43 You can use the AWS Management Console to monitor the progress of your AWS Database Migration Service (AWS DMS) tasks You can also monitor the resources and network connectivity used The AWS DMS console shows basic statistics for each task including the task status percent complete elapsed ti me and table statistics as the following image shows Additionally you can select a task and display performance metrics for that task including throughput records per second migrated disk and memory use and latency Figure 15 — Task status in AWS DMS console Testing and cutover Once the schema and data have been successfully migrated from the source database to Amazon Aurora you are now ready to perform end toend testing of your migration process The testing approach should be 
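Creating and starting the migration task can also be scripted with the AWS DMS API. The following sketch is illustrative only: the Region and ARNs are placeholders, the selection rule migrates every table in a single hypothetical schema named appdb, and the DO_NOTHING table preparation mode reflects the earlier advice for targets whose schema was created ahead of time.

import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")  # Region is an assumption

# Select every table in the source schema "appdb" (placeholder schema name).
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-appdb",
            "object-locator": {"schema-name": "appdb", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# The schema was pre-created on the target, so do not drop and recreate tables.
task_settings = {"TargetMetadata": {"TargetTablePrepMode": "DO_NOTHING"}}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="aurora-full-load-and-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",  # placeholder
    MigrationType="full-load-and-cdc",  # full load plus ongoing change replication
    TableMappings=json.dumps(table_mappings),
    ReplicationTaskSettings=json.dumps(task_settings),
)

# In practice, wait until the task reaches the "ready" state before starting it.
task_arn = task["ReplicationTask"]["ReplicationTaskArn"]
dms.start_replication_task(
    ReplicationTaskArn=task_arn,
    StartReplicationTaskType="start-replication",
)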
refined after each test migration and the final migration plan should include a test plan that ensures adequate testing of the migrated database This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 44 Migration testing Table 2 — Migration testing Test Category Purpose Basic acceptance tests These pre cutover tests should be automatically run upon completion of the data migration process Their primary purpose is to verify whether the data migration was successful Following are s ome common outputs from these tests: • Total number of items processed • Total number of items imported • Total number of items skipped • Total number of warnings • Total number of errors If any of these totals reported by the tests deviate from the expected values then it means the migration was not succes sful and the issues need to be resolved before moving to the next step in the process or the next round of testing Functional tests These post cutover tests exercise the functionality of the application(s) using Aurora for data storage They include a combination of automated and manual tests The primary purpose of the functional tests is to identify problems in the application caused by the migration of the data to Aurora Nonfunctional tests These post cutover tests assess the nonfunctional characte ristics of the application such as performance under varying levels of load User acceptance tests These post cutover tests should be run by the end users of the application once the final data migration and cutover is complete The purpose of these tests is for the end users to decide if the application is sufficiently usable to meet its primary function in the organization Cutover Once you have com pleted the final migration and testing it is time to point your application to the Amazon Aurora database This phase of migration is known as This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 45 cutover If the planning and testing phase has been run properly cutover should not lead to unexpected issues Precutover actions • Choose a cutover window — Identify a block of time when you can accomplish cutover to the new database with minimum disruption to the business Normally you would select a low activity period for the database (typically nights and/or weekends) • Make sure changes are caught up — If a near zero downtime migration approach was used to replicate database changes from the source to the target database make sure that all database changes are caught up and your target database is not significa ntly lagging behind the source database • Prepare scripts to make the application configuration changes — In order to accomplish the cutover you need to modify database connection details in your application configuration files Large and complex applicati ons may require updates to connection details in multiple places Make sure you have the necessary scripts ready to update the connection configuration quickly and reliably • Stop the application — Stop the application processes on the source database and p ut the source database in read only mode so that no further writes can be made to the source database If the source database changes aren’t fully caught up with the target database wait for some time while these 
changes are fully propagated to the target database.
• Run pre-cutover tests — Run automated pre-cutover tests to make sure that the data migration was successful.
Cutover
• Run cutover — If the pre-cutover checks were completed successfully, you can now point your application to Amazon Aurora. Run the scripts created in the pre-cutover phase to change the application configuration to point to the new Aurora database (a minimal example of such a script is provided at the end of this paper).
• Start your application — At this point, you may start your application. If you have the ability to stop users from accessing the application while it is running, exercise that option until you have run your post-cutover checks.
Post-cutover checks
• Run post-cutover tests — Run predefined automated or manual test cases to make sure your application works as expected with the new database. It's a good strategy to test the read-only functionality of the database first, before running tests that write to the database.
• Enable user access and closely monitor — If your test cases ran successfully, you may give users access to the application to complete the migration process. Both the application and the database should be closely monitored at this time.
Conclusion
Amazon Aurora is a high-performance, highly available, enterprise-grade database built for the cloud. Leveraging Amazon Aurora can result in better performance and greater availability than other open-source databases, and lower costs than most commercial-grade databases. This paper proposes strategies for identifying the best method to migrate databases to Amazon Aurora and details the procedures for planning and completing those migrations. In particular, AWS Database Migration Service (AWS DMS) and the AWS Schema Conversion Tool are the recommended tools for heterogeneous migration scenarios. These powerful tools can greatly reduce the cost and complexity of database migrations.
Contributors
Contributors to this document include:
• Puneet Agarwal, Solutions Architect, Amazon Web Services
• Chetan Nandikanti, Database Specialist Solutions Architect, Amazon Web Services
• Scott Williams, Solutions Architect, Amazon Web Services
• Jonathan Doe, Solutions Architect, Amazon Web Services
Further reading
For additional information, see:
• Amazon Aurora Product Details
• Amazon Aurora FAQs
• AWS Database Migration Service
• AWS Database Migration Service FAQs
Document history
July 28, 2021: Reviewed for technical accuracy
June 10, 2016: First publication
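The following sketch illustrates the DNS-based cutover script referenced in the Cutover section above. It is a minimal example and makes several assumptions: the application already connects through a CNAME record in a Route 53 hosted zone, the hosted zone ID and record name shown are placeholders, and the cluster identifier matches whatever name you gave your target Aurora cluster.

import boto3

rds = boto3.client("rds", region_name="us-east-1")  # Region is an assumption
route53 = boto3.client("route53")

# Look up the writer endpoint of the new Aurora cluster.
cluster = rds.describe_db_clusters(DBClusterIdentifier="aurora-migration-target")["DBClusters"][0]
aurora_endpoint = cluster["Endpoint"]

# Repoint the application-facing CNAME at the Aurora endpoint.
# A short TTL (5 seconds, as suggested earlier in this paper) keeps client caching to a minimum.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "Cutover: point application database CNAME at Aurora",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "db.example.internal.",
                    "Type": "CNAME",
                    "TTL": 5,
                    "ResourceRecords": [{"Value": aurora_endpoint}],
                },
            }
        ],
    },
)
print("Application CNAME now resolves to", aurora_endpoint)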
|
General
|
consultant
|
Best Practices
|
Migrating_Your_Existing_Applications_to_the_AWS_Cloud
|
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 1 of 23 Migrating your Existing Applications to the AWS Cloud A Phasedriven Approach to Cloud Migration Jinesh Varia jvaria@amazoncom October 2010 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 2 of 23 Abstract With Amazon Web Services (AWS) you can provision compute power storage and other resources gaining access to a suite of elastic IT infrastructure services as your business demands them With minimal cost and effort you can move your application to the AWS cloud and reduce capital expenses minimize support and administrative costs and retain the performance security and reliability requirements your business demands This paper helps you build a migration strategy for your company It discuss es steps techniques and methodologies for moving your existing enterprise applications to the AWS cloud To get the most from this paper you should have basic understanding of the different products and features from Amazon Web Services There are several strategies for migrating applications to new environments In this paper we shall share several such strategies that help enterprise companies take advantage of the cloud We discuss a phasedriven step bystep strategy for migrating applications to the cloud More and more enterprises are moving applications to the cloud to modernize their current IT asset base or to prepare for future needs They are taking the plunge picking up a few missioncritical applications to move to the cloud and quickly realizing that there are other applications that are also a good fit for the cloud To illustrate the step bystep strategy we provide three scenarios listed in the table Each scenario discusses the motivation for the migration describes the before and after application architecture details the migration process and summarizes the technical benefits of migration: Scenario Name Solution Use case Motivation For migration Additional Benefits Services Used Company A Web Application Marketing and collaboration Web site Scalability + Elasticity Auto Scaling pro active event based scaling EC2 S3 EBS SimpleDB AS ELB CW RDS Company B Batch processing pipeline Digital Asset Management S olution Faster time to market Automation and improved development productivity EC2 EBS S3 SQS Company C Backend processing workflow Claims Processing System Lower TCO Redundancy Business continuity and Overflow protection EC2 S3 EBS AS SQS IE This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 3 of 23 Introduction Developers and architects looking to build new applications in the cloud can simply design the components processes and workflow for their solution employ the APIs of the cloud of their choice and leverage the latest cloudbased best practices1 for design development testing and deployment In choosing to deploy their solutions in a cloudbased infrastructure like Amazon Web Services (AWS) they can take immediate advantage of instant scalability and elasticity isolated processes reduced 
operational effort ondemand provisioning and automation At the same time many businesses are looking for better ways to migrate their existing applications to a cloudbased infrastructure so that they too can enjoy the same advantages seen with greenfield application development One of the key differen tiators of AWS’ infrastructure services is its flexibility It gives businesses the freedom of choice to choose the programming models languages operating systems and databases they are already using or familiar with As a result many organizations are moving existing applications to the cloud today It is true that some applications (“IT assets”) currently deployed in company data centers or co located facilities might not make technical or business sense to move to the cloud or at least not yet Those assets can continue to stay in place However we strongly believe that there are several assets within an organization that can be moved to the cloud today with minimal effort This paper will help you build an enterprise application migration strategy for your organization The step by step phasedriven approach discussed in the paper will help you identify ideal projects for migration build the necessary support within the organization and migrate applications with confidence Many organizations are taking incremental approach to cloud migration It is very important to understand that with any migration whether related to the cloud or not there are onetime costs involved as well as resistance to change among the staff members (cultural and sociopolitical impedance) While these costs and factors are outside the scope of this technical paper you are advised to take into consideration these issues Begin by building organizational support by evangelizing and training Focus on longterm ROI as well as tangible and intangible factors of moving to the cloud and be aware of the latest developments in the cloud so that you can take full advantage of the cloud benefits There is no doubt that deploying your applications in the AWS cloud can lower your infrastructure costs increases business agility and remove the undifferentiated “heavy lifting” within the enterprise A successful migration largely depends on three things: the complexity of the application architecture; how loosely coupled your application is; and how much effort you are willing to put into migration We have noticed that when customers have followed the step by step approach (discussed in this paper) and have invested time and resources towards building proof of concept projects they clearly see the tremendous potential of AWS and are able to leverage its strengths very quickly 1 Architecting for the Cloud: Best Practices Whitepaper http://mediaamazonwebservicescom/AWS_Cloud_Best_Practicespdf This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 4 of 23 A Phased Strate gy for Migration: Step By Step G uide Figure 1: The Phase Driven Approach to Cloud Migration Phases Benefits Cloud Assessment Financial Assessment (TCO calculation) Security and C ompliance Assessment Technical Assessment (Classify application types) Identify the tools that can be reused and the tools that need to be built Migrate licensed products Create a plan and measure successBusiness case for migration (Lower TCO faster time to market higher flexibility & agility s calability + elasticity) Identify 
gaps between your current traditional legacy architecture and next generation cloud architecture Proof of Concept Get your feet wet with AWS Build a p ilot and validate the technology Test existing software in the cloudBuild confidence with various AWS services Mitigate risk by validating critical pieces of your proposed architecture Moving your Data Understand different storage options in the AWS cloud Migrate file servers to Amazon S3 Migrate commercial RDBMS to EC2 + EBS Migrate MySQL to Amazon RDSRedundancy Durable Storage Elastic Scalable Storage Auto mated Management Backup Moving your Apps Forklift m igration strategy Hybrid migration strategy Build “cloud aware” layers of code as needed Create AMIs for each componentFuture proof scaled out service oriented elastic architecture Leveraging the Cloud Leverage other AWS services Automate elasticity and SDLC Harden s ecurity Create dashboard to manage AWS resources Leverage multiple availability zonesReduction in CapEx in IT Flexibility and agility Automation and improved productivity Higher Availability (HA) Optimization Optimize usage based on demand Improve efficiency Implement a dvanced monitoring and telemetry Reengineer your application Decompose your relational databas esIncreased utilization and transformational impact in OpEx Better visibility through advanced monitoring and telemetry This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 5 of 23 The order of the phases is not important For example several companies prefer to skip Phase 1 (Assessment Phase) and dive right into Phase 2 (Proof of Concept) or perform Application Migration (Phase 4) before they migrate all their data (Phase 3) Phase 1: Cloud Assessment Phase This phase will help you build a business case for moving to the cloud Financial Assessment Weighing the financial considerations of owning and operating a data center or colocated facilities versus employing a cloudbased infrastructure requires detailed and careful analysis In practice it is not as simple as measuring potential hardware expense alongside utility pricing for compute and storage resources Indeed businesses must take a multitude of options into consideration in order to affect a valid comparison between the two alternatives Amazon has published a whitepaper The Economics of the AWS cloud2to help you gather the necessary data for an appropriate comparison This basic TCO methodology and the accompanying Amazon EC2 Cost Calculator uses industry data AWS customer research and userdefined inputs to compare the annual fullyburdened cost of owning operating and maintaining IT infrastructure with the payforuse costs of Amazon EC2 Note that this analysis compares only the direct costs of the IT infrastructure and ignores the many indirect economic benefits of cloud computing including hig h availability reliability scalability flexibility reduced time tomarket and many other cloudoriented benefits Decision makers are encouraged to conduct a separate analysis to quantify the economic value of these features Pricing Model One time Upfront Monthly AWS Colo OnSite AWS Colo OnSite Server Hardware 0 $$$ $$ $$ 0 0 Network Hardware 0 $$ $$ 0 0 0 Hardware Maintenance 0 $$ $$ 0 0 $ Software OS 0 $$ $$ $ 0 0 Power and Cooling 0 0 $$ 0 $$ $ Data Center/C olocated Space 0 $$ $$ 0 $ 0 Administration 0 $$ $$ $ $$ $$$ Storage 0 $$ $$ $ 0 0 
Bandwidth 0 $$ $ $$ $ $ Resource Management Software 0 0 0 $ $ $ 24X7 Support 0 0 0 $ $ $ Total Table 1: Cloud TCO Calculation Example (some assumptions are made) The AWS Economics Center provides all the necessary tools you need to assess your current IT infrastructure After you have performed a highlevel financial assessment you can estimate your monthly costs using the AWS Simple Monthly Calculator by entering your realistic usage numbers Project that costs over a period of 1 3 and 5 years and you will notice significant savings 2 http://mediaamazonwebservicescom/The_Economics_of_the_AWS_Cloud_vs_Owned_IT_Infrastructurepdf This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 6 of 23 Security and Compliance Assessment If your organization has specific IT security policies and compliance requirements we recommend that you involve your security advisers and auditors early in the process At this stage you can ask the following questions: What is my overall risk tolerance? Are there various classifications of my data that result in higher or lower tolerance to exposure? What are my main concerns around confidentiality integrity availability and durability of my data? What are my regulatory or contractual obligations to store data in specific jurisdictions? What are my security threats? What is a likelihood of those threats materializing into actual attacks? Am I concerned about intellectual property protection and legal issues of my application and data? What are my options if I decide that I need to retrieve all of my data back from the cloud? Are there internal organizational issues to address to increase our comfort level with using shared infrastructure services? Data security can be a daunting issue if not properly understood and analyzed Hence it important that you understand your risks threats (and likelihood of those threats) and then based on sensitivity of your data cl assify the data assets into different categories (discussed in the next section) This will help you identify which datasets (or databases) to move to the cloud and which ones to keep inhouse It is also important to understand these important basics regarding AWS Security: You own the data not AWS You choose which geographic location to store the data I t doesn’t move unless you decide to move it You can download or delete your data whenever you like You should consider the sensitivity of your data and decide if and how you will encrypt your data while it is in transit and while it is at rest You can set highly granular permissions to manage access of a user within your organization to specific service operations data and resources in the cloud for greater security control For more uptodate information about certifications and best practices please visit the AWS Security Center Technical and Functional Assessment A technical assessment is required to understand which applications are more suited to the cloud architecturally and strategically At some point enterprises determine which applications to move into the cloud first which applications to move later and which applications should remain inhouse In this stage of the phase enterprise architects should ask the following questions: Which business applications should move to the cloud first? Does the cloud provide all of the infrastructure building blocks we require? 
Can we reuse our existing resource management and configuration tools? How can we get rid of support contracts for hardware software and network? This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 7 of 23 Create a Dependency Tree and a Classification Chart Perform a thorough examination of the logical constructs of your enterprise applications and start classifying your applications based on their dependencies risks and security and compliance requirements Identify the applications and their dependencies on other components and services Create a dependency tree that highlights all the different parts of your applications and identify their upward and downstream dependencies to other applications Create a spreadsheet that lists all your applications and dependencies or simply “white board” your dependency tree that shows the different levels of interconnections of your components This diagram should be an accurate snapshot of your enterprise application assets It may look something like the diagram below It could include all your ERP systems HR services Payroll Batch processing systems backend billing systems and customerfacing web applications internal corporate IT applications CRM systems etc as well as lowerlevel shared services such as LDAP servers Figure 2: Example of whiteboard diagram of all the IT assets and its dependencies (Dependency Tree) Identifying the R ight “ Candidate ” for the Cloud After you have created a dependency tree and have classified your enterprise IT assets examine the upward and downward dependencies of each application so you can determine which of them to move to the cloud quickly For a Web based application or Software as a Service (SaaS) application the dependency tree will consist of logical components (features) of the website such as database search and indexer login and authentication s ervice billing or payments and so on For backend processing pipeline there will be different interconnected processes like workflow systems logging and reporting systems and ERP or CRM systems In most cases the best candidates for the cloud are the services or components that have minimum upward and downward dependencies To begin look for systems that have fewer dependencies on other components Some examples are backup systems batch processing applications log processing systems development testing and build At this stage you will have clear visibility into your IT assets and you might be able to classify your applications into different categories: Applications with Top Secret Secret or Public data sets Applications with low medium a nd high compliance requirements Applications that are internal only partner only or customer facing Applications with low medium and high coupling Applications with strict relaxed licensing …and so on This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 8 of 23 systems webfront (marketing) applications queuing systems content management systems or training and presales demo systems To identify which are good candidates for the cloud search for applications with underutilized assets; applications that have an immediate business need to scale and are running out of capacity ; 
applications that have architectural flexibility; applications that utilize traditional tape drives to backup data; applications that require global scale (for example customerfacing marketing and advertising apps); or applications that are used by partners Deprioritize applications that require specialized hardware to function (for example mainframe or specialized encryption hardware) Figure 3: Identify the right candidate for the cloud Once you have the list of ideal candidates prioritize your list of applications so that it helps you : maximize the exposure in all aspects of the cloud (compute storage network database) build support and awareness within your organization and creates highest impact and visibility among the key stakeholders Questions to ask at this stage: Are you able to map the current architecture of the candidate application to cloud architecture? If not how much effort would refactoring require? Can your application be packaged into a virtual machine ( VM) instance and run on cloud infrastructure or does it need specialized hardware and/or special access to hardware that the AWS cloud cannot provide? Is your company licensed to move your thirdparty software used in the candidate application into the cloud? How much effort (in terms of building new or modifying existing tools) is required to move the application? Which component must be local ( onpremise) and which can move to the cloud? What are the latency and bandwidth requirements? Does the cloud support the identity and authentication mechanism you require ? This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 9 of 23 Identify the Tools That You Can Reuse It is important to research and analyze your existing IT assets Identify the tools that you can reuse in the cloud without any modification and estimate how much effort (in terms of new development and deployment effort) will be required to add “AWS support” to them You might be able to reuse most of the system tools and/or add AWS support very easily All AWS services expose standard SOAP and REST Web Service APIs and provide multiple libraries and SDKs in the programming language of your choice There are some commercial tools that you won’t be able to use in the cloud at this time due to licensing issues so for those you will need to find or build replacements: 1 Resource Management Tools : In the cloud you deal with abstract resources (AMIs Amazon EC2 instances Amazon S3 buckets Amazon EBS volumes and so on) You are likely to need tools to manage these resources For basic management see the AWS management Console 2 Resource Configuration Tools : The AWS cloud is conducive to automation and as such we suggest you consider using tools to help automate the configuration process Take a look at open source tools like Chef Puppet and CFEngine etc 3 System Management Tools : After you deploy your services you might need to modify your existing system management tools (NOC) so that you can effectively monitor deploy and “watch ” the applications in the cloud To manage Amazon Virtual Private Cloud resources you can use the same security policies and use the same system management tools you are using now to manage your own local resources 4 Integration Tools: You will need to identify the framework/library/SDK that works best for you to integrate with AWS services There are libraries and SDKs 
available in all platforms and programming languages (See Resources section) Also take a look at development productivity tools such as the AWS toolkit for Eclipse Migrating Licensed Products It is important to iron out licensing concerns during the assessment phase Amazon is working with many thirdparty ISVs to smooth the migration path as much as possible Amazon has teamed with a variety of vendors and is currently offering three different options to choose from: 1 Bring Your Own License (BYOL) Amazon has teamed with variety of ISVs who have permitted the use of their product on Amazon EC2 This EC2 based license is the most frictionfree path to move your software into the cloud You purchase the license the traditional way or use your existing license and apply it to the product which is available as a preconfigured Amazon Machine Image For example Oracle Sybase Adobe MySQL JBOSS IBM and Microsoft have made their software and support available in the AWS cloud using BYOL option If you don’t find the softw are that you are looking for in the AWS cloud talk to your software vendor about making their software available in the cloud The AWS Business Development Team is available to help you with this discussion 2 Use a Utility Pricing Model with a Support Package Amazon has teamed with elite ISVs and they are offering their software as a Paid AMI (using the Amazon DevPay service) This is a Pay AsYouGo license in which you do not incur any upfront licensing cost and only pay for the resources you consume ISVs charge a small premium over and above the standard Amazon EC2 cost which gives you an opportunity to run any number of instances in the cloud for the duration you control For example RedHat Novell IBM Wowza offer pay asyougo licenses ISVs typically also offer a support package that goes with pay asyougo license This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 10 of 23 3 Use an ISV SaaSbased Cloud Service Some of the ISVs have offered their software as a service and charge a monthly subscription fee They offer standard APIs and webbased interfaces and are fairly quick to implement This offering is either fully or partially managed inside the AWS cloud This option is often the easiest and fastest way to migrate your existing on premise installation to a hosted ondemand offering by the same vendor or an equivalent offering by a different vendor In most cases ISVs or independent thirdparty enterprise cloud services integrators offer migration tools that can help you move your data For example Mathematica Quantivo Pervasive and Cast Iron provide a SaaS offering based on AWS If your enterprise applications are tightly coupled with complex thirdparty enterprise software systems that have not yet been migrated to the AWS cloud or if you have already invested in multiyear onpremise licensing contracts with the vendor you should consider refactoring your enterprise applications into functional building blocks Run what you can in the cloud and connect to the licensed software systems that still run onpremise Amazon VPC may be used to create an IPSec VPN tunnel that will allow resources running on AWS to communicate securely with resources at the other end of the tunnel in your existing data center The whitepaper3 discusses several ways in which you can extend your existing IT infrastructure to the cloud Define Your 
Success Criteria While you are at this stage it is important to ask this question: “How will I measure success? ” The following table lists a few examples Your specific success criteria will be customized to your organization’s goals and culture Success Criteria Old New Examples on How to Measure Cost (CapE x) $1M $300K 60% savings in CapEx over next 2 years Cost (OpEx) $20K/Year $10K/Year Server toStaff ratio improved by 2x 4 maintenance contracts discontinued Hardware procurement efficiency 10 machines in 7 months 100 machines in 5 minutes 3000% faster to get resources Time to market 9 months 1 month 80% faster in launching new products Reliability Unknown Redundant 40% reduction in hardware related support calls Availability Unknown At least 9999% uptime 20% reduction in operational support calls Flexibility Fixed Stack Any Stack Not locked in to particular hardware vendor or platform or technology New o pportunities 10 projects backlog 0 backlog 5 new projects identified 25 new projects initiated in 3 months Table 2: Examples on how to measure success criteria Create a Roadmap and a Plan By documenting the dependencies creating a dependency tree and identifying the tools that you need to build or customize you will get an idea of how to prioritize applications for migration estimate the effort required to migrate them understand the onetime costs involved and assess the timeline You can construct a cloud migration roadmap Most companies skip this step and quickly move to the next phase of building a pilot project as it gives a clearer understanding of the technologies and tools 3 http://mediaamazonwebservicescom/Extend_your_IT_infrastructure_with_Amazon_VPCpdf This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 11 of 23 Phase 2 : Proof of Concept Phase Once you have identified the right candidate for the cloud and estimated the efforts required to migrate it’s time to test the waters with a small proof of concept The goal of this phase is to learn AWS and ensure that your assumptions regarding suitability for migration to the cloud are accurate In this phase you can deploy a small greenfield application and in the process begin to get your feet wet with the AWS cloud Get your feet wet with AWS Get familiar with the AWS API AWS tools SDKs Firefox plugins and most importantly the AWS Management Console and command line tools (See the Getting Started Center for more details) At a minimum at the end of this stage you should know how to use the AWS Management Console (or the Firefox plug ins) and command line tools to do the following: Figure 4: Minimum items to learn about services in a Proof of Concept Learn about the AWS security features Be aware of the AWS security features available today Use them at every stage of the migration process as you see fit During the Proof of Concept Phase learn about the various security features provided by AWS: AWS credentials Multi Factor Authentication (MFA) authentication and authorization At a minimum learn about the AWS Identity and Access Management (IAM) features that allow you to create multiple users and manage the permissions for each of these users within your AWS Account Figure 5 highlights the topics you need to learn regarding IAM: Learn Amazon S3 Create a bucket Upload an object Create a signed URL Create a CloudFront Distribution Learn Amazon EC2 Launch AMI 
Customize AMI Bundle AMI Launch a customized AMI Learn about Security Groups Test different Availability Zones Create EBS Volume Attach Volume Create Snapshot of a Volume Restore Snapshot Create Elastic IP Map DNS to Elastic IP Learn Amazon RDS Launch a DB Instance Take a backup Scale up vertically Scale out horizontally (more storage) Setup Multi AZ This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 12 of 23 Figure 5: Minimum items to learn about security in a Proof of Concept Phase At this stage you want to start thinking about whether you want to create different IAM groups for different business functions within your organization or create groups for different IT roles (admins developers testers etc) and whether you want to create users to match your organization chart or create users for each application Build a Proof OfConcept Build a proof ofconcept that represents a microcosm of your application or which tests critical functionality of your application in the cloud environment Start with a small database (or a dataset); don’t be afraid of launching and terminating instances or stresstesting the system For example if you are thinking of migrating a web application you can start by deploying miniature models of all the pieces of your architecture (database web application load balancer) with minimal data In the process learn how to build a Web Server AMI how to set the security group so that only the web server can talk to the app server how to store all the static files on Amazon S3 and mount an EBS volume to the Amazon EC2 instance how to manage/monitor your application using Amazon CloudWatch and how to use IAM to restrict access to only the services and resources required for your application to function Most of our enterprise customers dive into this stage and reap tremendous value from building pilots We have noticed that customers learn a lot about the capabilities and applicability of AWS during the process and quickly broaden the set of applications that could be migrated into the AWS cloud In this stage you can build support in your organization validate the technology test legacy software in the cloud perform necessary benchmarks and set expectations At the end of this phase you should be able to answer the following questions: Did I learn the basic AWS terminology (instances AMIs volumes snapshots distributions domains and so on)? Did I learn about many different aspects of the AWS cloud (compute storage network database security ) by building this proof of concept ? Will this proof of concept support and create awareness of the power of the AWS cloud within the organization? What is the best way to capture all the lessons that I learned? A whitepaper or internal presentation? How much effort is required to roll this proof ofconcept out to production? Which applications can I immediately move after this proof of concept? 
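The proof-of-concept exercises listed earlier in this phase (create a bucket, upload an object, create a signed URL, launch an instance from an AMI) can be scripted as well as performed through the AWS Management Console. The following is a minimal, hypothetical sketch, not part of the original paper, written against the current AWS SDK for Python (boto3); the bucket name, AMI ID, key pair, and instance type are placeholder assumptions.

```python
# Illustrative proof-of-concept sketch only: exercise Amazon S3 and Amazon EC2 from code.
# All identifiers (bucket name, AMI ID, key pair) are placeholders; assumes AWS
# credentials and a default region are already configured.
import boto3

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

# Amazon S3: create a bucket, upload an object, create a signed URL.
bucket = "my-poc-bucket-example"
s3.create_bucket(Bucket=bucket)  # outside us-east-1, also pass CreateBucketConfiguration
s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"proof of concept")
signed_url = s3.generate_presigned_url(
    "get_object", Params={"Bucket": bucket, "Key": "hello.txt"}, ExpiresIn=3600
)
print("Time-limited download link:", signed_url)

# Amazon EC2: launch an instance from an AMI, then note its ID for later steps.
reservation = ec2.run_instances(
    ImageId="ami-xxxxxxxx",        # placeholder AMI ID
    InstanceType="t3.micro",       # placeholder instance type
    KeyName="my-poc-keypair",      # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", reservation["Instances"][0]["InstanceId"])
```

Scripting even these basic steps during the proof of concept pays off later, because the same API calls become the building blocks of the automated deployment and elasticity work in the application migration and leverage phases.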
After this stage you will have far better visibility into what is available with AWS today You will get handson experience with the new environment which will give you more insight into what hurdles need to be overcome in order to move ahead Learn IAM Create Groups Create a policy Learn about Resources and Conditions Create Users Generate new access credentials Assign users to groups This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 13 of 23 Phase 3: Data Migration Phase In this phase enterprise architects should ask following questions: What are the different storage options available in the cloud today? What are the different RDBMS (commercial and open source) options available in the cloud today? What is my data segmentation strategy? What tradeoffs do I have to make? How much effort (in terms new development oneoff scripts) is required to migrate all my data to the cloud? When choosing the appropriate storage option one size does not fit all There are several dimensions that you might have to consider so that your application can scale to your needs appropriately with minimal effort You have to make the right tradeoffs among various dimensions cost durability queryability availability latency performance (response time) relational (SQL joins) size of object stored (large small) accessibility read heavy vs write heavy update frequency cacheability consistency (strict eventual) and transience (shortlived) Weigh your tradeoffs carefully and decide which ones are right for your application The beauty about AWS is that it does n’t restrict you to use one service or another You can use any number of the AWS storage options in any combination Understand Various Storage Options Available in the AWS Cloud The table will help explain which storage option to use when: Amazon S3 + CloudFront Amazon EC2 Ephemeral Store Amazon EBS Amazon SimpleDB Amazon RDS Ideal for Storing l arge write once read many types of objects Static Content Distribution Storing non persistent transient updates Offinstance persistent storage for any kind of data Query able light weight attribute data Storing and querying structured relational and referential d ata Ideal examples Media files audio video images Backups archives versioning Config d ata scratch files TempDB Clusters boot data Log or data of commercial RDBMS like Oracle DB2 Querying Indexing Mapping taggin g clickstream logs metadata Configuration catalogs Web apps Complex transactional systems inventory management and order fulfillment systems Not recommended for Querying Searching Storing d atabase logs or backups customer data Static data Web facing content key value data Complex joins or transactions BLOBs Relational Typed data Clusters Not recommended examples Database File Systems Shared drives Sensitive data Content Distribution OLTP DW cube rollups Clustered DB Simple lookups Table 3: Data Storage Options in AWS cloud Migrate your Fileserver systems Backups and Tape Drives to Amazon S3 If your existing infrastructure consists of Fileservers Log servers Storage Area Networks (SANs) and systems that are backing up the data using tape drives on a periodic basis you should consider storing this data in Amazon S3 Existing applications can utilize Amazon S3 without major change If your system is generating data every day the recommended migration flow is to point your “pipe” to 
Amazon S3 so that new data is stored in the cloud right away Then you can have an independent batch process to move old data to Amazon S3 Most enterprises take advantage of their existing encryption tools (256bit AES for data atrest 128bit SSL for data intransit) to encrypt the data before storing it on Amazon S3 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 14 of 23 Understand various RDBMS options in the AWS cloud For your relational database you have multiple options to choose: Amazon RDS RDBMS AMIs 3rd Party Database Service RDBMS MySQL Oracle 11g Microsoft SQL Server MySQL IBM DB2 Sybase Informix PostGreSQL Vertica AsterData Support provided by AWS Premium Support AWS and Vendor Vendor Managed by AWS Yes No No Pricing Model Payasyougo BYOL Pay asyougo Various Scalability Scale compute and storage with a single API call or a click Manual Vendor responsibility Table 4: Relational Database Options Migrate your MySQL Databases to Amazon RDS If you use a standard deployment of MySQL moving to Amazon RDS will be a trivial task Using all the standard tools you will be able to move and restore all the data into an Amazon RDS DB instance After you move the data to a DB instance make sure you are monitoring all the metrics you need It is also highly recommended that you set your retention period so AWS can automatically create periodic backups Migrate your Commercial D atabases to Amazon EC2 using Relational DB AMIs If you require transactional semantics (commit rollback) and are running an OLAP system simply use traditional migration tools available with Oracle MS SQL Server DB2 and Informix All of the major databases are available as Amazon Machine Images and are supported in the cloud by the vendors Migrating your data from an onpremise installation to an Amazon EC2 cloud instance is no different than migrating data from one machine to another Move Large Amounts of Data using Amazon Import/Export Service When transferring data across the Internet becomes cost or time prohibitive you may want to consider the AWS Import/Export service With AWS Import/Export Service you load your data on USB 20 or eSATA storage devices and ship them via a carrier to AWS AWS then uploads the data into your designated buckets in Amazon S3 For example if you have multiple terabytes of log files that need to be analyzed you can copy the files to a supported device and ship the device to AWS AWS will restore all the log files in your designated bucket in Amazon S3 which can then be fetched by your cloudhosted business intelligence application or Amazon Elastic MapReduce services for analysis If you have a 100TB Oracle database with 50GB of changes per day in your data center that you would like to migrate to AWS you might consider taking a full backup of the database to disk then copying the backup to USB 20 devices and shipping them Until you are ready to switch the production DBMS to AWS you take differential backups The full backup is restored by the import service and your incremental backups are transferred over the Internet and applied to the DB Instance in the cloud Once the last incremental backup is applied you can begin using the new database server This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating 
Your Existing Applications to the AWS Cloud October 2010 Page 15 of 23 Phase 4: Application Migration Phase In this phase you should ask the following question: How can I move part of or an entire system to the cloud without disrupting or interrupting my current business? In this phase you will learn two main application migration strategies: Forklift Migration Strategy and Hybrid Migration Strategy We will discuss the pros and cons of each strategy to help you decide the best approach that suits your application Based on the classification of application types (in Phase 1) you can decide which strategy to apply for what type of application Forklift Migration Strategy Stateless applications tightly coupled applications or selfcontained applications might be better served by using the forklift approach Rather than moving pieces of the system over time fork lift or “pick it all up at once” and move it to the cloud Selfcontained Web applications that can be treated as single components and backup/archival systems are examples of these types of systems that can be moved into the cloud using this strategy Components of a 3tier web application that require extremelylow latency connectivity between them to function and cannot afford internet latency might be best suited to this approach if the entire application including the web app and database servers is moved to the cloud all at once In this approach you might be able to migrate an existing application into the cloud with few code changes Most of the changes will involve copying your application binaries creating and configuring Amazon Machine Images setting up security groups and elastic IP addresses DNS switching to Amazon RDS databases This is where AWS’s raw infrastructure services (Amazon EC2 Amazon S3 Amazon RDS and Amazon VPC) really shine In this strategy the applications might not be able to take immediate advantage of the elasticity and scalability of the cloud because after all you are swapping real physical servers with EC2 instances or replacing file servers with Amazon S3 buckets or Amazon EBS volumes; logical components matter less than the physical assets However it’s important to realize that by using this approach for certain application types you are shrinking your IT infrastructure footprint (one less thing to worry about) and offloading the undifferentiated heavy lifting to AWS This enables you to focus your resources on things that actually differentiate you from your competitors You will revisit this application in the nex t stages and will be able to realize even more benefits of the cloud Like with any other migration having a backup strategy a rollback strategy and performing end toend testing is a must when using this strategy Hybrid Migration Strategy A hybrid migration consists of taking some parts of an application and moving them to the cloud while leaving other parts of the application in place The hybrid migration strategy can be a lowrisk approach to migration of applications to the cloud Rather than moving the entire application at once parts can be moved and optimized one at a time This reduces the risk of unexpected behavior after migration and is ideal for large systems that involve several applications For example if you have a website and several batch processing components (such as indexing and search) that power the website you can consider using this approach The batch processing system can be migrated to the cloud first while the website continues to stay in the traditional data center The data 
ingestion layer can be made “cloud awar e” so that the data is directly fed to an Amazon EC2 instance of the batch processing system before every job run After proper testing of the batch processing system you can decide to move the website application This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 16 of 23 Onsite or co lo AWS cloud Notes Service Business Component or a Feature that consists of app code business logic data access layer and database Thin Layer of “cloud aware” code to be written that uses web services interface of the component consists of stubs/skeletons Keep the DB close to the component using it: If all the components use the same database it might be advisable to move all the components and the database together all at once If all the components use different database instances/schemas and are mutually exclusive but are hosted on the same physical box it might be advisable to separate the logical databases and move them along with component during migration Proxy may or may not be used Table 5: Hybrid Lowrisk Migration Strategy of Components into the cloud This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 17 of 23 In this strategy you might have to design architect and build temporary “wrappers” to enable communication between parts residing in your traditional datacenter and those that will reside i n the cloud These wrappers can be made “cloud aware” and asynchronous (using Amazon SQS queues wherever applicable) so that they are resilient to changing internet latencies This strategy can also be used to integrate cloud applications with other cloudincompatible legacy applications (Mainframe applications or applications that require specialized hardware to function) In this case you can write “cloud aware” web service wrappers around the legacy application and expose them as web service Since web ports are accessible from outside enterprise networks the cloud applications can make a direct call to these web services and which in turn interacts with the mainframe applications You can also setup a VPN tunnel between the legacy applications that reside onpremise and cloud applications Configuring and Creating your AMIs In many cases it is best to begin with AMIs either provided by AWS or by a trusted solution provider as the basis of AMIs you intend to use going forward Depending on your specific requirements you may also need to leverage AMIs provided by other ISVs In any case the process of configuring and creating your AMIs is the same It is recommended that you create an AMI for each component designed to run in a separate Amazon EC2 instance It is also recommended to create an automated or semiautomated deployment process to reduce the time and effort for re bundling AMIs when new code is released This would be a good time to begin thinking about a process for configuration management to ensure your servers running in the cloud are included in your process Phase 5: Leverage the Cloud After you have migrated your application to the cloud run the necessary tests and confirmed that everything is working as expected it is advisable to invest time and resources to determine how to leverage additional 
benefits of the cloud Questions that you can ask at this stage are: Now that I have migrated existing applications what else can I do in order to leverage the elasticity and scalability benefits that the cloud promises? What do I need to do differently in order to implement elasticity i n my applications? How can I take advantage of some of the other advanced AWS features and services? How can I automate processes so it is easier to maintain and manage my applications in the cloud? What do I need to do specifically in my cloud application so that it can restore itself back to original state in an event of failure (hardware or software)? Leverage other AWS services Auto Scaling Servic e Auto Scaling enables you to set conditions for scaling up or down your Amazon EC2 usage When one of the conditions is met Auto Scaling automatically applies the action you’ve defined Examine each cluster of similar instances in your Amazon EC2 fleet and see whether you can create an Auto Scaling group and identify the criteria of scaling automatically (CPU utilization network I/O etc ) This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 18 of 23 At minimum you can create an Auto Scaling group and set a condition that your Auto Scaling group will always contain a fixed number of instances Auto Scaling evaluates the health of each Amazon EC2 instance in your Auto Scaling group and automatically replaces unhealthy Amazon EC2 instances to keep the size of your Auto Scaling group constant Amazon CloudFront With just a few clicks or command line calls you can create an Amazon CloudFront distribution for any of your Amazon S3 buckets This will edge cache your static objects closer to the customer and reduce latency This is often so easy to do that customers don’t wait until this phase to take advantage of CloudFront; they do so much earlier in the plan The Migrating to CloudFront4 whitepaper gives you more information Amazon Elastic MapReduce For analyzing any large dataset or processing large amount of media one can take advantage of Amazon Elastic MapReduce Most enterprises have metrics data to process or logs to analyze or large data sets to index With Amazon Elastic MapReduce you can create repeatable job flows that can launch a Hadoop cluster process the job expand or shrink a running cluster and terminate the cluster all in few clicks Automate Elasticity Elasticity is a fundamental property of the cloud To understand elasticity and learn about how you can build architectures that supports rapid scale up and scale down refer to the Architecting for the cloud whitepaper5 Elasticity can be implemented at different levels of the application architecture Implementing elasticity might require refactoring and decomposing your application into components so that it is more scalable The more you can automate elasticity in your application the easier it will be to scale your application horizontally and therefore the benefit of running it in the cloud is increased In this phase you should try to automate elasticity After you have moved your application to AWS and ensured that it works there are 3 ways to automate elasticity at the stack level This enables you to quickly start any number of application instances when you need them and terminate them when you don’t while maintaining the application upgrade process Choose the approach that 
best fits your software development lifestyle 1 Maintain Inventory of AMIs It’s easiest and fastest to setup inventory of AMIs of all the different configurations but difficult to maintain as newer versions of applications might mandate updating the AMIs 2 Maintain a Golden AMI and fetch binaries on boot This is a slightly more relaxed approach where a base AMI (“Golden Image”) is used across all application types across the organization while the rest of the stack is fetched and configured during boot time 3 Maintain a JustEnoughOS AMI and a library of recipes or install scripts This approach is probably the easiest to maintain especially when you have a huge variety of application stacks to deploy In this approach you leverage the programmable infrastructure and maintain a library of install scripts that are executed ondemand 4 http://developeramazonwebservicescom/connect/entry!defaultjspa?categoryID=267&externalID=2456 5 http://mediaamazonwebservicescom/AWS_Cloud_Best_Practicespdf This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 19 of 23 Figure 6: Three ways to automate elasticity while maintaining the upgrade process Harden Securi ty The cloud does not absolve you from your responsibility of securing your applications At every stage of your migration process you should implement the right security best practices Some are listed here: Safeguard your AWS credentials o Timely rotate your AWS access credentials and immediately rotate if you suspect a breach o Leverage multifactor authentication Restrict users to AWS resources o Create different users and groups with different access privileges (policies) using A WS Identity and Access Management (IAM) features to restrict and allow access to specific AWS resources o Continuously revisit and monitor IAM user polici es o Leverage the power of security groups in Amazon EC2 Protect your data by encrypting it atrest (AES) and intrans it (SSL) o Automate security policies Adopt a recovery strategy o Create periodic Amazon EBS snapshots and Amazon RDS backups o Occasionally test your backups before you need them Automate the Incloud Software Development Lifecycle and Upgrade Process In the AWS cloud there is no longer any need to place purchase orders for new hardware ahead of time or to hold unused hardware captive to support your software development lifecycle Instead developers system builders and testers can request the infrastructure they need minutes before they need it taking advantage of the vast scale and rapid response time of the cloud With a scriptable infrastructure you can completely automate your software development and deployment lifecycle You could manage your development build testing staging and production This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 20 of 23 environments by creating reusable configuration tools managing specific security groups and launching specific AMIs for each environment Automating your upgrade process in the cloud is highly recommended at this stage so that you can quickly advance to newer versions of the applications and also rollback to older versions when necessary With the cloud you don’t have to install new versions of software 
on old machines but instead throw away old instances and relaunch new fresh pre configured instances If upgrade fails you simply throw it away and switch to new hardware with no additional cost Create a Dashboard of your Elastic Datacenter to Manage AWS Resources It should be easy and frictionfree for the engineering and project managers to provision and relinquish AWS cloud resources At the same time the management team should also have visibility into the ways in which AWS resources are being consumed The AWS Management Console provides a view of your cloud datacenter It also provides you with basic management and monitoring capabilities (by way of Amazon CloudWatch) for your cloud resources The AWS Management Console is continually evolving It offers rich user interface to manage AWS services However if the current view does not fit your needs we advise you to consider using third party tools that you are already familiar with (like CA IBM Tivoli) or to create your own console by leveraging the Web Service APIs Using Web Service APIs It’s fairly straightforward to create a web client that consumes the web services API and create custom control panels to suit your needs For example if you have created a presales demo application environment in the cloud for your sales staff so that they can quickly launch a preconfigured application in the cloud you may want to create a dashboard that displays and monitors the activity of each sales person and each customer Manage and limit access permissions based on the role of the sales person and revoke access if the employee leaves the company There are several libraries available in our Resource Center that can help you get started with creating the dashboard that suits your specific requirement Create a Business Continuity Plan and Achieve High Availability (Leverage M ultiple Availability Zones) Many companies fall short in disaster recovery planning because the process is not fully automatic and because it is cost prohibitive to maintain a separate datacenter for disaster recovery The use of virtualization (ability to bundle AMI) and data snapshots makes the disaster recovery implementation in the cloud much less expensive and simpler than traditional disaster recovery solutions You can completely automate the entire process of launching cloud resources which can bring up an entire cloud environment within minutes When it comes to failing over to the cloud recovering from system failure due to employee error is the same as recovering from an earthquake Hence it is highly recommended that you have your business continuity plan and set your Recovery Time Objective (RTO) and Recovery Point Objective (RPO) Your business continuity plan should include: data replication strategy (source destination frequency) of databases (Amazon EBS) data backup and retention strategy (Amazon S3 and Amazon RDS) creating AMIs with the latest patches and code updates (Amazon EC2) recovery plan to fail back to the corporate data center from the cloud postdisaster The beauty of having a business continuity strategy implemented in the cloud is that it automatically gives you higher availability across different geographic regions and Availability Zones without any major modifications in deployment and data replication strategies You can create a much higher availability environment by cloning the entire architecture and replicating it in a different Availability Zone or by simply using MultiAZ deployments (in case of Amazon RDS) This paper has been archived For the latest 
technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Existing Applications to the AWS Cloud October 2010 Page 21 of 23 Phase 6: Optimization Phase In this phase you should focus on how you can optimize your cloudbased application in order to increase cost savings Since you only pay for the resources you consume you should strive to optimize your system whenever possible In most cases you will see immediate value in the optimizations A small optimization might result in thousands of dollars of savings in your next monthly bill At this stage you can ask the following questions: How can I use some of the other AWS features and services in order to further reduce my cost? How can I improve the efficiency (and reduce waste) in my deployment footprint? How can I instrument my applications to have more visibility of my deployed applications? How can I set metrics for measuring critical application performance? Do I have the necessary cloudaware system administration tools required to manage and maintain my applications? How can I optimize my application and database to run in more elastic fashion? Understanding your Usage Patterns With the cloud you don’t have to master the art of capacity planning because you have the ability to create an automated elastic environment If you can understand monitor examine and observe your load patterns you can manage this elastic environment more effectively You can be more proactive if you understand your traffic patterns For example if your customerfacing website deployed in AWS global infrastructure does not expect any traffic from certain part of the world in certain time of the day you can scale down your infrastructure in that AWS region for that time The closer you can align your traffic to cloud resources you consume the higher the cost savings will be Terminate the UnderUtilized Instances Inspect the system logs and access logs periodically to understand the usage and lifecycle patterns of each Amazon EC2 instance Terminate your idle instances Try to see whether you can eliminate underutilized instances to increase utilization of the overall system For example examine the application that is running on an m1large instance (1X $040/hour) and see whether you can scale out and distribute the load across to two m1small instances (2 X $010/hour) instead Leverage Amazon EC2 Reserved Instances Reserved Instances give you the option to make a low onetime payment for each instance you want to reserve and in turn receive a significant discount on the hourly usage charge for that instance When looking at usage patterns try to identify instances that are running in steadystate such as a database server or domain controller You may want to consider investing in Amazon EC2 Reserved Instances (3 year term) for servers running above 24% or higher utilization This can save up to 49% of the hourly rate Improve Efficiency The AWS cloud provides utilitystyle pricing You are billed only for the infrastructure that has been used You are not liable for the entire infrastructure that may be in place This adds a new dimension to cost savings You can make very measureable optimizations to your system and see the savings reflected in your next monthly bill For example if a caching layer can reduce your data requests by 80% you realize the reward right in the next bill Improving performance of the application running in the cloud might also result in overall cost savings For example if your 
application is transferring a lot of data between Amazon EC2 and your private data center, it might make sense to compress the data before transmitting it over the wire. This could result in significant cost savings in both data transfer and storage. The same concept applies to storing raw data in Amazon S3.

Management and Maintenance

Advanced Monitoring and Telemetry
Implement telemetry in your cloud applications so that it gives you the visibility you need into your mission-critical applications or services. It is important to understand that end-user response time of your applications depends upon various factors, not just the cloud infrastructure: ISP connectivity, third-party services, browsers, and network hops, to name a few. Measuring and monitoring the performance of your cloud applications gives you the opportunity to proactively identify performance issues and helps you diagnose the root causes so you can take appropriate action. For example, if an end user accessing the nearest node of your globally hosted application is experiencing a lower response rate, perhaps you can try launching more web servers. You can send yourself notifications using Amazon Simple Notification Service (HTTP/Email/SQS) if the metric (of a given AWS resource or an application) approaches an undesired threshold.

Track Your AWS Usage and Logs
Monitor your AWS usage bill, service API usage reports, and Amazon S3 or Amazon CloudFront access logs periodically.

Maintain Security of Your Applications
Ensure that application software is consistent and always up to date, and that you are patching your operating systems and applications with the latest vendor security updates. Patch an AMI, not an instance, and redeploy often; ensure that the latest AMI is deployed across all your instances.

Reengineer Your Application
To build a highly scalable application, some components may need to be reengineered to run optimally in a cloud environment. Some existing enterprise applications might mandate refactoring so that they can run in an elastic fashion. Some questions that you can ask (a brief decoupling sketch follows this list):

Can you package and deploy your application into an AMI so it can run on an Amazon EC2 instance?
Can you run multiple instances of the application on one Amazon EC2 instance if needed? Or can you run multiple instances on multiple Amazon EC2 instances?
Is it possible to design the system such that, in the event of a failure, it is resilient enough to automatically relaunch and restart?
Can you divide the application into components and run them on separate Amazon EC2 instances? For example, can you separate a complex web application into individual components or layers (web, app, and DB) and run them on separate instances?
Can you extract stateful components and make them stateless?
Can you consider application partitioning (splitting the load across many smaller machines instead of fewer larger machines)?
Is it possible to isolate the components using Amazon SQS?
Can you decouple code from deployment and configuration?
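To make the Amazon SQS isolation question above concrete, here is a minimal sketch, not taken from the original paper, of decoupling a producer component from a worker component with a queue. It uses the current AWS SDK for Python (boto3), which postdates this paper; the queue name and message fields are illustrative assumptions.

```python
# Illustrative sketch only: decouple two application components with Amazon SQS.
# Assumes AWS credentials and a default region are configured; the queue name and
# message format are hypothetical examples, not part of the original paper.
import json
import boto3

sqs = boto3.client("sqs")

# Producer side: create (or look up) the queue and enqueue a unit of work.
queue_url = sqs.create_queue(QueueName="claims-processing-jobs")["QueueUrl"]
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"claim_id": "12345", "action": "validate"}),
)

# Worker side: poll for work, process it, then delete the message.
response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for message in response.get("Messages", []):
    job = json.loads(message["Body"])
    print(f"Processing claim {job['claim_id']}")  # replace with real processing logic
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```

Because the producer and the worker share only the queue, either side can be migrated, scaled, or replaced independently, which is the property the hybrid migration strategy relies on.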
Decompose Your Relational Database
Most traditional enterprise applications use a relational database system. Database administrators often start with a DB schema based on instructions from developers, while enterprise developers assume unlimited scalability on a fixed infrastructure and develop the application against that schema. Developers and database architects may fail to communicate about what type of data is being served, which makes it extremely difficult to scale the relational database. As a result, much time may be wasted migrating data to a "bigger box" with more storage capacity, or scaling up to get more computing horsepower. Moving to the cloud gives them the opportunity to analyze their current relational database management system and make it more scalable as part of the migration. Some techniques that might help take the load off your RDBMS:
Move large blob objects and media files to Amazon S3 and store a pointer (the S3 key) in your existing database.
Move associated metadata or catalogs to Amazon SimpleDB.
Keep only the data that is absolutely needed for joins in the relational database.
Move all relational data into Amazon RDS so you have the flexibility to scale your database compute and storage resources with an API call, only when you need it.
Offload the read load to multiple read replicas (slaves).
Shard (or partition) the data based on item IDs or names.

Implement Best Practices
Implement the best practices highlighted in the Architecting for the Cloud whitepaper. These best practices will help you create not only a highly scalable application conducive to the cloud, but also a more secure and elastic one.

Conclusion
The AWS cloud brings scalability, elasticity, agility, and reliability to the enterprise. To take advantage of these benefits, enterprises should adopt a phase-driven migration strategy and try to take advantage of the cloud as early as possible. Whether it is a typical 3-tier web application, a nightly batch process, or a complex back-end processing workflow, most applications can be moved to the cloud. The blueprint in this paper offers a proven, step-by-step approach to cloud migration. When customers follow this blueprint and focus on creating a proof of concept, they immediately see value in their proof-of-concept projects and see tremendous potential in the AWS cloud. After you move your first application to the cloud, you will get new ideas and see the value in moving more applications into the cloud.

Further Reading
1. Migration Scenario #1: Migrating web applications to the AWS cloud
2. Migration Scenario #2: Migrating batch processing applications to the AWS cloud
3. Migration Scenario #3: Migrating backend processing pipelines to the AWS cloud
|
General
|
consultant
|
Best Practices
|
Modernize_Your_Microsoft_Applications_on_AWS
|
ArchivedModernize Your Microsoft Applications on Amazon Web Services How to Start Your Journey March 201 6 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 2 of 14 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 3 of 14 Contents Abstract 3 Why Modernize Applications? 4 Why Run Microsoft Applications on AWS? 5 AWS for Corporate Applications 5 AWS for LoB Applications and Databases 5 AWS for Developers 5 Which Microsoft Applications Can I Run on AWS? 6 How Do I Get Started? 6 Security and Access 7 Compute: Windows Server Running on EC2 Instances 9 Databases: SQL Server Running on Amazon RDS or EC2 10 Management Services: Amazon CloudWatch AWS CloudTrail Run Command 11 Complete the Solution with the AWS Marketplace 12 Licensing Considerations 13 Conclusion 14 Abstract The cloud is now the center of most enterprise IT strategies Many enterprises find that a well planned “lift and shift” move to the cloud results in an immediate business payoff This whitepaper is intended for IT pros and business decision makers in Microsoftcentric organizations who want to take a cloudbased approach to IT and must modernize existing businesscritical applications built on Microsoft Windows Server and Microsoft SQL Server This paper covers the benefits of modernizing applications on Amazon Web Services (AWS) and how to get started on the journey ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 4 of 14 Why Modernize Applications? 
For m any IT organizations application modernization is a major initiative for a few major reasons: Move off legacy software To avoid the time cost and performance and reliability challenges of maintaining legacy software and unsupported versions (Windows Server 2003 SQL Server 2003 and SQL Server 2005) DevOps Initiatives To take advantage of new DevOps and application lifecycle management methodologies By moving to new application delivery platforms companies can increase the speed of innovation Mobility initiatives As users move to mobile devices the use of IT services can increase by one or more orders of magnitude This poses scalability challenges if an application is not prepared for that kind of growth New product launches New product launches can cause rapid spikes in demand for IT The underlying applications including Microsoft SQL Server and Microsoft SharePoint must be ready with the scale required to support the launch Mergers and acquisitions (M&A) activity In the case of mergers and acquisitions complexity builds up over time After multiple acquisitions a company may find itself in possession of several hundred SharePoint sites multiple Exchange instances and countless SQL Server databases Streamlining the management of disparate applications is often a huge undertaking ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 5 of 14 Why Run Microsoft Applications on AWS? In a recent survey1 International Data Corporation (IDC) reported that 50 percent of respondents were using AWS to support productivity applications like those from Microsoft Of that number 65 percent said they planned to increase their use of AWS either to move existing applications or to expand applications already running on AWS Clearly customers are already making the move to modernize their Microsoft applications AWS for Corporate Applications Customers can improve their security posture and application performance and reliability by running corporate applications built on Microsoft Windows Server in the AWS cloud For example customers can deploy a globally accessible SharePoint environment in any of the 33 AWS Availability Zones in a matter of hours To reduce complexity customers can use AWS tools that integrate with Microsoft management and access control applications like System Center and Active Directory Customers can also use AWS CloudFormation templates to perform application deployments reliably and repeatedly AWS for LOB Applications and Databases Line of business (LOB) owners are running applications in areas as diverse as oil and gas exploration retail point of sale (POS) finance health care insurance pharmaceuticals media and entertainment and more To accelerate and simplify the time to deployment customers can launch preconfigured Amazon Machine Image (AMI) templates with fully compliant Microsoft Windows Server and Microsoft SQL Server licenses included AWS for Developers Customers who develop on AWS have access to Microsoft development tools including Visual Studio PowerShell and the NET Developer Center When these tools are combin ed with scalability and agility of AWS CodeDeploy AWS Elastic 1 http://wwwidccom/getdocjsp?containerId=256654 ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 6 of 14 Beanstalk (Elastic Beanstalk) and AWS OpsWorks customers can complete and deploy code on AWS much faster and with lower risk Which Microsoft Applications Can I Run on AWS? 
Customers have successfully deployed virtually every Microsoft application to the AWS cloud including: Microsoft Windows Server Microsoft SQL Server Microsoft Active Directory Microsoft Exchange Server Microsoft Dynamics CRM and Dynamics AX Dynamics ERP Microsoft SharePoint Server Microsoft System Center Skype for Business (formerly Microsoft Lync) Microsoft Project Server Microsoft Visual Studio Team Foundation Server Microsoft BizTalk Server Microsoft Remote Desktop Services How Do I Get Started? For enterprises the first step is to determine which of the more than 50 AWS services will be used to support their application modernization initiative The following figure shows how the typical functions of an enterprise IT organization map to AWS offerings This paper discusses some of the key services in this map and how they fit into a Microsoft application modernization initiative ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 7 of 14 Figure 1: A Conceptual Map of Enterprise IT with Amazon Web Services Security and Access We worked with AW S to develop a security model that allows us to be more secure in AWS than we can be even in our own data centers — Rob Alexander CIO Capital One With the increasing concern and focus on security most customers start here by choosing services that ensure compliance and manage risk The same security isolations found in a traditional data center are used in the AWS cloud including physical security separation of the network isolation of server hardware and isolation of storage AWS has achieved ISO 27001 certification and has been validated as a Level 1 service provider under the Payment Card Industry (PCI) Data Security Standard (DSS) AWS undergo es annual Service Organization Control (SOC) 1 audits and has been successfully evaluated at the Moderate level for federal government systems a nd Department of Defense Information Assurance Certification and Accreditation Process (DICAP) Level 2 for Department of Defense ( DOD) systems ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 8 of 14 For many enterprises considering the right set of services for security and permissions AWS virtual private networks AWS Direct Connect and AWS Directory Services are at the heart of the discussion Amazon Virtual Private Cloud (Amazon VPC) lets customers launch AWS resources into a virtual network that they've defined This virtual network closely resembles a traditional network in an onpremises data center but with the benefits of the scalable infrastructure of AWS AWS Direct Connect links the organization’s internal network to AWS over a private 1 gigabit or 10 gigabit Ethernet fiberoptic cable One end of the cable is connected to the data center router the other to an AWS Direct Connect router With this encrypted connection in place customers can create virtual interfaces directly to the AWS cloud (for example to Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3)) and to Amazon VPC bypassing Internet service providers in the network path AWS Directory Service is a managed service that makes it easy to connect AWS services to existing onpremises Microsoft Active Directory (through the use of AD Connector) or to set up and operate a new directory in the AWS cloud (through the use of Simple AD and AWS Directory Service for Microsoft Active Directory) Data encryption services are provided for data in flight (through SSL) and at rest through options for both 
serverside and clientside encryption AWS Certificate Manager (ACM) AWS Key Management Service (AWS KMS) and AWS CloudHSM can be used together to ensure key and certificate management services are provided to securely generate store and manage cryptographic keys used for data encryption Finally AWS WAF provides web application firewall services to help protect web applications from common web exploits that could affect application availability compromise security or consume excessive resources ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 9 of 14 Compute: Windows Server Running on EC2 Instances We didn’t have time to redesign applications AWS could support our legacy 32 bit a pplications on Windows Server 2003 a variety of Microsoft SQL Server and Oracle databases and a robust Citrix environment — Jim McDonald Lead Architect Hess After a security strategy is in place it’s time to look at the infrastructure that will support the applications that will be modernized Amazon EC2 is a web service that provides resizable computing capacity that is used to build and host software systems When designing Windows applications to run on Amazon EC2 customers can plan for rapid deployment and rapid reduction of compute and storage resources based on changing needs When customers run Windows Server on an EC2 instance they don't need to provision the exact system package of hardware virtualization software and storage the way they do with Windows Server onpremises Instead they can focus on using a variety of cloud resources to improve the scalability and overall performance of the Windows applications After an Amazon EC2 instance running Windows Server is launched it behaves like a traditional server running Windows Server For example whether Windows Server is deployed onpremises or on an Amazon EC2 instance it can run web applications conduct batch processing or manage applications requiring largescale computations Customers can remote directly into Windows Server instances using Remote Desktop Protocol for easy management They can run PowerShell scripts against a single Windows Server instance or against an entire fleet using the Amazon EC2 Run Command Applications built for Amazon EC2 use the underlying computing infrastructure on an asneeded basis They draw on resources (such as storage and computing) ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 10 of 14 on demand in order to perform a job and relinquish the resources when done In addition they often terminate themselves after the job is done While in operation the application scales up and down elastically based on resource requirements Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances in the cloud This enables customers to achieve more fault tolerance in applications seamlessly providing the required amount of load balancing capacity required to distribute application traffic Auto Scaling lets customers follow the demand curve for applications very closely reducing the need to manually provision capacity in advance For example customers can set a condition to add new Amazon EC2 instances to the Auto Scaling group in increments when the average utilization of the Amazon EC2 fleet is high; similarly they can set a condition to remove instances in the same increments when CPU utilization is low Databases: SQL Server Running on Amazon RDS or Amazon EC2 Amazon Relational Database Service ( Amazon RDS) 
allows our DBA team to focus less o n the day today maintenance and use their time to work on enhancements And Elastic Load Balancing has allowed us to move away from expensive and complicated load balancers and retain the required functionality — Chad Marino Dir ector of Technology Services Kaplan Another key building block in modernization planning is the choice of database services Customers who want to manage scale and tune SQL Server deployments in the cloud can use Amazon RDS or run SQL Server on Amazon EC2 ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 11 of 14 Customers who prefer to let AWS handle the day today management of SQL Server databases choose Amazon RDS because the service makes it easy to set up operate and scale a relational database in the cloud Amazon RDS automates installation disk provisioning and management patching minor version upgrades failed instance replacement and backup and recovery of SQL Server databases Amazon RDS also offers automated synchronous replication acros s multiple Availability Zones (Multi AZ) for a highly available and scalable environment fully managed by AWS This allows customers to focus on higher level tasks such as schema optimization query tuning and application development and eliminate the undifferentiating work that goes into maintenance and operation of the databases Amazon RDS for SQL Server supports Windows Authentication making it easier for customers to access and manage Amazon RDS for SQL Server instances Amazon RDS for SQL Server supports Microsoft SQL Server Express Web Standard and Enterprise Editions SQL Server Express is available at no additional licensing cost and is suitable for small workloads or proof ofconcept deployments SQL Server Web Edition is best for public and Internet accessible web workloads SQL Server Standard Edition is suitable for most SQL Server workloads and can be deployed in a MultiAZ mode SQL Server Enterprise Edition is the most featurerich edition of SQL Server and can also be deployed in Multi AZ mode Management Services: Amazon CloudWatch AWS CloudTrail Run Command The way CSS automated launching instance s reduced the time to launch a project by about 75 percen t What used to take fou r days now only takes one day We’re not rebuilding web and database server s from the ground up all the time We can just clone and reuse images — Nick Morgan Enterprise Architect Unilever ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 12 of 14 AWS provides a comprehensive set of management services for the enterprise: Amazon CloudWatch : Customers can use Amazon CloudWatch to monitor in real time AWS resources and applications running on AWS CloudWatch alarms send notifications or based on rules that customers define make changes automatically to the monitored resources AWS CloudTrail : With AWS CloudTrail customers can monitor their AWS deployments in the cloud by getting a history of AWS API calls made in their account including API calls made through the AWS Management Console the AWS SDKs command line tools and higherlevel AWS services Customers can also identify which users and accounts called AWS APIs for services that support CloudTrail the source IP address from which the calls were made and when the calls occurred CloudTrail can be integrated into applications using the API to automate trail creation for the organization check the status of trails and control how administrators turn CloudTrail logging on and off 
Amazon EC2 Run Command : For automating common administrative tasks like patch management or configuration updates that apply across hundreds of virtual machines customers can use the Amazon EC2 Run Command which provides a simple method for running PowerShell scripts The Run Command is integrated with AWS Identity and Access Management (IAM) solutions to ensure administrators have access to updates for only th ose machines they own All updates are audited through AWS CloudTrail AWS addins for Microsoft System Center extend the functionality of existing System Center implementations for use with Microsoft System Center Operations Manager and Microsoft System Center Virtual Machine Manager After installation customers can use the familiar System Center interface to view and manage Amazon EC2 for Microsoft Windows Server resources in the AWS cloud as well as Windows Servers installed onpremises Complete the Solution with the AWS Marketplace Customers often have a preferred ISV for specialized software solutions for enhanced security business intelligence storage and more AWS Marketplace is an online store that makes it easy for customers to discover purchase and deploy the software and services they need to build solutions and run their businesses ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 13 of 14 With more than 2600 listings across more than 35 categories the AWS Marketplace simplifies software licensing and procurement by enabling customers to accept user agreements choose pricing options and automate the deployment of software and associated AWS resources with just a few clicks AWS Marketplace also simplifies billing for customers by delivering a single invoice detailing business software and AWS resource usage on a monthly basis The AWS Marketplace includes offerings from SAP Tableau NetApp Trend Micro F5 Networks and many more Customers have access to Microsoft applications such as Microsoft Windows Server Microsoft SQL Server and Microsoft SharePoint custom AMIs through Marketplace partners Licensing Considerations Customers have options for using new and existing Microsoft software licenses in the AWS cloud For new applications customers can purchase Amazon EC2 or Amazon RDS instances with a license included With this approach customers get new fully compliant Windows Server and SQL Server licenses directly from AWS Customers can use them on a “pay as you go” basis with no upfront costs or longterm investments Customers can choose from AMIs with just Microsoft Windows Server or with Windows Server and Microsoft SQL Server already installed Client access licenses (CALs) are included Customers who have already purchased Microsoft software have a “bring your own license” (BYOL) option which is allowed by Microsoft under the Microsoft License Mobility policy through Software Assurance Microsoft’s License Mobility program allows customers who already own Windows Server or Microsoft SQL Server licenses to run their deployment on Amazon EC2 and Amazon RDS This benefit is available to Microsoft Volume Licensing (VL) customers with Windows Server and SQL Server licenses (currently including Standard and Enterprise Editions) covered by Microsoft Software Assurance contracts In cases where t he customer’s license agreement requires control to the socket core or perVM level customers can use Amazon EC2 Dedicated Hosts which provide the customer with hardware that to track license consumption and compliance and report it to Microsoft or ISV s 
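The Amazon EC2 Run Command workflow described earlier in this section (now part of AWS Systems Manager) can be scripted end to end. The following is a minimal, hedged sketch using the AWS Tools for PowerShell, not an example from the original paper; the instance IDs, wait time, and the command being run are hypothetical placeholders.

# Minimal sketch: run a PowerShell command across a set of Windows Server instances with
# Run Command (AWS Systems Manager). Assumes the AWS Tools for PowerShell are installed,
# the SSM Agent is running on the targets, and the caller's IAM identity is allowed to
# call ssm:SendCommand. Instance IDs and the command below are placeholders.

Import-Module AWSPowerShell

$instanceIds = @("i-0123456789abcdef0", "i-0fedcba9876543210")   # hypothetical fleet

# Ask each instance to report recently installed hotfixes (any PowerShell would work here).
$command = Send-SSMCommand -InstanceId $instanceIds `
                           -DocumentName "AWS-RunPowerShellScript" `
                           -Parameter @{ commands = @("Get-HotFix | Select-Object -First 5") }

# Poll the per-instance results; the invocation itself is auditable through AWS CloudTrail.
Start-Sleep -Seconds 15
Get-SSMCommandInvocation -CommandId $command.CommandId -Detail $true |
    Select-Object InstanceId, Status

Because the invocation is governed by IAM and logged by CloudTrail, administrators can run fleet-wide patching or configuration checks without opening RDP sessions to each server.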
ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 14 of 14 Conclusion This paper describes the benefits of modernizing your applications on Amazon Web Services and how you can get started on the journey It shows how you can benefit from running corporate applications LOB and database applications or developing new applications using the AWS platform for your modernization initiative We recommend the AWS services that you should look to start the process of modernizing your applications on AWS
|
General
|
consultant
|
Best Practices
|
Move_Amazon_RDS_MySQL_Databases_to_Amazon_VPC_using_Amazon_EC2_ClassicLink_and_Read_Replicas
|
ArchivedMove Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas July 2017 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AW S agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 Solution Overview 1 ClassicLink and EC2 Classic 2 RDS Read Replicas 2 RDS Snapshot s 2 Migration Topology 3 Migration Steps 5 Step 1: Enable ClassicLink for the Target VPC 6 Step 2: Set up a Proxy Server on an EC2 Classic Instance 6 Step 3: Use ClassicLink Between the Proxy Server and Target VPC 7 Step 4: Configure the DB Instance (EC2 Classic) 8 Step 5: Create a User on the DB Instance (EC2 Classic) 9 Step 6: Create a Temporary Read Replica (EC2 Classic) 9 Step 7: Enable Backups on the Read Replica (EC2 Classic) 10 Step 8: Stop Replication on Read Replica (EC2 Classic) 11 Step 9: Create Snapshot from the Read Replica (EC2 Classic) 12 Step 10: Share the Snapshot (Optional) 13 Step 11: Restore the Snapshot in the Target VPC 15 Step 12: Enable Backups on VPC RDS DB Instance 17 Step 13: Set up Replication Between VPC and EC2 Classic DB Instances 18 Step 14: Switch to the VPC RDS DB Instance 19 Step 15: Take a Snapshot of the VPC RDS DB Instance 20 Step 16: Change the VPC DB Instance to be ‘Privately’ Access ible (Optional) 20 Step 17: Move the VPC DB Instance into Private Subnets (Optional) 21 Alternative Approaches 22 AWS Database Migration Service (DMS) 22 ArchivedChanging the VPC Subnet for a DB Instance 23 Conclusion 24 Contributors 24 Further Reading 25 Appendix A: Set Up Proxy Server in Classic 25 ArchivedAbstract Amazon Relational Database Service (Amazon RDS) makes it easy to set up operate and scale a rel ational database in the cloud If your Amazon Web Services ( AWS ) account was created before 2013 chances are you m ight be running Amazon RDS MySQL in an Amazon Elastic Compute Cloud ( EC2 )Classic environment and you are looking to migrate Amazon RDS into a n Amaz on EC2 Amazon Virtual Private Cloud ( VPC ) environment This whitepaper outlines the requirements and detailed steps needed to migrate Amazon RDS MySQL databases from EC2 Classic to EC2 VPC with minimal downtime using RDS MySQL Read Replicas and ClassicLink ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 1 Introduction There are two Amazon Elastic Compute Cloud (EC2) platforms that host Amazon Relational Database Service (RDS) database (DB) instances EC2 VPC and EC2 Classic On the EC2 Classic platform your instances run in a single flat network that you share with other customers On the EC2 VPC platform your 
instances run in a virtual private cloud (VPC) that’s logically isolated to your AWS account This logical network isolation closely resembles a traditional network you might op erate in your own data center plus it has the benefits of the AWS scalable infrastruc ture If you’re running RDS DB instances in an EC2 Classic environment you might be considering migrating your databases to Amazon VPC to take advantage of its features and capabilities However migrating databases across environments can involve complex backup and restore operations with longer down times that you might not be able to tolerate in your production environment This whitepaper focuses on how to use RDS read replica and snapshot capabilities to migrate a n RDS MySQL DB instance in EC2 Classic to a VPC over ClassicLink By leveraging RDS MySQL replication with ClassicLink you can migrate your databases easily and securely with minimal down time Alternative m ethods are also discussed Solution Overview This solution uses EC2 ClassicLink to enable an RDS DB instance in EC2 Classic (that is outside a VPC) to communicate to a VPC First a read replica of the DB instance in EC2 Classic is created Then a snapshot of the read replica (called the source DB instance ) is taken and used to set up a read replica in the VPC A ClassicLink proxy server enables communication between the source DB instance in EC2 Classic and the target read replica in the VPC Once the target read replica in the VPC has caught up with the source DB instance in EC2 Classic updates against the source are stopped and the target read replica is promoted At this point the connection details in any application that is reading or writing to the database are updated The source database remains fully operational during the migration minim izing downtime to applications Each of these components is explain ed in further detail as follows ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 2 ClassicLink and EC2 Classic EC2 ClassicLink allows you to connect EC2 Classic instances to a VPC within the same AWS R egion This allows you to associate VPC security groups with the EC2 Classic instance s enabling communication between EC2 Classic instances and VPC instances using private IP addresses The asso ciation between VPC security groups and the EC2 Classic instance removes the need to use public IP addresses or Elastic IP addresses to enable communic ation between these platforms ClassicLink is available to all users with accounts that support the EC2 Classic platform and can be used with any EC2 Classic instance Using ClassicLink and private IP address space for migration ensures all communication and data migration happens within the AWS network without requiring a public IP address for your DB instan ce or an Internet Gateway (IGW) to be set up for the VPC RDS Read Replicas You can create one or more read replicas of a given source RDS MySQL DB instance and serve high volume application read traffic from multiple copies of your data Amazon RDS uses the MySQL engine ’s native asynchronous replication to update the read replica whenever there is a change to the source DB instance The read replica operates as a DB instance that allows only read only connections; applications can connect to a read replic a just as they would to any DB instance Amazon RDS replicates all databases in the source DB instance Read replicas can also be promoted so that they become standalone DB instances RDS Snapshot s 
The ClassicLink solution relies on Amazon RDS snapshots t o initially create the target MySQL DB instance in your VPC Amazon RDS creates a storage volume snapshot of your DB instance backing up the entire DB instance and not just individual databases When you create a DB snapshot you need to identify which DB instance you are going to back up and then give your DB snapshot a name so you can restore from it later Creating this DB snapshot on a single Availability Zone ( AZ) DB instance results in a brief I/O suspension that typically lasts no more than a few m inutes Multi AZ DB instances are not ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 3 affected by this I/O suspension since the backup is taken on the standby instance Migration Topology ClassicLink allows you to link your EC2 Classic DB instance to a VPC in your account within the same Region After y ou've linked a n EC2 Classic DB instance it can communicate with instances in your VPC using their private IP addresses However instances in the VPC cannot directly access the AWS services provisioned by the EC2 Classic platform using ClassicLink So to migrate an RDS database from EC2 Classic to VPC you must set up a proxy server The proxy server uses ClassicLink to link the source DB instance in EC2 Classic to the VPC Port forwarding on the proxy server allows communication between the source DB instance in EC2 Classic and the target DB instance in the VPC This topology is illustrated in Figure 1 Figure 1: Topology for m igration in the same account If you ’re moving your RDS database to a different account you will need to set up a peering conne ction between the local VPC and the target VPC in the remote account This topology is illustrated in Figure 2 ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 4 Figure 2: Topology for m igration to a different account Figure 3 illustrates how the snapshot of the DB instance is used to set up a read replica in the target VPC Figure 3: Creating a read replica snapshot and restoring in VPC A ClassicLink proxy enables communication between the source RDS DB instance in EC2 Classic and the target VPC replica as illustrated in Figure 4 ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 5 Figure 4: Set ting up replication between the Classic and VPC read replica Figure 5 illustrates how updates against the source DB instance are stopped and the VPC replica is promoted to master status Figure 5: Cutting over application to the VPC RDS DB i nstance Migration Steps This section lists the steps you need to perform to migrate your RDS DB instance from EC2 Classic to VPC using ClassicLink ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 6 Step 1: Enable ClassicLink for the Target VPC In the Amazon VPC console from the VPC Dashboard select the VPC for which you want to enable ClassicLink select Actions in the drop down list and select Enable ClassicLink Then choose Yes Enable as shown below : Figure 6: Enabling ClassicLink Step 2 : Set up a Proxy Server on an EC2Classic Instance Install a proxy server on an EC2 Classic instance The proxy server forwards traffic to and from the RDS instance in EC2 Classic You can use an open source package such as NGINX for port forwarding For detailed information 
on setting up NGINX see Appendix A Set up appropriate security groups so the proxy server can communicate with the RDS instance in EC2 Classic In the following example the proxy server and the RDS instance in EC2 Classic are members of the same security group that allows traffic within the security group ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 7 Figure 7: Security group setup Step 3: Use ClassicLink Between the Proxy Server and Target VPC In the Amazon EC2 console from the EC2 Instances Dashboard select the EC2 Classic instance running the proxy server and choose ClassicLink on the Actions drop down list to create a ClassicLink connection with the target VPC Select the appropriate security group so that the proxy server can communicate with the RDS DB instance in your VPC In the example in Figure 8 SG A1 is selected Next choose Link to VPC ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 8 Figure 8: ClassicLink connection to VPC security group Step 4: Configure the DB Instance (EC2 Classic) In the Amazon RDS console from the RDS Dashboard under Parameter Groups select the parameter group associated with the RDS DB instance and use Edit Parameters to ensure the innodb_flush_log_at_trx_commit parameter is set to 1 (the default) This ensure s ACID compliance For more information see http://tinyurlcom/innodb flush logattrxcommit This step is necessary only if the value has been changed from the default of 1 Figure 9: Parameter group values on a Classic DB i nstance ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 9 Step 5: Create a U ser on the DB Instance (EC2 Classic) Connect to the RDS DB instance running in EC2 Classic via mysql client to create a user and grant permissions to replicate data Prompt> mysql h classicrdsinstance123456789012us east 1rdsamazonawscom P 3306 u hhar –p MySQL [(none)]> create user replicationuser identified by 'classictoVPC123'; Query OK 0 rows affected (001 sec) MySQL [(none)]> grant replication slave on ** to replicationus er; Query OK 0 rows affected (001 sec) Step 6: Create a Temporary Read Replica (EC2 Classic) Use a temporary read replica to create a snapshot and ensure that you have the correct information to set up replication on the new VPC DB instance In the Amazo n RDS console from the RDS Dashboard under Instances select the EC2 Classic DB instance and select Create Read Replica DB Instance Specify your re plication instance information Figure 10: Classic read replica instance properties ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 10 You then need to spec ify the network and security properties for the replica Figure 11: Classic read replica network and security properties Step 7: Enable Backups on the Read Replica (EC2 Classic) From the RDS Dashboard under Instances select the Read Replica in EC2 Classic and use Modify DB Instances to set the Backup Retention Period to a nonzero number of days Setting this parameter to a positive number enables automated backups ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 11 Figure 12: Enabling b ackups Step 8: Stop Replication on R ead Replica (EC2 Classic) When you are ready to switch over c onnect to the RDS 
replica in EC2 Classic via a mysql client and issue the mysqlrds_stop_ replication command Prompt> mysql h classicrdsreadreplica1chd3laahf8xlus east 1rdsamazonawscom P 3306 u hhar –p MySQL [(none)]> call mysqlrds_stop_replication; + + | Message | + + | Slave is down or disabled | + + 1 row in set (102 sec) Query OK 0 rows affected (102 s ec) MySQL [(none)]> ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 12 Figure 13: Co nfirmation of replica status on the c onsole Using the following show slave status command save the replication status data in a local file You will need it later when setting up replication on the DB instance in VPC Prompt> mysql h classicrdsreadreplica1chd3laahf8xlus east 1rdsamazonawscom P 3306 u hhar p e "show slave status \G" > readreplicastatustxt Step 9 : Create Snapshot from the Read Replica (EC2 Classic) From the RDS Dashboard under Instances select the Read Replica that you just stopped and use Take Snapshot to create a DB snapshot ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 13 Figure 1: Taking a Snapshot of the read r eplica Step 10: Share the S napshot (Optional) If you are migrating across account s you need to share the snapshot From the Amazon RDS console under Snapshots select the recently created read replica and use Share Snapshot to make the snapshot available across account s This step is not required if the target VPC is in same account After sharing the snapshot log in to the new account after this step is finished Figure 2: Sharing a snapshot between accounts If you are migrating to a different account you need to set up a peering connection between the local VPC and the target VPC in the remote account ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 14 You will have to allow access to the security group that you used when you enabled the ClassicLink between the proxy server and VPC Figure 16: Creating a VPC peering connection Figure 17: Enabling ClassicLink over a peering connection ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 15 Figure 18: ClassicLink settings for p eering Step 11: Restore the S napshot in the Target VPC From the Amazon RDS console under Snapshots select the Classic R ead Replica and use Restore Snapshot to restore the Read Replica snapshot You should also select MultiAZ D eployment at this time ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 16 Figure 19: Restoring s napshot in target VPC Note: We highly recommend that you enable the Multi AZ Deployment option during initial creation of the new VPC DB instance If you bypass this step and convert to Multi AZ after switching your application over to the VPC DB instance you can experience a significant performance impact especially for write intensive database w orkloads Under Networking & Security set Publicly Accessible to Yes Next select the target VPC and appropriate subnet groups to ensure connectivity from the VPC RDS DB instance to the Classic Proxy Server ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 17 Figure 20: Setti ng VPC and subnet group on V PC DB instance Figure 3: Security group settings 
for cross account migration Step 12: Enable Backups on VPC RDS DB Instance By default backups are not enabled on read replicas From the Amazon RDS console under Instances select the VPC RDS DB instance and use Modify DB Instances to enable backups ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 18 Figure 422: Setting backup r etention Step 13 : Set up Replication Between VPC and EC2 Classic DB Instance s Retrieve the log file name and log position number from information saved in the previous step Prompt> cat readreplicastatustxt | grep Master_Log_File Master_Log_File: mysql binchangelog001993 Prompt> cat readreplicastatustxt | grep Exec_Master_Log_Pos Exec_Master_Log_Pos: 120 Connect to the VPC RDS DB instance via a mysql client through the ClassicLink proxy and set the EC2 Classic RDS DB instance as the replication master by issuing the rds_start_ replication command Use the private IP address of the EC2 Classic proxy server as well as the log position from the output above MySQL [(none)]> call mysqlrds_set_external_master(' <private ip addressofproxy>3306'replicationuser''classictoVPC123' 'mysql binchangelog001993 '1200); Query OK 0 rows affected (012 sec) MySQL [(none)]> call mysqlrds_start_replication; + + | Message | ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 19 + + | Slave running normally | + + 1 row in set (103 sec) Query OK 0 rows affected (103 sec) Verify the replication status on VPC Read Replica using the show slave status command MySQL [(none)]> show slave status \G; Step 14: Switch to the VPC RDS DB Instance After ensuring that the data in the VPC read replica has caught up to the EC2 Classic master c onfigure your application to stop writing data to the RDS DB instance in EC2 Classic After the replication lag has caught up c onnect to the VPC RDS DB instance via a mysql client and issue the rds_stop_ replication command MySQL [(none)]> call mysqlrds_stop_replication; At this point the VPC will stop replicating data from the master You can now promote the replica by connect ing to the VPC RDS DB instance via a mysql client and issuing the mysqlrds_rese t_external_master command MySQL [(none)]> call mysqlrds_reset_external_master; + + | Message | + + | Slave is down or disabled | + + 1 row in set (104 sec) + + | message | + + | Slave has been reset | ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 20 + + 1 row in set (312 sec) Query OK 0 rows affected (312 sec) You can now change the endpoint in your application to write to the VPC RDS DB instance Step 1 5: Take a Snapshot of the VPC RDS DB Instance From the Amazon RDS console under Instances select the VPC RDS DB instance and use Take Snapshot to capture a user snapshot for recovery purposes Figure 23: Taking a snapshot of the DB instance in VPC Step 1 6: Change the V PC DB Instance to be ‘Privately’ A ccessible (Optional) After the migration to the new VPC RDS DB instance is complete you can make it be privately (not publicly) accessible From the Amazon RDS console under Instances select the DB instance and click Modify Under Network & Security set Publicly Accessible to No ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 21 Figure 24: Setting instance to not be publicly accessible Step 1 7: Move the VPC 
DB Instance into P rivate Subnets (Optional) You can edit the DB Subnet Group s membership for your VPC RDS DB instance to move the VPC RDS DB i nstance to a private subnet In the following example the subnets 1721620/24 and 1721630/24 are private subnets Figure 25: Configuring subnet groups To change the private IP address of the RDS DB instance in the VPC you have to perform a scale up or scale down operation For example you could choose a larger instance size After the IP address changes you can scale again to the original instance size ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 22 Figure 26: Forcing a scale optimization Note: Alternat ively you can open a n AWS support request (https://awsamazoncom/contact us/) and the RDS Operations team will move the migrated VPC RDS instance to the private subnet Alternative Approaches There are other ways to approach migrating your Amazon RDS MySQL databases from EC2 Classic to EC2 VPC We cover two alternatives here One approach is to use AWS Database Migrati on Service and another is to specify a new VPC subnet for a DB instance using the AWS Management Console AWS Database Migration Service (DMS) An alternative approach to migration is to use AWS Database Migration Service (DMS) AWS DMS can migrate your data to and from the most widely used commercial and open source databases The service supports homogenous migrations such as Amazon RDS to Amazon RDS as well as heterogeneous migrations between different database platforms such as Orac le to Amazon Aurora or Microsoft SQL Server to MySQL The source database remains fully operational during the migration minimizing downtime to applications that rely on the database ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 23 Although AWS DMS can provide comprehensive ongoing replication of data it replicates only a limited amount of data definition language (DDL) AWS DMS doesn't propagate items such as indexes users privileges stored procedures and other database changes not directly related to table data In addition AWS DMS does not auto matically leverage RDS snapshots for the initial instance creation which can increase migration time Changing the VPC Subnet for a DB Instance Amazon RDS provides a feature that allows you to easily move an RDS DB instance in EC2 Classic to a VPC You specify a new VPC subnet for an existing DB instance in the Amazon RDS console the Amazon RDS API or the AWS command line tools To specify a new subnet group in the Amazon RDS console under Network & Security Subnet Group expand the drop down list and select the subnet group that you want from the list You can choose to apply this change immediately or during the next scheduled maintenance window However there are a few limitations with this approach: The DB instance isn’t available during the move The move could take between 5 to 10 minutes Moving Multi AZ instances to a VPC is n’t currently supported Moving an instance with read replicas to a VPC isn’t currently supported ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 24 Figure 27: Specifying a new subnet group (in a VPC) for a database instance If these limitations are acceptable for your DB instances we recommend that you test this feature by restoring a snapshot of your database in EC2 Classic and then moving it to your VPC If 
these limitations are not acceptable then the ClassicLin k approach presented in this white paper will enable you to minimize downtime during the migration to your VPC Conclusion This paper highlights the key steps for migrating RDS MySQL instances from EC2 Classic to EC2 VPC environments using ClassicLink and RDS read replicas This approach enables minimal down time for production environments Contributors The following individuals and organizations contributed to this document: Harshal Pimpalkhute Sr Product Manager Amazon EC2 Networ king Jaime Lichauco Database Administrator Amazon RDS ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 25 Korey Knote Database Administrator Amazon RDS Brian Welcker Product Manager Amazon RDS Prahlad Rao Solutions A rchitect Amazon Web Services Further Reading For additional help please consult the following sources: http://docsawsamazoncom/AmazonRDS/latest/UserGuide/USER_V PChtml http://docsawsamazoncom/AmazonRDS/latest/UserGuide/USER_V PCWorkingWithRDSInstanceinaVPChtml http://docsawsamazoncom/AmazonRDS/latest/ UserGuide/CHAP_M ySQLhtml http://docsawsamazoncom/AmazonVPC/latest/UserGuide/VPC_Net workinghtml http://docsawsamazoncom/AmazonVPC/latest/UserGuide/vpc classiclinkhtml Appendix A: Set Up Proxy Server in Classic Use an Amazon Machine Image ( AMI ) of your choice to launch an EC2 Classic instance The following example is based on the AMI Ubuntu Server 1404 LTS (HVM) Connect to the EC2 Classic instance and install NGINX: Prompt> sudo apt get update Prompt> sudo wget http://nginxorg/download/nginx 1912targz Prompt> sudo tar xvzf nginx 1912targz Prompt> cd nginx 1912 Prompt> sudo apt get install build essential Prompt> sudo apt get install libpcre3 libpcre3 dev Prompt> sudo apt get install zlib1g dev Prompt> sudo /configure withstream ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 26 Prompt> sudo make Prompt> sudo make install Edit the NGINX daemon file /etc/init/nginxconf : # /etc/init/nginxconf – Upstart file description "nginx http daemon" author “email" start on (filesystem and net deviceup IFACE=lo) stop on runlevel [!2345] env DAEMON=/usr/local/nginx/sbin/nginx env PID=/usr/local/nginx/logs/nginxpid expect fork respawn respawn limit 10 5 prestart script $DAEMON t if [ $? ne 0 ] then exit $? fi end script exec $DAEMON Edit the NGINX configuration file /usr/local/nginx/conf/nginxconf : # /usr/local/nginx/conf/nginxconf NGINX configuration file worker_processes 1; events { worker_connections 1024; } stream { ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 27 server { listen 3306; proxy_pass classicrdsinstance123456789012us east 1rdsamazonawscom:3306; } } From the command line start NGINX : Prompt> sudo initctl reload configuration Prompt> sudo initctl list | grep nginx Prompt> sudo initctl start nginx Configure NGINX port forwarding: # /usr/local/nginx/conf/nginxconf NGINX configuration file worker_processes 1; events { worker_connections 1024; } stream { server { listen 3306; proxy_pass classicrdsinstance123456789012us east 1rdsamazonawscom:3306; } }
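As a quick sanity check that is not part of the original appendix, you can confirm that the proxy is forwarding the MySQL port before pointing replication at it. The sketch below runs from a Windows instance in the target VPC using built-in PowerShell (the same check can be done with any TCP client); the private IP address is a hypothetical placeholder for your EC2-Classic proxy server.

# Hedged verification sketch: confirm the ClassicLink proxy is reachable on port 3306
# from an instance in the target VPC before calling mysql.rds_set_external_master.
# Replace the placeholder with the proxy server's actual private IP address.

$proxyPrivateIp = "10.0.0.10"    # hypothetical private IP of the EC2-Classic proxy

$result = Test-NetConnection -ComputerName $proxyPrivateIp -Port 3306
if ($result.TcpTestSucceeded) {
    Write-Host "Proxy is forwarding TCP 3306; replication can be configured against it."
} else {
    Write-Warning "Cannot reach the proxy on 3306 - check security groups and the ClassicLink association."
}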
|
General
|
consultant
|
Best Practices
|
NIST_Cybersecurity_Framework_CSF
|
NIST Cybersecurity Framework (CSF) Aligning to the NIST CSF in the AWS Cloud First Published January 2019 Updated October 12 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Intended audience 1 Introduction 1 Security benefits of a dopting the NIST CSF 3 NIST CSF implementation use cases 4 Healthcare 4 Financial services 5 International adoption 5 NIST CSF and AWS Best Practices 6 CSF core function: Identify 7 CSF core function: Protect 11 CSF core function: Detect 14 CSF core function: Respond 16 CSF core function: Recover 17 AWS services alignment with the CSF 19 Conclusion 20 Appendix A – Third party assessor validation 21 Contributors 22 Document revisions 22 Abstract Governments industry sectors and organizations around the world are increasingly recognizing the NIST Cybersecurity Framework (CSF) as a recommended cybersecurity baseline to help improve the cybersecurity risk management and resilience of their systems This paper evaluates the NIST CSF and the many AWS Cloud offerings public and commercial sector customers can use to align to the NIST CSF to improve your cybersecurity posture It also provides a thirdparty validated attestation confirming AWS services’ alignment with the NIST CSF risk management practices allowing you to properly protect your data across AWS Amazon Web Services NIST Cybersecurity Framework (CSF) 1 Intended audience This document is intended for cybersecurity professionals risk management officers or other organization wide decision makers considering how to implement a new or improve an existing cybersecurity framework in their organization For details on how to configure the AWS services identified in this document contact your AWS Solutions Architect Introduction The NIST Framework for Improving Critical Infrastructure Cybersecurity (NIST Cybersecurity Framework or CSF) was original ly published in February 2014 in response to Presidential Executive Order 13636 “Improving Critical Infrastructure Cybersecurity” which called for the development of a voluntary framework to help organizations improve the cybersecurity risk management and resilience of their systems NIST conferred with a broad range of partners from government industry and academia for over a year to build a consensus based set of sound guidelines and practices The Cybersecurity Enhancement Act of 2014 reinforced the legitimacy and authority of the CSF by codifying it and its voluntary adoption into law until the Presidential Executive Order on “Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure” signed on May 11 2017 mandated the use of CSF for all US federal entities While intended for adoption by the critical infrastructure sector the foundational set of cybersecurity disciplines comprising th e CSF have been supported by government and 
industry as a recommended baseline for use by any organization regardless of its sector or size Industry is increasingly referencing the CSF as a de facto cybersecurity standard Amazon Web Services NIST Cybersecurity Framework (CSF) 2 In Feb 2018 the International Standards Organization released “ISO/IEC 27103:2018 — Information tech nology — Security techniques Cybersecurity and ISO and IEC Standards” This technical report provides guidance for implementing a cybersecurity framework leveraging existing standards In fact ISO 27103 promotes the same concepts and best practices refl ected in the NIST CSF; specifically a framework focused on security outcomes organized around five functions (Identify Protect Detect Respond Recover) and foundational activities that crosswalk to existing standards accreditations and frameworks Ado pting this approach can help organizations achieve security outcomes while benefiting from the efficiencies of re using instead of re doing Credit: Natasha Hanacek/NIST https://wwwnistgov/industry impacts/cybersecurity According to Gartner the CSF is used by approximately 30 percent of US private sector organizations and projected to reach 50 perce nt by 20201 As of the release of this report 16 US critical infrastructure sectors use the CSF and over 21 states have implemented it2 In addition to critical infrastructure and other private sector organizations other countries including Italy an d Israel are leveraging the CSF as the foundation for their national cybersecurity guidelines Since Fiscal Year 2016 US federal agency Federal Information Security Modernization Act (FISMA) metrics have been organized around the CSF and now reference it as a “standard for managing and reducing cybersecurity risks” According to the FY16 FISMA Report to Congress the Council of the Inspectors General on Integrity and Efficiency (CIGIE) aligned IG metrics with the five CSF functions to evaluate Amazon Web Services NIST Cybersecurity Framework (CSF) 3 agency p erformance and promote consistent and comparable metrics and criteria between Chief Information Officer (CIO) and Inspector General (IG) assessments The most common applications of the CSF have manifested in three distinct scenarios: • Evaluation of an orga nization’s enterprise wide cybersecurity posture and maturity by conducting an assessment against the CSF model (Current Profile) determine the desired cybersecurity posture (Target Profile) and plan and prioritize resources and efforts to achieve the Target Profile • Evaluation of current and proposed products and services to meet security objectives aligned to CSF categories and subcategories to identify capability gaps and opportunities to reduce overlap/duplicativ e capabilities for effici ency • A reference for restructuring their security teams processes and training This paper identifies the key capabilities of AWS service offerings available globally that US federal state and local agencies; global critical infrastructure owners and operators; as well as global commercial enterprises can leverage to align to the CSF (security in the cloud) It also provides support to establish the alignment of AWS Cloud services to the CSF as validated by a thirdparty asses sor (security of the cloud) based on compliance standards including FedRAMP Moderate3 and ISO 9001/27001/27017/27018 4 This means that you can have confidence that AWS services deliver on the security objectives and outcomes identified in the CSF and that you can use AWS solutions to support your own alignment with 
the CSF and any required compliance standard. For US federal agencies in particular, leveraging AWS solutions can facilitate your compliance with FISMA reporting metrics. This combination of outcomes should empower you with confidence in the security and resiliency of your data as you migrate critical workloads to the AWS Cloud.

Security benefits of adopting the NIST CSF
The CSF offers a simple yet effective construct consisting of three elements: Core, Tiers, and Profiles. The Core represents a set of cybersecurity practices, outcomes, and technical, operational, and managerial security controls (referred to as Informative References) that support the five risk management functions: Identify, Protect, Detect, Respond, and Recover. The Tiers characterize an organization's aptitude and maturity for managing the CSF functions and controls, and the Profiles are intended to convey the organization's "as is" and "to be" cybersecurity postures. Together, these three elements enable organizations to prioritize and address cybersecurity risks consistent with their business and mission needs. It is important to note that implementation of the Core, Tiers, and Profiles is the responsibility of the organization adopting the CSF (for example, a government agency, financial institution, or commercial start-up).

This paper focuses on AWS solutions and capabilities supporting the Core that can enable you to achieve the security outcomes (Subcategories) in the CSF. It also describes how AWS services that have been accredited under FedRAMP Moderate and ISO 9001/27001/27017/27018 align to the CSF. The Core references security controls from widely adopted, internationally recognized standards such as ISO/IEC 27001, NIST 800-53, Control Objectives for Information and Related Technology (COBIT), the Council on Cybersecurity (CCS) Top 20 Critical Security Controls (CSC), and the ANSI/ISA-62443 Standards: Security for Industrial Automation and Control Systems. While this list represents some of the most widely reputed standards, the CSF encourages organizations to use any controls catalogue that best meets their organizational needs. The CSF was also designed to be size-, sector-, and country-agnostic; therefore, public and private sector organizations should have assurance in the applicability of the CSF regardless of the type of entity or nation-state location.

NIST CSF implementation use cases

Health care
The US Department of Health and Human Services completed a mapping of the Health Insurance Portability and Accountability Act of 1996 (HIPAA)⁵ Security Rule to the NIST CSF. Under HIPAA, covered entities and business associates must comply with the HIPAA Security Rule to ensure the confidentiality, integrity, and availability of protected health information.⁶ Since HIPAA does not have a set of controls that can be assessed or a formal accreditation process, covered entities and business associates like AWS are HIPAA eligible based on alignment with NIST 800-53 security controls that can be tested and verified in order to place services on the HIPAA eligibility list. The mapping between the NIST CSF and the HIPAA
Security Rule promotes an additional layer of security, since assessments performed for certain categories of the NIST CSF may be more specific and detailed than those performed for the corresponding HIPAA Security Rule requirement.

Financial services
The US Financial Services Sector Coordinating Council⁷ (FSSCC), comprised of 70 financial services associations, institutions, and utilities/exchanges, developed a sector-specific profile: a customized version of the NIST CSF that addresses unique aspects of the sector and its regulatory requirements. The Financial Services Sector Specific Cybersecurity Profile, drafted collaboratively with regulatory agencies, is a means to harmonize cybersecurity-related regulatory requirements. For example, the FSSCC mapped the "Risk Management Strategy" category to nine different regulatory requirements and determined that the language and definitions, while different, largely addressed the same security objective.

International adoption
Outside of the US, many countries have leveraged the NIST CSF for commercial and public sector use. Italy was one of the first international adopters of the NIST CSF and developed a national cybersecurity strategy against the five functions. In June 2018, the UK aligned its Minimum Cyber Security Standard, mandatory for all government departments, to the five functions. Additionally, Israel and Japan localized the NIST CSF into their respective languages, with Israel creating a cyber defense methodology based on its own adaptation of the NIST CSF. Uruguay performed a mapping of the CSF to ISO standards to strengthen connections to international frameworks. Switzerland, Scotland, Ireland, and Bermuda are also among the countries that are using the NIST CSF to improve cybersecurity and resiliency across their public and commercial sector organizations.

NIST CSF and AWS Best Practices
While this paper serves as a resource to provide organizational lifecycle risk management that connects business and mission objectives to cybersecurity activities, AWS also provides other best-practices resources for customers moving their organizations to the cloud (AWS Cloud Adoption Framework) and customers designing, building, or optimizing solutions on AWS (Well-Architected Framework).⁸ These resources supply complementary tools to support an organization in building and maturing its cybersecurity risk management programs, processes, and practices in the cloud. More specifically, this NIST CSF whitepaper can be used in parallel with either of these best-practices guides, serving as the foundation for your security program, with the Cloud Adoption Framework or Well-Architected Framework as an overlay for operationalizing the CSF security outcomes in the cloud.

For customers migrating to the cloud, the AWS Cloud Adoption Framework (AWS CAF) provides guidance that supports each unit in your organization, so that each area understands how to update skills, adapt existing processes, and introduce new processes to take maximum advantage of the services provided by cloud computing. Thousands of organizations around the world have successfully migrated their businesses to the cloud, relying on the AWS CAF to guide their efforts. AWS and our partners provide tools and services that can help you every step of the way to ensure complete understanding and transition. (https://d1.awsstatic.com/whitepapers/aws_cloud_adoption_framework.pdf)

CSF core function: Identify
This section
addresses the six categories that comprise the "Identify" function (Asset Management, Business Environment, Governance, Risk Assessment, Risk Management Strategy, and Supply Chain Risk Management), which together "develop an organizational understanding to manage cybersecurity risk to systems, people, assets, data, and capabilities."

CSF core subcategories for Identify:
• Asset Management (ID.AM) — The data, personnel, devices, systems, and facilities that enable the organization to achieve business purposes are identified and managed consistent with their relative importance to business objectives and the organization's risk strategy.
• Business Environment (ID.BE) — The organization's mission, objectives, stakeholders, and activities are understood and prioritized; this information is used to inform cybersecurity roles, responsibilities, and risk management decisions.
• Governance (ID.GV) — The policies, procedures, and processes to manage and monitor the organization's regulatory, legal, risk, environmental, and operational requirements are understood and inform the management of cybersecurity risk.
• Risk Assessment (ID.RA) — The organization understands the cybersecurity risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals.
• Risk Management Strategy (ID.RM) — The organization's priorities, constraints, risk tolerances, and assumptions are established and used to support operational risk decisions.
• Supply Chain Risk Management (ID.SC) — The organization's priorities, constraints, risk tolerances, and assumptions are established and used to support risk decisions associated with managing supply chain risk. The organization has established and implemented the processes to identify, assess, and manage supply chain risks.

Customer responsibility
Identifying and managing IT assets is the first step in effective IT governance and security, and yet it has been one of the most challenging. The Center for Internet Security (CIS)⁹ recognized the foundational importance of asset inventory and assigned physical and logical asset inventory as controls #1 and #2 of its Top 20. However, an accurate IT inventory, of both physical and logical assets, has been difficult to achieve and maintain for organizations of all sizes and resources. Inventory solutions are limited in their ability to identify and report on all IT assets across the organization for various reasons, such as network segmentation preventing the solution from "seeing" and reporting from various parts of the enterprise network, endpoint software agents not being fully deployed or functional, and incompatibility across a broad range of disparate technologies. Unfortunately, those assets that are "lost" or unaccounted for pose the greatest risk: if they are not tracked, they are most likely not receiving the most recent patches and updates, are not replaced during lifecycle refreshes, and malware may be allowed to exploit and maintain its hold on the asset.

Migrating to AWS provides two key benefits that can mitigate the challenges of maintaining asset inventories in an on-premises environment. First, AWS assumes sole responsibility for managing the physical assets that comprise the AWS Cloud infrastructure. This can significantly reduce the burden of physical asset management for those customer workloads that are hosted in AWS. The customer is still responsible for maintaining physical asset inventories for the equipment they keep in their environment (data centers, offices,
deployed IoT, mobile workforce, and so on). The second benefit is the ability to achieve deep visibility and asset inventory for logical assets hosted in a customer's AWS account. This may sound like a bold claim, but it quickly becomes evident: it does not matter whether an EC2 instance (virtual server) is turned on or off, whether an endpoint agent is installed and running, what network segment the asset is on, or any other factor. Whether using the AWS Management Console as a visual point-and-click interface, the command line interface (CLI), or the application programming interface (API), customers can query and obtain visibility of AWS service assets. This reduces the inventory burden on the customer to the software they install on their EC2 instances and the data assets they store in AWS. AWS also has services that can perform this capability, like Amazon Macie, which can identify, classify, label, and apply rules to data stored in Amazon Simple Storage Service (Amazon S3).
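To make the API-driven visibility described above concrete, the following is a minimal sketch using the AWS SDK for Python (boto3). It is illustrative only and is not part of the CSF guidance or this paper's original text: the Region list and the "Owner" tag key are assumptions you would replace with your own conventions.

    # Minimal logical-asset inventory sketch: every EC2 instance in the
    # chosen Regions is reported, running or stopped, agent or no agent.
    import boto3

    def list_ec2_inventory(regions=("us-east-1", "us-west-2")):
        inventory = []
        for region in regions:
            ec2 = boto3.client("ec2", region_name=region)
            paginator = ec2.get_paginator("describe_instances")
            for page in paginator.paginate():
                for reservation in page["Reservations"]:
                    for instance in reservation["Instances"]:
                        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                        inventory.append({
                            "Region": region,
                            "InstanceId": instance["InstanceId"],
                            "State": instance["State"]["Name"],
                            "Owner": tags.get("Owner", "untagged"),  # assumed tag key
                        })
        return inventory

    if __name__ == "__main__":
        for item in list_ec2_inventory():
            print(item)

Because the inventory comes from the EC2 control plane rather than from an agent on the instance, stopped and untagged instances are reported as well, which is exactly the property the paragraph above highlights.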
An organization that understands its mission, stakeholders, and activities can utilize several AWS services to automate processes, assign business risk to IT systems, and manage user roles. For example, AWS Identity and Access Management (IAM) can be used to assign access roles based on business roles for people and services. Tags applied to services and data can be used to prioritize automated tasks and to include pre-determined risk decisions, or stop gates for a person to evaluate the data presented and decide which direction the system should take.

Governance is the "unsung hero" of cybersecurity. It lays the foundation and sets the standard for people, processes, and technology. AWS provides several services and capabilities, such as AWS IAM, AWS Organizations, AWS Config, AWS Systems Manager, AWS Service Catalog, and others, that customers can use to implement, monitor, and enforce governance. Customers can leverage AWS compliance with over 50 standards, such as FedRAMP, ISO, and PCI DSS.¹⁰ AWS provides information about its risk and compliance program to enable customers to incorporate AWS controls into their governance framework. This information can assist customers in documenting a complete control and governance framework with AWS included as an important part of that framework. Services such as Amazon Inspector identify technical vulnerabilities that can be fed into a risk posture and management process. The enhanced visibility that the cloud provides increases the accuracy of a customer's risk posture, allowing risk decisions to be made on more substantial data.

AWS responsibility
AWS maintains stringent access control management by providing data center access and information only to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his or her access is immediately revoked, even if they continue to be an employee of Amazon or AWS. All physical access to data centers by AWS employees is routinely logged and audited. Controls in place limit access to systems and data, and provide that access to systems or data is restricted and monitored. In addition, customer data and server instances are logically isolated from other customers by default. Privileged user access control is reviewed by an independent auditor during the AWS SOC 1, ISO 27001, PCI, and FedRAMP audits.

AWS risk management activities include the system development lifecycle (SDLC), which incorporates industry best practices and formal design reviews by the AWS Security team, threat modeling, and completion of a risk assessment. In addition, the AWS control environment is subject to regular internal and external risk assessments. AWS engages with external certifying bodies and independent auditors to review and test the AWS overall control environment.

AWS management has developed a strategic business plan that includes risk identification and the implementation of controls to mitigate or manage risks. AWS management re-evaluates the strategic business plan at least biannually. This process requires management to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks. In addition, the AWS control environment is subject to various internal and external risk assessments. The AWS Compliance and Security teams have established an information security framework and policies based on the Control Objectives for Information and related Technology (COBIT) framework, and have effectively integrated the ISO 27001 certifiable framework based on ISO 27002 controls, the American Institute of Certified Public Accountants (AICPA) Trust Services Principles, PCI DSS v3.2, and the National Institute of Standards and Technology (NIST) Publication 800-53 Rev 4 (Recommended Security Controls for Federal Information Systems). AWS maintains the security policy, provides security training to employees, and performs application security reviews. These reviews assess the confidentiality, integrity, and availability of data, as well as alignment with the information security policy.

AWS Security regularly scans all internet-facing service endpoint IP addresses for vulnerabilities (these scans do not include customer instances). AWS Security notifies the appropriate parties to remediate any identified vulnerabilities. In addition, external vulnerability threat assessments are performed regularly by independent security firms. Findings and recommendations resulting from these assessments are categorized and delivered to AWS leadership. These scans are performed for the health and viability of the underlying AWS infrastructure and are not meant to replace the customer's own vulnerability scans required to meet their specific compliance requirements.

AWS maintains formal agreements with key third-party suppliers and implements appropriate relationship management mechanisms in line with their relationship to the business. The AWS third-party management processes are reviewed by independent auditors as part of AWS ongoing compliance with SOC and ISO 27001. In alignment with ISO 27001 standards, AWS hardware assets are assigned an owner, and are tracked and monitored by AWS personnel with AWS proprietary inventory management tools. The AWS procurement and supply chain team maintains relationships with all AWS suppliers. Refer to ISO 27001 standards, Annex A, domain 8 for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.

CSF core function: Protect
This section addresses the six categories that comprise the "Protect" function: Access Control, Awareness and Training, Data Security, Information Protection Processes and Procedures, Maintenance, and Protective Technology. It also highlights AWS solutions that you can leverage to align to this function.

CSF core subcategories for Protect:
• Identity Management, Authentication, and Access Control (PR.AC) — Access to physical and logical assets and associated facilities is limited to authorized users, processes, and devices, and is managed consistent with the assessed risk of unauthorized access to authorized activities and transactions.
• Awareness and Training (PR.AT) — The organization's personnel and partners are provided cybersecurity awareness education and are trained to perform their cybersecurity-related duties and responsibilities consistent with related policies, procedures, and agreements.
• Data Security (PR.DS) — Information and records (data) are managed consistent with the organization's risk strategy to protect the confidentiality, integrity, and availability of information.
• Information Protection Processes and Procedures (PR.IP) — Security policies (that address purpose, scope, roles, responsibilities, management commitment, and coordination among organizational entities), processes, and procedures are maintained and used to manage protection of information systems and assets.
• Maintenance (PR.MA) — Maintenance and repairs of industrial control and information system components are performed consistent with policies and procedures.
• Protective Technology (PR.PT) — Technical security solutions are managed to ensure the security and resilience of systems and assets, consistent with related policies, procedures, and agreements.

Customer responsibility
Of the three security objectives of confidentiality, integrity, and availability, the third can be very difficult to achieve in an on-premises environment with only one or two data centers. This is one of the greatest benefits of hyperscale cloud service providers, and of AWS in particular, due to the unique AWS infrastructure architecture. You can distribute your application across multiple Availability Zones (AZs), which are logical fault isolation zones within a Region. If the application is architected properly, with enhanced capacity management and automatic scaling capabilities, your application and data would not be impacted by a single data center outage. If you take advantage of all the Availability Zones in a Region (where there are three or more), the loss of two data centers may still not have any impact on your application. Likewise, services such as Amazon S3 automatically replicate your data to at least three Availability Zones in the Region, for a designed availability of 99.99% and data durability of 99.999999999%.

Confidentiality can be achieved through encryption at rest and encryption in transit, using AWS encryption capabilities such as Amazon Elastic Block Store (EBS) encryption, Amazon S3 encryption, Transparent Data Encryption for RDS SQL Server and RDS Oracle, and VPN Gateway, or by using your existing encryption solution. AWS supports TLS/SSL encryption for all of its API endpoints, along with the ability to create VPN tunnels to protect data in transit. AWS also provides a Key Management Service and dedicated Hardware Security Module appliances to encrypt data at rest. You can choose to secure your data using the AWS-provided capabilities or use your own security tools.
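As one concrete illustration of encryption at rest, the boto3 sketch below turns on default SSE-KMS encryption for an S3 bucket. The bucket name and key alias are placeholders (assumptions, not values from this paper), and both the bucket and the KMS key are assumed to already exist.

    # Enforce server-side encryption with a customer-managed KMS key as the
    # default for all new objects written to the bucket.
    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_encryption(
        Bucket="example-data-bucket",          # placeholder bucket name
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": "alias/example-data-key",  # placeholder key alias
                    }
                }
            ]
        },
    )

    # Read the configuration back to confirm what now applies to new objects.
    print(s3.get_bucket_encryption(Bucket="example-data-bucket"))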
Integrity can be facilitated in a variety of ways. Amazon CloudWatch and AWS CloudTrail have integrity checks, customers can use digital signatures for API calls and logs, MD5 checksums can be employed in Amazon S3, and there are numerous third-party solutions from our partners. AWS Config even supports the integrity of the customer's AWS environment by monitoring for changes.

Within the customer's AWS environment, services such as AWS IAM, Amazon Cognito, AWS Single Sign-On (SSO), Amazon Cloud Directory, and AWS Directory Service, together with features such as Multi-Factor Authentication, allow you to implement, manage, secure, monitor, and report on user identities, authentication standards, and access rights.

You are responsible for training your staff and end users on the policies and procedures for managing your environment. For technical training, AWS and our training partners provide comprehensive training for various roles, such as Solutions Architects, SysOps staff, developers, and security teams.¹¹

AWS responsibility
AWS employs the concept of least privilege, whereby employee access is granted based on business need and job responsibilities, providing temporary, role-based access to only those resources and data required at that moment in time. AWS provides physical data center access only to approved employees. All employees who need data center access must first apply for access and provide a valid business justification. These requests are granted based on the principle of least privilege, where requests must specify to which layer of the data center the individual needs access, and are time-bound. Requests are reviewed and approved by authorized personnel, and access is revoked after the requested time expires. Once granted admittance, individuals are restricted to areas specified in their permissions. Third-party access is requested by approved AWS employees, who must apply for third-party access and provide a valid business justification. These requests are likewise granted based on the principle of least privilege, must specify the data center layer to which the individual needs access, and are time-bound; they are approved by authorized personnel, and access is revoked after the requested time expires. Once granted admittance, individuals are restricted to areas specified in their permissions. Anyone granted visitor badge access must present identification when arriving on site, and is signed in and escorted by authorized staff.

AWS has implemented formal, documented security awareness and training policies and procedures for our employees and contractors that address purpose, scope, roles, responsibilities, management commitment, coordination among organizational entities, and compliance. The AWS FedRAMP and ISO 27001 certifications document in detail the policies and procedures by which AWS operates, maintains controls, approves, deploys, reports on, and monitors all changes to its environment and infrastructure, as well as how AWS provides redundancy and emergency responses for its physical infrastructure. Additionally, the certifications document in detail the manner in which all remote maintenance for AWS services is approved, performed, logged, and reviewed so as to prevent unauthorized access. They also address the manner in which AWS sanitizes media and destroys data. AWS uses products and procedures that align with NIST Special Publication 800-88, Guidelines for Media Sanitization. You are also responsible for preparing the policies, processes, and procedures for data protection.

To support billing and maintenance requirements, AWS assets are assigned an owner, and are tracked and monitored with AWS proprietary inventory management tools. AWS asset owner maintenance procedures are carried out using a proprietary tool with specified checks that must be completed according to the documented maintenance
schedule. Third-party auditors test AWS asset management controls by validating that the asset owner is documented and that the condition of the assets is visually inspected according to the documented asset management policy.

AWS services can also greatly improve how customers manage and perform systems maintenance. First, based on the AWS infrastructure previously discussed, an application that is architected for high availability across multiple Availability Zones allows you to segregate maintenance activities. You can take assets within one Availability Zone offline for maintenance without affecting the performance of the overall application, as the duplicate assets in the other Availability Zones scale out and pick up the load. Maintenance can be accomplished one Availability Zone at a time and can be automated, with stop gates and reporting as required. In addition, entire architectures can be shifted from a Dev/Test (Blue) environment to an operations (Green) environment, and vice versa, where that method is desired.

CSF core function: Detect
This section addresses the three categories that comprise the "Detect" function: Anomalies and Events, Security Continuous Monitoring, and Detection Processes. It summarizes the key AWS solutions you can leverage to align to this function.

CSF core subcategories for Detect:
• Anomalies and Events (DE.AE) — Anomalous activity is detected in a timely manner, and the potential impact of events is understood.
• Security Continuous Monitoring (DE.CM) — The information system and assets are monitored at discrete intervals to identify cybersecurity events and verify the effectiveness of protective measures.
• Detection Processes (DE.DP) — Detection processes and procedures are maintained and tested to ensure timely and adequate awareness of anomalous events.

Customer responsibility
The ability to gather, synthesize, and alert on security-relevant events is fundamental to any cybersecurity risk management program. The API-driven nature of cloud technology provides a new level of visibility and automation not previously possible. With every action taken resulting in one or more audit records, AWS provides a wealth of activity information available to customers within their account structure. However, the volume of data can present its own challenges. Finding the proverbial "needle in the haystack" is a real problem, but the capacity and capabilities the cloud provides are well suited to resolve these challenges. With the appropriate log processing infrastructure, automation, and data analysis, it is possible to achieve near real-time detection and response for critical events while filtering out false positives and low or accepted risks.

AWS has several services that can be utilized as part of a comprehensive security operations strategy for nearly continuous monitoring and threat detection. At the fundamental level, there are services such as AWS CloudTrail for logging all API calls, where the logs can be digitally signed and encrypted and then stored in a secure Amazon S3 bucket. Virtual Private Cloud (VPC) Flow Logs monitor all network activity going in and out of your VPC. There is also Amazon CloudWatch, which monitors your AWS environment and generates alerts, similar to a Security Information and Event Management (SIEM) system, and whose data can be ingested into a customer's on-premises SIEM.
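As a small, hedged example of the logging foundation just described, the boto3 sketch below creates a multi-Region CloudTrail trail with log file validation enabled. The trail and bucket names are placeholders, and the S3 bucket is assumed to already exist with a bucket policy that allows CloudTrail to write to it; neither the names nor this exact configuration come from the paper itself.

    # Record API activity account-wide and enable digest files so the
    # integrity of delivered log files can be verified later.
    import boto3

    cloudtrail = boto3.client("cloudtrail")

    cloudtrail.create_trail(
        Name="org-activity-trail",              # placeholder trail name
        S3BucketName="example-cloudtrail-logs", # placeholder, pre-existing bucket
        IsMultiRegionTrail=True,
        IncludeGlobalServiceEvents=True,
        EnableLogFileValidation=True,
    )

    # A trail records nothing until logging is started explicitly.
    cloudtrail.start_logging(Name="org-activity-trail")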
There are also more advanced services, such as Amazon GuardDuty, which correlates activity within your AWS environment with threat intelligence from multiple sources and provides additional risk context and anomaly detection. Amazon Macie is another advanced service that can identify sensitive data, classify and label it, and track its location and access. Some customers may even choose to take advantage of AWS artificial intelligence (AI) and machine learning (ML) services to model and analyze log data.

AWS responsibility
AWS provides near real-time alerts when the AWS monitoring tools show indications of compromise or potential compromise, based upon threshold alarming mechanisms determined by the AWS service and security teams. AWS correlates information gained from logical and physical monitoring systems to enhance security on an as-needed basis. Upon assessment and discovery of risk, Amazon disables accounts that display atypical usage matching the characteristics of bad actors. AWS employees are trained on how to recognize suspected security incidents and where to report them. When appropriate, incidents are reported to the relevant authorities.

AWS maintains the AWS Security Bulletins webpage to notify customers of security and privacy events affecting AWS services. Customers can subscribe to the Security Bulletin RSS feed to keep abreast of security announcements on the Security Bulletin webpage. The AWS Support team maintains a Service Health Dashboard webpage to alert customers to any broadly impacting availability issues.

CSF core function: Respond
This section addresses the five categories that comprise the "Respond" function: Response Planning, Communications, Analysis, Mitigation, and Improvements. We also summarize the key AWS solutions that you can leverage to align to this function.

CSF core subcategories for Respond:
• Response Planning (RS.RP) — Response processes and procedures are run and maintained to ensure timely response to detected cybersecurity events.
• Mitigation (RS.MI) — Activities are performed to prevent expansion of an event, mitigate its effects, and eradicate the incident.
• Communications (RS.CO) — Response activities are coordinated with internal and external stakeholders, as appropriate, to include external support from law enforcement agencies.
• Analysis (RS.AN) — Analysis is conducted to ensure adequate response and support recovery activities.
• Improvements (RS.IM) — Organizational response activities are improved by incorporating lessons learned from current and previous detection and response activities.

Customer responsibility
The time between detection and response is critical. Well-run, repeatable response plans minimize exposure and speed recovery. Automation enabled by the cloud allows for the implementation of sophisticated playbooks as code, with much quicker response times. By simply tagging an Amazon Elastic Compute Cloud (Amazon EC2) instance, for example, automation can isolate the instance, take a forensic snapshot, install analysis tools, connect the suspect instance to a forensic workstation, and cut a ticket to a cybersecurity analyst. The capabilities listed below facilitate the creation of automated processes that add speed and consistency to your incident response processes. Moreover, these tools allow you to maintain a history of the communications for use in a post-event review.
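The tag-driven playbook described above can be sketched in a few lines. The boto3 example below is an illustrative fragment, not AWS's reference implementation: the instance ID and the quarantine security group ID are placeholders, and tool installation, ticketing, and error handling are deliberately left out.

    # Given a suspect EC2 instance: snapshot its EBS volumes for forensics,
    # then swap its security groups for an isolated "quarantine" group.
    import boto3

    ec2 = boto3.client("ec2")

    def quarantine_instance(instance_id, quarantine_sg_id):
        volumes = ec2.describe_volumes(
            Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
        )["Volumes"]
        for volume in volumes:
            ec2.create_snapshot(
                VolumeId=volume["VolumeId"],
                Description=f"Forensic snapshot of {instance_id}",
            )

        # Replacing the instance's security groups cuts it off from the rest
        # of the VPC while preserving its in-memory and on-disk state.
        ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[quarantine_sg_id])

    quarantine_instance("i-0123456789abcdef0", "sg-0123456789abcdef0")  # placeholders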
While the cloud does offer capabilities to streamline and expedite the collection and dissemination of information, there is always a human element involved in response coordination. Cybersecurity analysis requires investigative action, forensics, and an understanding of the incident, and these necessarily require some level of human interaction. Though AWS services do not provide direct incident analytics, they do provide services to assist with creating a formalized process and assessing the breadth of impact.

AWS responsibility
AWS has implemented a formal, documented incident response policy and program. The policy addresses purpose, scope, roles, responsibilities, and management commitment. AWS utilizes a three-phased approach to manage incidents:
• Activation and notification phase
• Recovery phase
• Reconstitution phase

To ensure the effectiveness of the AWS incident management plan, AWS conducts incident response testing. This testing provides excellent coverage for the discovery of previously unknown defects and failure modes. In addition, it allows the Amazon Security and service teams to test the systems for potential customer impact and to further prepare staff to handle incidents, including detection and analysis, containment, eradication and recovery, and post-incident activities. The Incident Response Test Plan is run annually, in conjunction with the Incident Response Plan. AWS incident management planning, testing, and test results are reviewed by third-party auditors.

CSF core function: Recover
This section addresses the three categories that comprise the "Recover" function: Recovery Planning, Improvements, and Communications. It also summarizes the key AWS solutions that you can leverage to align to this function.

Customer responsibility
Customers are responsible for planning, testing, and performing recovery operations for their applications and data to maintain business continuity. The cause of an outage may come from many different sources. AWS services provide many advanced capabilities for self-healing and automated recovery. For example, the use of Auto Scaling groups across multiple Availability Zones allows the infrastructure to monitor the health of EC2 instances and rapidly replace a failed instance with a new one launched from an Amazon Machine Image (AMI). Additionally, Amazon CloudWatch, AWS Lambda, and other services and service capabilities can automate recovery actions, including everything from deploying an entire AWS environment and application, to failing over to a different AWS Region, to restoring data from backups, and more. Lastly, actions involving public relations, reputation management, and communicating recovery activities depend on how the organization handles the event that impacted its environment, which in this case is the customer's responsibility.
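To ground the Auto Scaling example in the paragraph above, here is a minimal boto3 sketch of an Auto Scaling group spread across two Availability Zones. The launch template name and subnet IDs are placeholders for resources assumed to already exist; sizes and health-check settings would be tuned to your workload rather than taken from this paper.

    # A failed instance (as judged by EC2 status checks) is terminated and
    # replaced automatically, in whichever of the two subnets/AZs has capacity.
    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-tier-asg",                       # placeholder
        LaunchTemplate={"LaunchTemplateName": "web-tier-template",  # placeholder
                        "Version": "$Latest"},
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
        # One subnet per Availability Zone; instances are balanced across them.
        VPCZoneIdentifier="subnet-0aaa0000000000001,subnet-0bbb0000000000002",
        HealthCheckType="EC2",
        HealthCheckGracePeriod=300,
    )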
AWS responsibility
The AWS resilient infrastructure, reliable automation, disciplined processes, and exceptional people are able to recover from events very quickly and with minimal (if any) disruption to customers. The AWS business continuity plan details the three-phased approach that AWS has developed to recover and reconstitute the AWS infrastructure:
• Activation and notification phase
• Recovery phase
• Reconstitution phase

This approach ensures that AWS performs system recovery and reconstitution efforts in a methodical sequence, maximizing the effectiveness of the recovery and reconstitution efforts and minimizing system outage time due to errors and omissions.

AWS maintains a ubiquitous security control environment across all Regions. Each data center is built to physical, environmental, and security standards in an active-active configuration, employing an N+1 redundancy model to ensure system availability in the event of component failure. Components (N) have at least one independent backup component (+1), so the backup component is active in the operation even when all other components are fully functional. To reduce single points of failure, this model is applied throughout AWS, including network and data center implementation. All data centers are online and serving traffic; no data center is "cold." In case of failure, there is sufficient capacity to enable traffic to be load-balanced to the remaining sites.

AWS services alignment with the CSF
AWS assessed the alignment of our cloud services to the CSF to demonstrate "security of the cloud." In an increasingly interconnected world, applying strong cybersecurity risk management practices to each interconnected system, to protect the confidentiality, integrity, and availability of data, is a necessity. AWS public and private sector customers fully expect that AWS employs best-in-class security to safeguard its cloud services and the data processed and stored in those systems. To effectively protect data and systems at hyperscale, security cannot be an afterthought; rather, it must be an integral part of AWS systems lifecycle management. This means that security starts at Phase 0 (systems inception) and is continuously delivered as an inherent part of the AWS service delivery model.

AWS exercises a rigorous, risk-based approach to the security of our services and the safeguarding of customer data. It enforces its own internal security assurance process for our services, which evaluates the effectiveness of the managerial, technical, and operational controls necessary for protecting against current and emerging security threats impacting the resiliency of our services. Hyperscale commercial cloud service providers such as AWS are already subject to robust security requirements in the form of sector-specific, national, and international security certifications (for example, FedRAMP, ISO 27001, PCI DSS, SOC, and so on) that sufficiently address the risk concerns identified by public and private sector customers worldwide.

AWS adopts a security high bar across all of its services, based on its "high watermark" approach for its customers. This means that AWS takes the highest classification level of data traversing and stored in its cloud services and applies those same levels of protection to all of its services and for all of its customers. These services are then queued for certification against the highest compliance bar, which translates to customers benefiting from elevated levels of protection for customer data processed and stored in the AWS Cloud.

As validated by our third-party assessor, the AWS solutions available today for our public and commercial sector customers align with the CSF Core. Each of these services maintains a current accreditation under FedRAMP Moderate and/or ISO 27001. When deploying AWS solutions, organizations can have the assurance that AWS services uphold risk management best practices defined in the CSF, and they can leverage these solutions for their own alignment to the CSF. Refer to Appendix A for the third-party attestation letter.
Conclusion
Public and private sector entities acknowledge the security value of adopting the NIST CSF in their environments. US federal agencies in particular are directed to align their cybersecurity risk management and reporting practices to the CSF. As US state and local governments, non-US governments, critical infrastructure operators, and commercial organizations assess their own alignment with the CSF, they need the right tools and solutions to achieve a secure and compliant system and organizational risk posture. You can strengthen your cybersecurity posture by leveraging AWS as part of your enterprise technology to build automated, innovative, and secure solutions that achieve the security outcomes in the CSF. You gain an additional layer of assurance from the fact that AWS services also employ the sound risk management practices identified in the CSF, as validated by a third-party assessor.

Appendix A – Third-party assessor validation

Contributors
Contributors to this document include:
• Min Hyun, Sr. Manager, Security/Compliance/Privacy
• Michael South, Principal Industry Specialist, ADFS DC Tech
• James Mueller, AWS Security Assurance: Gov FedRAMP, AWS Security

Document revisions
• October 12, 2021: Updated
• January 2019: First publication

Notes
1. https://www.nist.gov/industry-impacts/cybersecurity
2. Ibid.
3. The Federal Risk and Authorization Management Program (FedRAMP) is the US government's standardized, federal-wide program for the security authorization of cloud services. FedRAMP's "do once, use many times" approach was designed to offer significant benefits, such as increasing consistency and reliability in the evaluation of security controls, reducing costs for service providers and agency customers, and streamlining duplicative authorization assessments across agencies acquiring the same service.
4. ISO 27001/27002 is a widely adopted global security standard that sets out requirements and best practices for a systematic approach to managing company and customer information, based on periodic risk assessments appropriate to ever-changing threat scenarios. ISO 27018 is a code of practice that focuses on protection of personal data in the cloud. It is based on ISO information security standard 27002 and provides implementation guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Information (PII). It also provides a set of additional controls and associated guidance intended to address public cloud PII protection requirements not addressed by the existing ISO 27002 control set.
5. HIPAA includes provisions to protect the security and privacy of protected health information (PHI). PHI includes a very wide set of personally identifiable health and health-related data, including insurance and billing information, diagnosis data, clinical care data, and lab results such as images and test results. The HIPAA rules apply to covered entities, which include hospitals, medical services providers, employer-sponsored health plans, research facilities, and insurance companies that deal directly with patients and patient data. The HIPAA requirement to protect PHI also extends to business associates.
6. PHI includes a very wide set of personally identifiable health and health-related data,
including insurance and billing information, diagnosis data, clinical care data, and lab results such as images and test results.
7. https://www.fsscc.org/About FSSCC
8. The AWS Well-Architected Framework documents architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a set of foundational questions that allow you to understand whether a specific architecture aligns well with cloud best practices. https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html
9. https://www.cisecurity.org/controls/
10. The Payment Card Industry Data Security Standard (also known as PCI DSS) is a proprietary information security standard administered by the PCI Security Standards Council (https://www.pcisecuritystandards.org/), which was founded by American Express, Discover Financial Services, JCB International, MasterCard Worldwide, and Visa Inc. PCI DSS applies to all entities that store, process, or transmit cardholder data (CHD) and/or sensitive authentication data (SAD), including merchants, processors, acquirers, issuers, and service providers.
11. Available online and classroom training can be found at https://aws.amazon.com/training. There are also several books covering many aspects of AWS, which can be found at https://www.amazon.com by searching for "AWS". AWS whitepapers can be found at https://aws.amazon.com/whitepapers.
|
General
|
consultant
|
Best Practices
|
Optimizing_ASP.NET_with_C_AMP_on_the_GPU
|
Optimizing ASP.NET with C++ AMP on the GPU
High-Performance Parallel Code in the AWS Cloud
Scott Zimmerman
April 2015

This paper has been archived. For the latest technical content about the AWS Cloud, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2015 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the "license" file accompanying this file. This code is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Portions of the code were developed by Heaton Research and are licensed under the Apache License, Version 2.0, available here: https://www.apache.org/licenses/LICENSE-2.0.html. Portions of the code were developed by Microsoft Corporation and are licensed under the Microsoft MS-PL, available here: http://opensource.org/licenses/ms-pl

Contents
Abstract
Introduction
Introduction to C++ AMP
Introduction to Amazon EC2
Install the AWS Toolkit for Visual Studio
Set up the Amazon EC2 Windows Server Instance with NVIDIA GPU
Create a Security Group with the AWS Toolkit
Launch G2 Instance in Amazon EC2 with the AWS Toolkit
Connect to the Instance to Install the NVIDIA Driver and Visual C++ Redistributable
Comparing the Performance of Various Matrix Multiplication Algorithms
Working with the Code
Deploying the Web Application with AWS Elastic Beanstalk
Using ebextensions with AWS Elastic Beanstalk
Model Code for Data Passed Between Controller and View
Accessing the Model in the View
Controller Code to Invoke Each Algorithm and Populate the Model
C# Basic Serial (CPU)
C# Optimized Serial (CPU)
C# Parallel with TPL (CPU)
C++ Basic Serial (CPU)
C++ Parallel with PPL (CPU)
C++ Parallel with AMP (GPU)
C++ Parallel with AMP Tiling (GPU)
Conclusion
Further Reading
Notes

Abstract
This whitepaper is intended for Microsoft Windows developers who are considering writing high-performance parallel code in Amazon Web Services (AWS) using the Microsoft C++ Accelerated Massive Parallelism (C++ AMP) library. This paper describes an ASP.NET Model-View-Controller (MVC) web application, written in C#, that
invokes C++ functions running on the graphics processing unit (GPU) for matrix multiplication Since matrix multiplication is of order Ncubed multiplying two 1024 x 1024 matrixes requires over one billion multiplications and is therefore an example of a computeintensive operation that would be a good candidate for GPU programming This paper shows how to use AWS Elastic Beanstalk and the AWS Toolkit for Visual Studio to launch a Microsoft Windows Server instance with an NVIDIA GPU in the Amazon Elastic Compute Cloud (Amazon EC2) on AWS Introduction Certain types of parallel algorithms can run hundreds of times faster on a GPU than similar serial algorithms on a CPU This paper describes matrix multiplication as one example of a parallel algorithm that is suitable for GPU programming Performance increases of this order are obviously very attractive for certain workloads but there are several technologies that must be understood and integrated in order to achieve these gains First you’ll need a GPU programming language or library The next section briefly discuss es the advantages of Microsoft C++ AMP and this whitepaper includes working code examples written in C++ AMP Second this paper will describe how to use the AWS Toolkit for Visual Studio to launch Amazon EC2 instances with a GPU connect to them remotely and install the NVIDIA GPU graphics driver Third although the focus here is on C++ programming we’ll need a simple user interface to display results and it’s typically easier to do this in C# than in C++ So this whitepaper shows a small program written in C# that uses ASPNET MVC to invoke a function written in C++ AMP Fourth bringing ASPNET MVC into the solution means you also need to add the Internet Information Services (IIS) role to Windows Server and deploy the web application This will be accomplished from inside Visual Studio with the AWS Elastic Beanstalk service Of course it’s not necessary to develop a web front end ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 5 of 42 or use C# to take advantage of C++ AMP but that is a common use case so this whitepaper covers how to integrate those technologies with C++ and Windows Server running on Amazon EC2 Figure 1 shows how the ASPNET MVC architecture spans the physical tiers in this application and the coding technologies that will be used on each tier Note that this simple application doesn’t include a data tier Also the application tier is only a logical concept in this scenario It is a way of looking at the C# and C++ algorithms as distinct from the web application even though they run on the CPU or GPU of the same web server virtual machine Figure 1: The ASPNET MVC Architecture and Languages Used This application starts with a basic matrix multiplication function in C# to show the simplest way to implement the solution Then the program is optimized six times each time adding a technology and comparing performance Subsequent sections of this paper will describe how each variation is coded and how to set up the technologies Download the source code and Visual Studio solution 1 Here’s an overview of the seven matrix multiplication algorithms that will be illustrated: Algorithm Description C# Basic Serial (CPU) Written in C# to serve as a performance baseline on which we hope to improve by using C++ C# Improved Serial (CPU) Optimizes the order of loop indexes to improve performan ce ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 6 of 42 Algorithm 
• C# Parallel with TPL (CPU): Uses the .NET Framework Task Parallel Library (TPL). When run on a machine with multiple cores, this multithreaded algorithm improves performance compared with the serial C# version.
• C++ Basic Serial (CPU): Converts the basic serial algorithm to C++ to demonstrate how to invoke C++ code from ASP.NET MVC C# code running on IIS.
• C++ Parallel with PPL (CPU): Rewrites the serial C++ function to make it parallel by using the Microsoft Parallel Patterns Library (PPL).
• C++ Parallel with AMP (GPU): Rewrites the parallel C++ function to run on the GPU using basic techniques with C++ AMP.
• C++ Parallel with AMP Tiling (GPU): Rewrites the AMP C++ function to use AMP with tiling. Implementing tiling algorithms takes a bit more work than basic AMP, but if done carefully it can improve performance.

The performance comparisons illustrated in this application are not meant to be scientific benchmarks, but they may provide useful insight into the potential relative performance of the various techniques. The algorithms are not intended to be optimal. If you really need to do fast matrix multiplications, you should look into tested and optimized libraries such as Basic Linear Algebra Subprograms (BLAS) or the Linear Algebra Package (LAPACK).

Introduction to C++ AMP
Until now, programming the GPU has been tedious, nonportable, or limited to the C language. Microsoft C++ AMP enables Visual C++ developers to optimize compute-intensive programs in a highly productive way. AMP is an open specification for an extension to standard C++ that greatly simplifies porting parallel algorithms from the CPU to the GPU. AMP is also elegant and takes advantage of modern C++ features such as lambdas. You'll see that after taking the first step with AMP, parallel code still looks similar to the original serial code.

The popular OpenCL library is portable across multiple operating systems and GPU hardware vendors. It has been around longer than C++ AMP and is recognized for providing very fast runtime performance. However, OpenCL is a C-language library that misses out on modern C++ features. AMP is portable across GPU hardware, but because it's designed for DirectX, it runs on Windows. In 2012, Intel released a free download called Shevlin Park as a proof of concept that enables C++ AMP code to run on top of OpenCL, which means your C++ AMP code can run on Linux and other operating systems. In 2013, the HSA Foundation published an open-source C++ AMP compiler² that outputs OpenCL code. This also enables you to write C++ AMP code to run on Linux and other operating systems. Microsoft maintains a C++ AMP Algorithms Library modeled after the Standard Template Library,³ and a few dozen C++ AMP sample projects on the AMP blog.⁴

Introduction to Amazon EC2
Amazon EC2 is a service that allows customers to run Windows Server and Linux in the AWS cloud. Amazon EC2 provides over 30 types of compute instances,⁵ including memory-optimized, storage-optimized, and GPU-enabled instances. The G2 double extra-large (g2.2xlarge) instance type has eight virtual CPUs and an NVIDIA GPU with 1,536 CUDA cores and 4 GB of video memory. CUDA is a parallel computing platform and programming model invented by NVIDIA.⁶

Install the AWS Toolkit for Visual Studio
This paper assumes that you have Visual Studio Professional 2013 or Visual Studio Community 2013 already installed on your computer. It is possible to write the code with Visual Studio Express; however, that edition
doesn’t support plugins such as the AWS Toolkit for Visual Studio The AWS Toolkit makes it very convenient to perform several account management tasks without ever leaving Visual Studio You’ll use the AWS Toolkit extensively to launch and administer an Amazon EC2 instance in AWS although it’s also possible to do that with the Amazon EC2 console in a web browser Please download and install the AWS Toolkit for Visual Studio7 from the AWS website For this whitepaper please ensure you have at least version 1810 of the AWS Toolkit for Visual Studio After installing the toolkit you should see an option for the AWS Explorer appear in the Visual Studio View menu Set up the Amazon EC2 Windows Server Instance with NVIDIA GPU This paper assumes that you have an AWS account with permission to launch Amazon EC2 instances AWS provides a limited free tier8 for one year for new customers to experiment with cloud computing The free tier covers several ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 8 of 42 services including Amazon EC2 However it applies to the T2micro instance type not the G2 instance type Important Please be aware that there is a cost to run the G2 instance type used in this paper This profile in the AWS Simple Monthly Calculator shows the estimated cost to run one on demand G2 instance with Windows Server nonstop for a month Note that significant cost savings can be achieved by using spot or reserved instances rather than ondemand instances and by stopping the instance when it’s not in use The following sections explain how to use the AWS Toolkit for Visual Studio to launch a G2 instance with Windows Server Create a Security Group with the AWS Toolkit Microsoft Remote Desktop Connection (RDC) is useful for manually administering Windows Server remotely but the NVIDIA display driver that you need for the GPU and the Remote Desktop Protocol (RDP) used by RDC are not compatible RealVNC offers a free version of their VNC Server software that enables remote connections graphically and it uses a different protocol that is compatible with the NVIDIA driver So before you install the NVIDIA driver you will need to install VNC Server on the instance Then you can disconnect from RDP reconnect over VNC and install the NVIDIA driver Don’t worry about installing that now; the detailed instructions are provided later RDP uses port 3389 VNC Server uses port 5900 And of course the web application will use port 80 The default security group when launching a Windows Server instance only opens port 3389 You could simply add rules to the default group after you launch the instance but instead you’ll create your own custom security group and give it a name You’ll also use this custom security group later when you deploy the web application with AWS Elastic Beanstalk To create a security group in the AWS Toolkit: 1 In Visual Studio on the View menu click AWS Explorer (or press Ctrl+K A) 2 Expand Amazon EC2 and doubleclick Security Groups Your security groups are displayed in the right pane On the menu bar above that pane ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 9 of 42 click Create Security Group Fill in the Name and Description and leave the No VPC option selected as shown in Figure 2 Click OK Figure 2: Creating a Security Group 3 Step 2 creates an empty security group Now let’s add the rules to it I n the lower pane click Add Permission to open the Add IP Permission dialog box as shown in Figure 3 Leave Protocol as TCP 
To create a security group in the AWS Toolkit:
1. In Visual Studio, on the View menu, click AWS Explorer (or press Ctrl+K, A).
2. Expand Amazon EC2 and double-click Security Groups. Your security groups are displayed in the right pane. On the menu bar above that pane, click Create Security Group. Fill in the Name and Description and leave the No VPC option selected, as shown in Figure 2. Click OK.

Figure 2: Creating a Security Group

3. Step 2 creates an empty security group. Now let's add the rules to it. In the lower pane, click Add Permission to open the Add IP Permission dialog box, as shown in Figure 3. Leave Protocol as TCP. For Port Range, type 5900 for both the Start and End fields. Click OK.

Caution: For RDP and VNC, it's highly advisable to limit the Source CIDR to your local IP address, with either /32 or an appropriate subnet of your private network appended to the address. You may use the estimated IP address shown in the Add IP Permission dialog box (Figure 3), or you can type "what is my IP" into a search engine to see your public IP address. AWS creates a default RDP rule with Source CIDR as 0.0.0.0/0 (which means the whole Internet) to simplify the experience for new users who are launching an instance. But opening VNC and RDP ports to the whole Internet means that hackers can try to guess your administrator password to gain control of your server.

Figure 3: Adding a Rule in the Security Group

4. Repeat step 3 to add port 3389 (for Protocol, you can select RDP).
5. Repeat step 3 once more to add port 80 (for Protocol, you can select HTTP). With your security group selected in the top pane, your rules should appear in the middle pane, similar to Figure 4.

Figure 4: You Should Have Three Rules in Your Security Group

Note: This security group will serve you while you are installing software on the Amazon EC2 instance. After you complete that task and create an Amazon Machine Image (AMI), AWS Elastic Beanstalk will apply an automatic security group with only ports 22 and 80 open. So if you need to manually administer your Amazon EC2 instance after deploying with AWS Elastic Beanstalk, you must add port 5900 to that security group.

Launch G2 Instance in Amazon EC2 with the AWS Toolkit
Now that you have a custom security group, you're ready to launch a G2 instance:
1. In Visual Studio, on the View menu, click AWS Explorer (or press Ctrl+K, A). AWS Explorer appears as in Figure 5, where it's shown with the Amazon EC2 service expanded.

Figure 5: AWS Explorer in AWS Toolkit

2. In AWS Explorer, expand Amazon EC2 as shown in Figure 5. Right-click Instances and then click New Instance.
3. In the Quick Launch wizard, click Advanced. AWS has created special AMIs to optimize the deployment time for IIS and the .NET Framework with AWS Elastic Beanstalk. The wizard lets you pick one of those AMIs as your base image. After you get your instance prepared with the NVIDIA drivers, you'll save your own AMI.
4. In the Launch new Amazon EC2 Instance dialog box (see Figure 6), type net beanstalk in the search text box (the third Viewing field). Then change the setting of the first field from Owned by me to Amazon Images. Do it in that order; otherwise it takes longer. Click the Name column heading to sort the AMIs by name. Expand the Description column so you can see the dates the images were created. Scroll down to select the most recently created Windows Server 2012 R2 (not core) image. At the time this screenshot was taken, the latest version of the Beanstalk Container was v2026. However, new images are released from time to time to incorporate the latest Windows updates from Microsoft, so you'll likely see a newer version. Now click Next.

Figure 6: Choosing an AMI

5. In the AMI Options dialog box, in the Instance Type list, select GPU Double Extra Large. Click Next.
6. In the Storage dialog box, click Next.
7. In the Tags dialog box, provide a name for the instance so it's easy to distinguish it.
8. In
the Security dialog box (Figure 7) click Create New Key Pair and give it a name Choose the security group you created earlier (this is very important) ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 13 of 42 Figure 7: Choosing the Security Group You Created Earlier 9 Click Launch 10 In the AWS Explorer left pane under Amazon EC2 doubleclick Instances That will display the panel of your instances and you should see that your new instanc e is launching The status will show as “pending” for a few minutes and then it will change to “running” You can continue to the next step while the launch is pending 11 You’ll need an Elastic IP address for this instance so you can easily reconnect to it if you stop and restart the instance Rightclick the instance (even if the status is pending) and then click Associate Elastic IP In the Attach Elastic IP to Instance dialog box (Figure 8) click Create new Elastic IP and then click OK Figure 8: Creating a New Elastic IP ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 14 of 42 Note Remember there’s an hourly cost for the instance while it’s running so it’s a good idea to stop (not terminate) the instance and restart it if you’re not able to finish all the steps in this whitepaper in one session Connect to the Instance to Install the NVIDIA Driver and Visual C++ Redistributable In this section you’ll download and install VNC Server on the instance using Microsoft Internet Explorer But before you can do that you’ll need to turn off the Internet download protection feature that is enabled by default in Internet Explorer 11 on Windows Server 2012 R2 While you’re on the instance you’ll also download and install the Visual C++ 2013 redistributable package Doing this manually is simpler than creating a setup program with a merge module The reason you’ll do this now is so you can create a fully prepared AMI of the instance that you can use later to deploy your web application with AWS Elastic Beanstalk For some of the steps in this section you’ll use the AWS To olkit on your local workstation; for others you’ll use the Amazon EC2 instance connected through RDC or VNC The transitions will be mentioned as needed After the status of your instance changes from “pending” to “running” follow these steps i n the AWS Toolkit: 1 The AWS Toolkit has a convenient option to log in directly with the key pair we created previously without requiring you to enter the administrator password This works until you change the password on the instance which you’ll need to do to connect with VNC Rightclick the instance in the AWS Toolkit and then click Open Remote Desktop In the Open Remote Desktop dialog box (Figure 9) leave the Use EC2 keypair to log on option selected and then click OK The toolkit automatically decrypts the AWSgenerated password from the key pair passes it to Microsoft RDC launches RDC and then logs you into the Amazon EC2 instance ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 15 of 42 Figure 9: Open Remote Desktop For steps 2 7 you’ll use RDC connected to the instance During steps 2 12 if you get a popup message indicating that Windows has updates to install on the instance you should go ahead and apply those so they’ll be included in the AMI you’ll create in step 14 If Windows Update requires a reboot restart your machine and then resume th ese instructions after reconnecting through RDC (or VNC Viewer) 2 You must first change the Windows 
administrator password on the instance to a password you can remember In Windows Server 2012 R2 click the Windows icon (Start button) in the lowerleft corner of your screen to get to the Start menu Click Administrative Tools Double click Computer Management Expand Local Users and Groups Click once on Users Rightclick Administrator and then click Set Password Click Proceed Enter the new password and then click OK Now the AWSgenerated password is obsolete Close Computer Management 3 To enable file downloads in Internet Explorer click the Windows Start button again Click Server Manager In the left pane click Local Server You should see that Internet Explorer enhanced security configuration is turned on by default Click to turn it off for administrators and then click OK Close Server Manager 4 To run Visual C++ code you’ll need to install the Visual C++ 2013 redistributable from Microsoft It includes the C++ runtime and the AMP DLL file Click the Windows Start button again Click Internet Explorer Browse to the Microsoft download page for Visual C++ Redistributable ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 16 of 42 Packages for Visual Studio 2013 9 Click Download and choose the file vcredist_x64exe from the list of downloads Run the program after downloading it 5 Open Internet Explorer Browse to the RealVNC website 10 Download VNC Server for Windows The free version is adequate for this whitepaper but you will need to register with RealVNC to get a license Install VNC Server (you don’t need to install the Printer Driver or VNC Viewer) 6 On the Windows Start menu click All Programs to display all installed applications Under VNC click Enter VNC Server License Key Go through the VNC wizard to license your server software 7 Now you can close RDC but leave the instance running Now that you will no longer be using RDP with the instance we recommend that you delete the security group rule that permits RDP traffic to the instance You still need to leave port 5900 open for VNC 8 Install and launch the VNC Viewer program on your local workstation11 It prompts you for the VNC Server public IP address To retrieve the IP address rightclick your instance in AWS Explorer and then click Properties In the Properties dialog box (Figure 10) rightclick the Elastic IP value and then click Copy Paste the address into the VNC Server address box in VNC Viewer Figure 10: Getting the Elastic IP Address from the Instance Properties ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 17 of 42 9 When you connect to the instance in VNC Viewer it will prompt you to press Ctrl+Alt+Delete to log in Ordinarily that keystroke sequence is captured by your local workstation The trick is to slide your mouse toward the top center of the VNC Viewer window That will drop down the toolbar where you can click the Ctrl+Alt+Delete button to transmit the keystroke to the remote machine VNC Viewer shows the remote machine prompting you for your Windows administrator password Type the password that you set in Windows when you logged in previously with RDC Do steps 10 12 on the instance while connected through VNC 10 Open Internet Explorer to download the NVIDIA graphics driver As of this writing the latest version on the NVIDIA support site is NVIDIA GRID K520/K340 Release 33412 (Figure 11) Although the page title says 334 the version is 335 Regardless you should be fine if you get the latest version When the NVIDIA installation completes it prompts you to 
reboot You can save time if you complete the next few steps first Figure 11: Installing the NVIDIA Graphics Driver 11 Don’t reboot after installing the NVIDIA graphics driver Instead on the Windows Start screen type ec2 and click to run the EC2Config service To make the image compatible with AWS Elastic Beanstalk select the User Data box on the General tab (Figure 12) and choose Random for the Administrator Password on the Image tab (Figure 13) Click Apply and then click OK ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 18 of 42 Figure 12: Checking User Data in EC2Config Figure 13: Checking Random Password in EC2Config 12 Click the Windows Start button Click Administrative Tools Double click Computer Management Click Device Manager Under display adapters you should see both the NVIDIA driver and the Microsoft Basic Display Adapter as shown in Figure 14 Rightclick Microsoft Basic Display Adapter and then click Disable ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 19 of 42 Figure 14: Disabling the Microsoft Basic Display Adapter in Device Manager 13 Now on your local workstation in AWS Explorer expand Amazon EC2 Instances Rightclick your GPU instance and choose Stop (do not choose Terminate ) This will automatically disconnect your VNC session Later you’ll use AWS Elastic Beanstalk to start a new instance when you deploy the code 14 After the instance status changes from stopping to stopped rightclick your GPU instance again in AWS Explorer and then click Create Image (EBS AMI) Give the image a name and description and then let it run in the background There is a small storage charge for the images you save in AWS but it’s convenient to be able to reuse the images with everything pre installed if you decide to terminate the instance Whenever you make configuration changes or apply Windows Update on your instance in the future you should create a new image and then optionally deregister your older images 15 After the image is created look in AWS Explorer under Amazon EC2 AMIs and jot down the AMI ID of the image you just created The ID is casesensitive Now that you have your own AMI you’re ready to switch hats and start working with the code in Visual Studio ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 20 of 42 Comparing the Performance of Various Matrix Multiplication Algorithms Before you deploy the code with AWS Elastic Beanstalk here’s a screenshot of the web application after it completes running The user interface is simple: it consists of an HTML table listing the timing and relative performance (versus the baseline) of each algorithm as shown in Figure 15 Figure 15: The ASPNET MVC Application Displaying the Results You’ll notice in the UI that the matrix size used is 1024 x 1024 There are 1536 CUDA cores on the NVIDIA GPU instance type in Amazon EC2 Because the outer loop of the algorithm will execute in parallel once for each row of the matrix 1024 was selected as the matrix size to take advantage of a large number of the CUDA cores Also note that the matrix size must be a multiple of the tile size used in the AMP tiling algorithm You may also notice a couple of curiosities in the relative performance of the algorithms First the performance of the basic C++ algorithm is almost identical to th e performance of the basic C# algorithm That’s interesting because many ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 21 of 42 
developers suspect that C++ is about twice as fast as C# A possible explanation for this might be that the C# code is using ragged arrays which is a known optimization for the NET Framework Another curiosity is that the parallel C# code that uses TPL is about seven times faster than the serial C# code but the parallel C++ code that uses PPL is only about four times faster than the serial C++ code Since there are eight virtual cores on the instance we might expect a parallel algorithm to be about seven times faster There are ways to get more out of PPL but that’s outside the scope of this paper Working with the Code If you haven’t downloaded the Visual Studio solution and sour ce code for this whitepaper yet you should download it now 13 Open the CSharpMatrixMultiply solution in Visual Studio The solution includes two projects The ASPNET MVC project is adapted from the basic project that was created with the Visual Studio New Project wizard The following sections explain the C# code and C++ code in the projects The C# project has a dependency on the C++ DLL Deploying the Web Application with AWS Elastic Beanstalk To deploy the application by using the image and security group you created earlier: 1 (Recommended) Switch the build configuration in Visual Studio from Debug to Release 2 In Solut ion Explorer rightclick the CSharpMatrixMultiply project (not the CSharpMatrixMultiply solution ) and then click Publish to AWS 3 Click Next to accept the defaults in the first screen 4 In the Application Environment dialog box you must provide an environment name but the default name for this project is too long so just shorten it until the red border disappears from the text box (Figure 16) Click Next ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 22 of 42 Figure 16: Specifying the Application Environment Details 5 In the Amazon EC2 Launch Configuration screen (Figure 17) verify that Windows Server 2012 R2 is selected For the instance type select GPU Double Extra Large Select your key pair Finally you must provide the AMI ID of the image you created previously You can find that ID in the Amazon EC2 console under Images or in AWS Explorer under Amazon EC2 AMIs Note that you must enter the ID in lowercase eg ami 12345678 Click Next Next and Deploy ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 23 of 42 Figure 17: Picking the G2 instance type and Your Custom AMI ID To ensure smooth builds and deployments of this solution with AWS Elastic Beanstalk make sure that the version of your AWS Toolkit for Visual Studio is 1920 or higher The first time you deploy your project with AWS Elastic Beanstalk it can take 5 10 minutes When it’s done you may notice that the console or the AWS Toolkit temporarily reports that the deployment is complete but with errors This can be disconcerting but if you wait another minute you should see the status change to success To run your application open AWS Explorer and expand the AWS Elastic Beanstalk node Fully expand your environment name and doubleclick it to see the status pane displayed The status will show as “L aunching ” for a few minutes When the status changes to “Environment is healthy” (again there could be a delay after it temporarily reports that the environment is unhealthy) click the URL at the top of the status pane This should launch your default browser and now you get to wait another couple of minutes while the application performs all seven matrix multiplications in the 
background To keep things simple the web application does not display a progress bar or use an AJAX framework (such as KnockoutJS) for partial updates (In your production code you would certainly ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 24 of 42 want to consider implementing a feature for the user to see the progress of the computation running in the background and to cancel it if desired) After running your application you may change your program and need to deploy it again Redeployment is much faster than an initial deployment In Visual Studio Solution Explorer rightclick the menu for the web project (again rightclick the project not the solution ) and then click Republish to Environment Using ebextensions with AWS Elastic Beanstalk When you run the web application the C++ DLL gets loaded into the IIS process on the web server This locks the file on the server disk which can prevent AWS Elastic Beanstalk from being able to overwrite it with a new version when you redeploy your application One workaround is to connect through VNC and restart the IIS service Another solution is to use the ebextensions feature that is built into AWS Elastic Beanstalk In Solution Explorer notice the folder in the C# project called ebextensions (prefaced by a dot) Any text files in this folder that have a file extension of config will be executed on the server after the deployment The only tricky thing is that Visual Studio opens config files in a different editor that doesn’t preserve line breaks so you need to rightclick the file and choose Open With Source Code (Text) Editor Here is the file: commands: restart iis: command: iisreset /restart waitForCompletion:0 This ebextensions file instructs AWS Elastic Beanstalk to run the iisreset command on the server For more information see the blog post “Customizing Windows Elastic Beanstalk Environments” Part 114 and Part 215 on the AWS NET Development blog ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 25 of 42 Model Code for Data Passed Between Controller and View The following code is the Model class in the web application In this application the data flows one way from the Controller to the View public class TaskResults { public int NumAlgorithms { get; set; } public int Dimension { get; set; } public string[] Description { get; set; } public string[] Time { get; set; } public string[] RelativeSpe edLabel { get; set; } public int[] PercentOfMax { get; set; } public string StatusMessage { get; set; } public string AMPDeviceName { get; set; } public TaskResults(int _NumAlgorithms) { NumAlgorithms = _NumAlgorithms; Description = new string[NumAlgorithms]; Time = new string[NumAlgorithms]; RelativeSpeedLabel = new string[NumAlgorithms]; PercentOfMax = new int[NumAlgorithms]; StatusMessage = stringEmpty; AMPDeviceName = stringE mpty; } } Accessing the Model in the View The following code is the first few lines of the file Indexcshtml You see that the TaskResults object created in the Controller is retrieved through the MVC ViewBag and then the @ syntax with the Razor Engine is used on the viewdata object to insert data (eg @viewdataAMPDeviceName ) from the Model into HTML ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 26 of 42 @using CSharpMatrixMultiplyModels; @{ ViewBagTitle = "Home Page"; var viewdata = ViewData["TaskResults"] as TaskResults; } <link href="//Content/MyStylescss" rel="stylesheet" type="text/css" /> <h3 style="font 
family:verdana">Matrix Multiplication Results (@viewdataDimension X @viewdataDimension)</h3> <h3 style="font family:verdana">AMP Default Device: @viewdataAMPDeviceNam e</h3> <h3 style="font family:verdana; color:red">@viewdataStatusMessage</h3> Controller Code to Invoke Each Algorithm and Populate the Model The following code is the main Controller class in the web application It invokes each algorithm (except the fir st one) three times calculates the average elapsed time and stores the results in the TaskResults class (the Model) enum Algorithms // this must exactly duplicate enum in C++ { CSharp_Basic = 0 CSharp_ImprovedSerial = 1 CSharp_TPL = 2 CPP_Basic = 3 CPP_PPL = 4 CPP_AMP = 5 CPP_AMPTiling = 6 }; delegate float[][] CSharpMatrixMultiply(float[][] A float[][] B int N); const int TESTLOOPS = 3; const int N = 1024; // matrix size must be multiple of C++ tilesi ze public unsafe ActionResult Index() { int NumAlgorithms = EnumGetNames(typeof(Algorithms))Length; var rand = new Random(); double[] durations = new double[NumAlgorithms]; ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 27 of 42 var TaskResults = new TaskResults(NumAlgorithms); TaskResultsDescription[0] = "C# Basic Serial (CPU)"; TaskResultsDescription[1] = "C# Improved Serial (CPU)"; TaskResultsDescription[2] = "C# Parallel with TPL (CPU)"; TaskResultsDescription[3] = "C++ Basic Serial (CPU)"; TaskResultsDescription [4] = "C++ Parallel with PPL (CPU)"; TaskResultsDescription[5] = "C++ Parallel with AMP (GPU)"; TaskResultsDescription[6] = "C++ Parallel with AMP Tiling (GPU)"; TaskResultsDimension = N; TaskResultsNumAlgorithms = NumAlgorithms; ViewData["TaskResults"] = TaskResults; // According to // http://wwwheatonresearchcom/content/choosing bestcarraytype matrixmultiplication // ragged arrays perform better in C# than 2D arrays for matrix multiplication float[][] A = Cre ateRaggedMatrix(N); float[][] B = CreateRaggedMatrix(N); FillRaggedMatrix(A N rand); FillRaggedMatrix(B N rand); // C++ doesn't need ragged arrays for performance and it's easier to marshall // and process the data as 2D arrays float[] A2 = new float[N N]; float[] B2 = new float[N N]; // for comparing results use the same random data in C++ as in C# CopyRaggedMatrixTo2D(A A2 N); CopyRaggedMatrixTo2D(B B2 N); // warm up AMP and get GPU name before timing var sb = new StringBuilder(256); CPPWrapperWarmUpAMP(sb sbCapacity); TaskResultsAMPDeviceName = sbToString(); //*** Basic C# Save this original result for future comparisons long start = DateTimeNowTicks; float[][] original = MatrixMultiplyBasic(A B N); long stop = DateTimeNowTicks; durations[0] = (stop start) / 100000000; if (!RunCSharpAlgorithm( original A B N MatrixMultiplySerial "C# Improved Serial" (int)AlgorithmsCSharp_ImprovedSerial TaskResults ref durations)) { return PartialView(); ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 28 of 42 } if (!RunCSharpAlgorithm( original A B N MatrixMultiplyTPL "C# TPL" (int)AlgorithmsCSharp_TPL TaskResults ref durations)) { return PartialView(); } if (!RunCPPAlgorithm(original A2 B2 N "C++ Basic" (int)AlgorithmsCPP_Basic TaskResults ref duratio ns)) { return PartialView(); } if (!RunCPPAlgorithm(original A2 B2 N "C++ PPL" (int)AlgorithmsCPP_PPL TaskResults ref durations)) { return PartialView(); } if (!RunCPPAlgorithm(original A2 B2 N "C++ AMP" (int)AlgorithmsCPP_AMP TaskResults ref durations)) { return PartialView(); } if (!RunCPPAlgorithm(original A2 B2 N "C++ AMP Tiling" 
(int)AlgorithmsCPP_AMPTiling TaskResults ref durations)) { return PartialView(); } var slowest = durationsMax(); var fastest = durationsMin(); // populate the Model for the HTML table in the View for (int k = 0; k < NumAlgorithms; k++) { TaskResultsTime[k] = stringFormat ("{0:0000}" durations[k]); TaskResultsRelativeSpeedLabel[k] = stringFormat("{0:00}X" slowest / durations[k]); TaskResultsPercentOfMax[k] = (int)(fastest / durations[k] * 1000); } return PartialView(); } ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 29 of 42 bool RunCSharpAlgorithm( float[][] original float[][] A float[][] B int N CSharpMatrixMultiply function string FunctionName int AlgorithmIndex TaskResults results ref double[] durations) { double[] test_durations = new double[TESTLOOPS]; for (int k = 0; k < TESTLOOPS; k++) { long start = DateTimeNowTicks; float[][] C = function(A B N); test_durations[k] = (DateTimeNowTicks start) / 100000000; if (!CompareMatrixes(original C N)) { resultsStatusMessage = "Error verifying " + FunctionName; return false; } } durations[AlgorithmIndex] = test_durationsAverage(); return true; } unsafe bool RunCPPAlgorithm( float[][] original float[] A2 float[] B2 int N string FunctionName int AlgorithmIndex TaskResults results ref double[] durations) { double[] test_durations = new double[TESTLOOPS]; for (int k = 0; k < TESTLOOPS; k++) { // allocate memory in C# to simplify marshalling/deallocation float[] C2 = new float[N N]; long start = DateTimeNowTicks; fixed (float* pA2 = &A2[0 0]) fixed (float* pB2 = &B2[ 0 0]) fixed (float* pC2 = &C2[0 0]) { var error = new StringBuilder(1024); // allocate string memory ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 30 of 42 if (!CPPWrapperCallCPPMatrixMultiply(AlgorithmIndex pA2 pB2 pC2 N error errorCapacity) ) { resultsStatusMessage = errorToString(); return false; } } if (!CompareRaggedMatrixTo2D(original C2 N)) { resultsStatusMessage = "Error verifying " + Funct ionName; return false; } test_durations[k] = (DateTimeNowTicks start) / 100000000; } durations[AlgorithmIndex] = test_durationsAverage(); return true; } // Standard algorithm float[][] MatrixMultiplyBasic(fl oat[][] A float[][] B int N) { float[][] C = CreateRaggedMatrix(N); // C is the result matrix for (int i = 0; i < N; i++) for (int j = 0; j < N; j++) for (int k = 0; k < N; k++) C[i][j] += A[i][k] * B[k][j]; return C; } // This function was developed by Heaton Research and is licensed under the Apache License Version 20 // available here: https://wwwapacheorg/licenses/LICENSE 20html // Improve the basic serial algorithm with optimized index order float[][] MatrixMultiplySerial(float[][] A float[][] B int N) { float[][] C = CreateRaggedMatrix(N); // according to http://wwwheatonresearchcom/content/choosing bestc arraytypematrixmultiplication // this ikj index order performs the best for C# matrix multiplication for (int i = 0; i < N; i++) { float[] arowi = A[i]; float[] crowi = C[i]; for (int k = 0; k < N; k++) { float[] browk = B[k]; float aik = arowi[k]; for (int j = 0; j < N; j++) { ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 31 of 42 crowi[j] += aik * browk[j]; } } } return C; } // Parallel algorithm using TPL float[][] MatrixMultiplyTPL(float[][] A float[][] B int N) { float[][] C = Create RaggedMatrix(N); ParallelFor(0 N i => { float[] arowi = A[i]; float[] crowi = C[i]; for (int k = 0; k < N; k++) { float[] browk = B[k]; float aik = arowi[k]; for (int j = 
0; j < N; j++) { crowi[j] += aik * browk[j]; } } }); return C; } C# Basic Serial (CPU) A basic algorithm for matrix multiplication is used as the baseline for the algorithms in subsequent sections There is only one optimization applied in this basic algorithm When using twodimensional arrays in the NET Framework method calls would ordinarily be made to the Array class Since the inner loop executes so many times that’s expensive But there is a simpl e workaround: use ragged arrays For example instead of declaring a 10x20 array like this: double[ ] MyArray = new double[1020]; declare it like this and create each row as a separate array of 20 columns in a for loop: double[][] MyArray = new double[10 ][]; for (int i = 0; i < 10; i++) ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 32 of 42 MyArray[i] = new double[20]; Here is the code for basic matrix multiplication This will execute in serial fashion on the CPU: float[][] MatrixMultiplyBasic(float[][] A float[][] B int N) { float[][] C = CreateRagged Matrix(N); // C is the result matrix for (int i = 0; i < N; i++) for (int j = 0; j < N; j++) for (int k = 0; k < N; k++) C[i][j] += A[i][k] * B[k][j]; return C; } C# Optimized Serial (CPU) The code for this algorithm was obtained from the article Choosing the Best C# Array Type for Matrix Multiplication16 By Heaton Research In the article the author writes several variations of the order of the for loop indexes and measures the timing of each For this whitepaper we are using the variation that was found to perform the best with the NET Framework 45 float[][] MatrixMultiplySerial(float[][] A float[][] B int N) { float[][] C = CreateRaggedMatrix(N); // according to http://wwwheatonresearchcom/content/choosing bestc arraytypematrixmultiplication // this ikj index order performs the best for C# matrix multiplication for (int i = 0; i < N; i++) { float[] arowi = A[i]; float[] crowi = C[i]; for (int k = 0; k < N; k++) { float[] browk = B[k]; float aik = arowi[k]; for (int j = 0; j < N; j++) ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 33 of 42 { crowi[j] += aik * browk[j]; } } } return C; } C# Parallel with TPL (CPU) The following code simply replaces the standard outer loop in the previous algorithm with a ParallelFor loop from the NET Framework Task Parallel Library (TPL) For more information see Matrix Multiplication in Parallel with C# and the TPL17 by James D McCaffrey float[][] MatrixMulti plyTPL(float[][] A float[][] B int N) { float[][] C = CreateRaggedMatrix(N); ParallelFor(0 N i => { float[] arowi = A[i]; float[] crowi = C[i]; for (int k = 0; k < N; k++) { float[] browk = B[k] ; float aik = arowi[k]; for (int j = 0; j < N; j++) { crowi[j] += aik * browk[j]; } } }); return C; } C++ Basic Serial (CPU) If you decide to build your own program you must follow the steps in the blog post How to use C++ AMP from C#18 on the Parallel Programming with NET ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 34 of 42 blog on MSDN If you only want to download and run the sample code provided with this whitepaper there is no need to follow that procedure because those steps have already been included in the Visual Studio solution One difference between our solution and the information in the blog post is that our solution uses all 64bit code When combining C# and C++ you need to be careful to use the same platform in each language The platform is usually set to Any CPU in C# but it must be 
changed to x64 in the Visual Studio Configuration Manager as shown in Figure 18 Figure 18: Setting the Platform to x64 in the Visual Studio Configuration Manager See the blog post Debugging VS2013 websites using 64bit IIS Express19 for additional helpful information Before you can invoke C++ functions from C# you need to declare them for P/Invoke on the C# side The following code shows the CPPWrapper class in the Controller folder in the Visual Studio solution As required these methods are declared with the unsafe keyword in C# Rather than create a public entry point for each C++ algorithm it was deemed a bit cleaner to create a single function to call each one based on the algorithm index passed in This simplifies the exception handling which had to be written in C++ I would have liked to write a single exception handler in C# for all the calls to the different algorithms ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 35 of 42 including C++ but it was necessary to write an error handler in C++ for the error codes that can be returned by C++ AMP public class CPPWrapper { [DllImport("CPPMatrixMultiplydll" CallingConvention = CallingConventionStdCall CharSet = CharSetUnicode)] public extern unsafe static bool CallCPPMatrixMultiply(int algorithm float* A float* B float* C int N StringBuilder error int errsize); [DllImport("CPPMatrixMultiplydll" CallingConvention = CallingConventionStdCall CharSet = CharSetUnicode)] public extern unsafe static void WarmUpAMP(StringBuilder buffer int bufsize); } Here is the C++ dispatcher function which is exported for C#: extern "C" __declspec (dllexport) bool _stdcall CallCPPMatrixMultiply(int algorithm flo at A[] float B[] float C[] int N wchar_t* error size_t errsize) { try { switch (algorithm) { case Algorithms::CPP_Basic: MatrixMultiplyBasic(A B C N); break; case Algorithms::CPP_PPL: MatrixMultiplyPPL(A B C N); br eak; case Algorithms::CPP_AMP: MatrixMultiplyAMP(A B C N); break; case Algorithms::CPP_AMPTiling: MatrixMultiplyTiling(A B C N); break; default: wcscpy_s(error errsize L"Invalid C++ algorithm index"); return false; } } catch (concurrency::runtime_exception& ex) ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 36 of 42 { std::wstring result = stows(exwhat()); wcscpy_s(error errsize resultc_str()); return false; } return true; } Now that you’ve taken care of those preliminaries you’re ready to implement the C++ function for basic matrix multiplication It looks very similar to the basic algorithm in C# except that it doesn’t use ragged arrays and it introduces a temporary sum variable to reduce array references to the result array in the inner loop void MatrixMultiplyBasic(float A[] float B[] float C[] int N) { for (int i = 0; i < N; i++) { for (int j = 0; j < N; j++) { float sum = 00; for (int k = 0; k < N; k++) { sum += A[i*N + k] * B[k*N + j]; } C[i*N + j] = sum; } } } C++ Parallel with PPL (CPU) The next optimization is to rewrite the serial C++ function as a parallel function This code will still be running on the CPU but it will give us an interesting comparison with the parallel code we’ll write later to run on the GPU In the past writing parallel code in Windows with the Win32 thread APIs was complicated There are still many difficulties in multithreaded programming but ArchivedAmazon Web Services – Optimizing ASPNET with C++ AMP on the GPU April 2015 Page 37 of 42 now the Microsoft Parallel Patterns Library (PPL) makes it much easier For more information about 
PPL, see the following:

This article in the MSDN Library explains a parallel matrix multiplication algorithm written in C++ using PPL: How to: Write a parallel_for Loop20
This article describes several optimization techniques for writing parallel for loops in C++: C++11: Multi-core Programming – PPL Parallel Aggregation Explained21

Here's the non-optimized parallel C++ function:

void MatrixMultiplyPPL(float A[], float B[], float C[], int N)
{
    parallel_for(0, N, [&](int i)
    {
        for (int j = 0; j < N; j++)
        {
            float sum = 0.0;
            for (int k = 0; k < N; k++)
            {
                sum += A[i*N + k] * B[k*N + j];
            }
            C[i*N + j] = sum;
        }
    });
}

C++ Parallel with AMP (GPU)
Now you're ready to write AMP code. To get started, you may want to review the blog post How to measure the performance of C++ AMP algorithms22 on the Parallel Programming in Native Code blog on MSDN. As that author points out, there is overhead when AMP initializes itself on first use. It enumerates the GPU devices in the system and picks the default one. The idea of warming up AMP before timing it may or may not apply to your use case, but the code provided with this whitepaper does implement such a function. The following function returns the name of the GPU device so it can be displayed in the ASP.NET MVC web page:

// Return the name of the default GPU device (or the emulator if no GPU exists)
// AMP will enumerate devices to initialize itself outside of the timing code
extern "C" __declspec(dllexport) void _stdcall WarmUpAMP(wchar_t* buffer, size_t bufsize)
{
    accelerator default_device;
    wcscpy_s(buffer, bufsize, default_device.get_description().c_str());
}

String types in C# and C++ are not directly compatible, but there are various ways to pass strings between them (this is called marshaling). In all cases, it's important to pay attention to where the string memory is allocated and how it will be freed. The P/Invoke declaration in C# must be carefully written to match the string-passing technique you decide to use in C++. The technique used in the previous code is to allocate a StringBuilder object with a fixed capacity in C# before passing it into C++. That way, the C# side is responsible for freeing the memory when the object goes out of scope, which only happens after the C++ function is done writing to the memory. The C++ code just copies the name of the GPU device into the buffer passed in from C#.

The next task is to adapt the parallel C++ matrix multiplication algorithm to use AMP. The following AMP code is based on the Matrix Multiplication Sample23 on the Parallel Programming in Native Code blog on MSDN:

void MatrixMultiplyAMP(float A[], float B[], float C[], int N)
{
    extent<2> e_a(N, N), e_b(N, N), e_c(N, N);
    array_view<float, 2> a(e_a, A);
    array_view<float, 2> b(e_b, B);
    array_view<float, 2> c(e_c, C);
    c.discard_data();   // avoid copying memory to GPU

    parallel_for_each(c.extent, [=](index<2> idx) restrict(amp)
    {
        int row = idx[0];
        int col = idx[1];
        float sum = 0;
        for (int inner = 0; inner < N; inner++)
        {
            index<2> idx_a(idx[0], inner);
            index<2> idx_b(inner, idx[1]);
            sum += a[idx_a] * b[idx_b];
        }
        c[idx] = sum;
    });

    c.synchronize();
}
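Because of the first-use initialization cost mentioned above, any timing code should warm up the accelerator before measuring. The sketch below is illustrative rather than part of the downloadable sample; it calls the MatrixMultiplyAMP function shown above, and the matrix size and run count are arbitrary. It uses std::chrono and relies on the synchronize() call inside the function so that the measured time includes copying the result back from the GPU.

#include <amp.h>
#include <chrono>
#include <iostream>
#include <vector>
using namespace concurrency;

// Defined earlier in this whitepaper's sample code
void MatrixMultiplyAMP(float A[], float B[], float C[], int N);

int main()
{
    const int N = 1024;                                  // arbitrary size for illustration
    std::vector<float> A(N * N, 1.0f), B(N * N, 2.0f), C(N * N);

    // Warm up: forces AMP to enumerate accelerators and create the default one
    accelerator default_device;
    std::wcout << L"Using: " << default_device.get_description() << std::endl;

    // The first call also pays one-time kernel setup costs, so time later calls
    MatrixMultiplyAMP(A.data(), B.data(), C.data(), N);

    const int runs = 3;
    auto start = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < runs; i++)
        MatrixMultiplyAMP(A.data(), B.data(), C.data(), N);   // synchronize() happens inside
    auto stop = std::chrono::high_resolution_clock::now();

    double seconds = std::chrono::duration<double>(stop - start).count() / runs;
    std::cout << "Average seconds per multiply: " << seconds << std::endl;
    return 0;
}

The C# Controller shown earlier follows the same pattern: it calls WarmUpAMP once before starting its own timers and averages several runs of each algorithm.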
C++ Parallel with AMP Tiling (GPU)
Finally, let's take another step with the AMP code to use a technique called tiling. In a nutshell, tiling is a method of optimizing the way the algorithm uses memory in the GPU. When you call C++ AMP from C#, there are four levels of memory you should be aware of:

Managed memory. This lives in RAM associated with the CPU and the .NET Framework CLR managed process, and is controlled by the .NET Framework garbage collector. Data passed between C# and C++ must be "marshaled" between managed and unmanaged memory according to very particular rules, such as padding.
Unmanaged memory. This also lives in RAM associated with the CPU, but this memory space requires Win32 memory APIs and does not include a garbage collector.
Global memory on the GPU. Programming in AMP requires that data be moved, with thread synchronization, between unmanaged memory and the GPU.
Registers associated with each thread on the GPU. Accessing data in these registers can be 1,000 times faster than GPU global memory, so the idea is to move frequently accessed data into the registers. But the registers aren't large enough to hold an entire matrix, so algorithms must be written to process one "tile" at a time, then move another tile into the registers, and so on.

A full explanation of tiling is beyond the scope of this whitepaper, but this article by Daniel Moth covers it well.24 Here is the C++ AMP code with a tiling algorithm:

const int TILESIZE = 8;   // array size passed in must be a multiple of TILESIZE

void MatrixMultiplyTiling(float A[], float B[], float C[], int N)
{
    assert((N % TILESIZE) == 0);
    array_view<const float, 2> a(N, N, A);
    array_view<const float, 2> b(N, N, B);
    array_view<float, 2> c(N, N, C);
    c.discard_data();

    parallel_for_each(c.extent.tile<TILESIZE, TILESIZE>(),
        [=](tiled_index<TILESIZE, TILESIZE> t_idx) restrict(amp)
    {
        int row = t_idx.local[0];
        int col = t_idx.local[1];
        tile_static float locA[TILESIZE][TILESIZE];
        tile_static float locB[TILESIZE][TILESIZE];
        float sum = 0;
        for (int i = 0; i < a.extent[1]; i += TILESIZE)
        {
            locA[row][col] = a(t_idx.global[0], col + i);
            locB[row][col] = b(row + i, t_idx.global[1]);
            t_idx.barrier.wait();

            for (int k = 0; k < TILESIZE; k++)
                sum += locA[row][k] * locB[k][col];

            t_idx.barrier.wait();
        }
        c[t_idx.global] = sum;
    });

    c.synchronize();
}

Conclusion
This whitepaper demonstrated how to set up the G2 instance type in Amazon EC2 with Windows Server. The NVIDIA GPU on those instances provides 1,536 cores that developers can use for compute-intensive application functions. But programming the GPU requires the C or C++ language, whereas most Windows developers are using C#. This article showed how to pass data between C# and C++ and how to use the C++ AMP library to make GPU programming accessible and highly productive for C# web developers on Windows. The tiled matrix multiplication algorithm written in C++ AMP was hundreds of times faster than the basic algorithm written in C#.

Further Reading
AWS Toolkit for Visual Studio25
AWS for Windows and .NET Developer Center26
Getting Started with Amazon EC2 Windows Instances27
Elastic Beanstalk Documentation28
C++ AMP documentation29
ASP.NET MVC documentation30

Notes
1 http://d0.awsstatic.com/whitepapers/CSharpMatrixMultiply.zip
2 https://bitbucket.org/multicoreware/cppamp-driver-ng/overview
3 https://ampalgorithms.codeplex.com/documentation
4 http://blogs.msdn.com/b/nativeconcurrency/archive/2012/01/30/c-amp-sample-projects-for-download.aspx
5 http://aws.amazon.com/ec2/instance-types/
6 http://www.nvidia.com/object/cuda_home_new.html
7 http://aws.amazon.com/visualstudio/
8 http://aws.amazon.com/free/
9 http://www.microsoft.com/en-us/download/details.aspx?id=40784
10 http://www.realvnc.com/
11 http://www.realvnc.com/
12 http://www.nvidia.com/download/driverResults.aspx/74642/en-us
13 http://d0.awsstatic.com/whitepapers/CSharpMatrixMultiply.zip
14 http://blogs.aws.amazon.com/net/post/Tx1RLX98N5ERPSA/Customizing-Windows-Elastic-Beanstalk-Environments-Part-1
15 http://blogs.aws.amazon.com/net/post/Tx2EMAYCXUW3HAK/Customizing-Windows-Elastic-Beanstalk-Environments-Part-2
16 http://www.heatonresearch.com/content/choosing-best-c-array-type-matrix-multiplication
17 http://jamesmccaffrey.wordpress.com/2012/04/22/matrix-multiplication-in-parallel-with-c-and-the-tpl/
18 http://blogs.msdn.com/b/pfxteam/archive/2011/09/21/10214538.aspx
19 http://blogs.msdn.com/b/rob/archive/2013/11/14/debugging-vs2013-websites-using-64-bit-iis-express.aspx
20 http://msdn.microsoft.com/en-us/library/dd728073.aspx
21 https://katyscode.wordpress.com/2013/08/17/c11-multi-core-programming-ppl-parallel-aggregation-explained/
22 http://blogs.msdn.com/b/nativeconcurrency/archive/2011/12/28/how-to-measure-the-performance-of-c-amp-algorithms.aspx
23 http://blogs.msdn.com/b/nativeconcurrency/archive/2011/11/02/matrix-multiplication-sample.aspx
24 http://msdn.microsoft.com/en-us/magazine/hh882447.aspx
25 http://aws.amazon.com/visualstudio/
26 http://aws.amazon.com/net/
27 http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2Win_GetStarted.html
28 http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html
29 http://msdn.microsoft.com/en-us/library/hh265137.aspx
30 http://www.asp.net/mvc
|
General
|
consultant
|
Best Practices
|
Optimizing_Electronic_Design_Automation_EDA_Workflows_on_AWS
|
ArchivedOptimizing Electronic Design Automation (EDA) Workflows on AWS September 2018 This version has been archived For the most recent version of this paper see https://docsawsamazoncom/whitepapers/latest/semiconductordesign onaws/semiconductordesignonawshtmlArchived © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its a ffiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Abstract vi Introduction 1 EDA Overview 1 Benefits of the AWS Cloud 2 Improved Productivity 2 High Availability and Durability 3 Matching Compute Resources to Requirements 3 Accelerated Upgrade Cycle 4 Paths for Migrating EDA Workflows to AWS 5 Data Access and Transfer 5 Consider what Data to Move to Amazon S3 5 Dependencies 6 Suggested EDA Tools for Initial Proof of Concept (POC) 7 Cloud Optimized Traditional Architecture 7 Buildi ng an EDA Architecture on AWS 8 Hypervisors: Nitro and Xen 9 AMI and Operating System 9 Comp ute 11 Network 15 Storage 15 Licensing 23 Remote Desktops 25 User Authent ication 27 Orchestration 27 Optimizing EDA Tools on AWS 29 Amazon EC2 Instance Types 29 Archived Operating System Optimization 30 Networking 36 Storage 36 Kernel Virtual Memory 37 Security and Governance in the AWS Cloud 37 Isolated Environments for Data Protection and Sovereig nty 38 User Authentication 38 Network 38 Data Storage and Transfer 40 Governance and Monitoring 42 Contributors 44 Document Revisio ns 44 Appendix A – Optimizing Storage 45 NFS Storage 45 Appendix B – Reference Architecture 47 Appendix C – Updating the Linux Kernel Command Line 49 Update a system with /etc/default/grub file 49 Update a system with /boot /grub/grubconf file 50 Verify Kernel Line 50 Archived Abstract Semiconductor and electronics companies using e lectronic design automation (EDA ) can significantly accelerate the ir product development lifecycle and time to market by taking advantage of the near infinite compute storage and resources available on AWS This white paper present s an overview of the EDA workflow recommendations for moving EDA tools to AWS and the specific AWS architectural components to optimize EDA work loads on AWS ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 1 Introduction The workflows applications and methods used for the design and verification of semiconductors integrated circuits (ICs) and printed circuit boards (PCBs) have been largely unchanged since the invention of computer aided engineering (CAE) and electronic design automation (EDA) software However as electr onics systems and integrated circuits have become more complex with smaller geometries the comput ing power and infr astructure requirements to design test validate and build these systems have grown significantly CAE EDA and emerging workloads such as computational 
lithography and metrology have driven the need for massive scale computing and data management in next generation electronic products In the semiconductor and electronics sector a large portion of the overall design time is spent verif ying components for example in the characterization of intellectual property (IP) cores and for full chip functional and timing verifications EDA support organizations —the specialized IT teams that provid e infrastru cture support for semiconductor companies —must invest in increasingly large server farms and high performance storage systems to enable high er quality and fast er turnaround of semiconductor test and validat ion The introduction of new and upgraded IC fabri cation technologies may require large amounts of compute and storage for relatively short times to enable rapid completion of hardware regression testing or to recharacterize design IP Semiconductor companies today use Amazon Web Services ( AWS ) to take advantage of a more rapid flexible deployment of CAE and EDA infrastructure from the complete IC design workflow from register transfer level (RTL) design to the delivery of GDSII files to a foundry for chip fabrication AWS compute storage and higher level services are available on a dynamic asneeded basis with out the significant up front capital expenditure that is typically required for performance critical EDA workloads EDA Overview EDA workloads comprise workflow s and a supporting set of software tools that enable the efficient design of microelectronics and in particular semiconductor integrated circuits (ICs) Semiconductor design and verification relies on a set of commercial or open source tools collectively referred to as EDA softw are which expedites and reduces time to silicon tape out and fabrication EDA is a highly iterative engineering process that can take from months and in some cases years to produce a single integrated circuit ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 2 The increasing complexity of integrated circuits has resulted in a n increased use of preconfigured or semi customized hardware components collectively known as intellectual property (IP) cores These cores (provided by IP developers as generic gate level netlists ) are either designed inhouse by a semiconductor company or purchased from a third party IP vender IP cores themselves requires EDA workflows for design and verification and to characteriz e performance for specific IC fabrication technologies The se IP cores are used in co mbination with ICspecific custo mdesigned components to create a complete IC that often includes a complex system onchip (SoC) making use of one of more embedded CPUs standard peripherals I/O and custom analog and/or digital components The complet e IC itself with all its IP cores and custom components then requires large amounts of EDA processing for full chip verification —including modeling (that is simulat ing) all of the components within the chip This modeling which includes HDL source level validation physical synthesis and initial verification (for example using techniques such as formal verification) is known as the front end design The physical implementation which includes floor planning place and route timing analysis design rulecheck (DRC) and final verification is known as the back end design When the back end design is complete a file is produced in GDSII format The production of this file is known for historical reasons as tapeout Wh en completed the file is sent to a fabrication 
facility (a foundry ) which may or may not be operated by the semiconductor company where a silicon wafer is man ufactured This wafer containing perhaps thousands of individual ICs is then inspected cut into dies that are themselves tested packaged into chips that are tested again and assembled onto a board or other system through highly automated manufacturing processes All of these steps in the semiconductor and electronics supply chain can benefit from the scalability of cloud Benefits of the AWS Cloud Before discussing the specific s of moving EDA workloads to AWS it is worth noting the benefits of cloud computing on the AWS Cloud Improved Productivity Organizations that move to the cloud can see a dramatic improvement in development productivity and time to market Your organization can achieve this by scaling out your compute needs to meet the demands of the job s waiting to be processed AWS uses per ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 3 second billing for our compute resources allowing you to optimize cost by only paying for w hat you use down to the second By scaling horizontally you can run more compute servers (that is Amazon Elastic Compute Cloud [Amazon EC2 ] instances) for a shorter period of time and pay the same amount as if you were running fewer servers for a longer period of time For example because the number of compute hours consumed are the same you could complete a 48 hour design regression in just two hours by dynamically growing your cluster by 24X or more in order to run many thousands of pending jobs in parallel These extreme levels of parallelism are commonplace on AWS across a wide variety of industries and performance critical use cases High Availability and Durability Amazon EC2 is hosted in multiple locations worldwide These locations comprise regions and Availability Zones (AZs) Each AWS R egion is a separate geographic area around the wo rld such as Oregon Virginia Ireland and Singapore Each AWS Region where Amazon EC2 runs is designed to be completely isolated from the other regions This design achieves the greatest possible fault tolerance and stability Resources are not replicate d across regions unless you specifically configure your services to do so Within e ach geographic region AWS has multiple isolated locations known as Availability Zones Amazon EC2 provides you the ability to place resources such as EC2 instances and d ata in multiple locations using these Availability Zones Each Availability Zone is isolated but the Availability Zones in a region are connected through low latency links By taking advantage of both multiple regions and multiple Availability Zones you can protect against failures and ensure you have enough capacity to run even your most compute intensive workflows Additionally this large global footprint enables you to position computing resources near your IC design engineers in situations where low latency performance is important For more information refer to AWS Global Infrastructure Matching Compute Resources to Requirements AWS offers many different configurations of hardware called instance families in order to enable customers to match their compute needs with those of their jobs Because of this and the on demand nature of the clo ud you can get the exact systems you need for the exact job you need to perform for only the time you need it ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 4 Amazon EC2 instances come in many different sizes and configurations These configurations 
are built to support jobs that require both large and small memory footprints high core counts of the latest generation processors and storage requirements from high IOPS to high throughput By right sizing the instance to the unit of work it is best suited for you can achieve high er EDA performance at lo wer overall cost You no longer need to purchase EDA cluster hardware that is entirely configured to meet the demands of just a few of your most demanding jobs Instead you can choose servers launch entire clusters of servers and scale these clusters up and down uniquely optimiz ing each cluster for specific applications and for specific stages of chip development For example consider a situation where you ’re performing gate level simulations for a period of jus t a few weeks such as during the development of a critical IP core In this example y ou might need to have a cluster of 100 machines (representing over 2 000 CPU cores) with a specific memory tocore ratio and a specific storage configuration With AWS you can deploy and run th is cluster dedicated only for this task for only as long as the simulations require and then terminate the cluster when that stage of your project is complete Now consider another situation in which you have multiple semicondu ctor design teams working in different geographic regions each using their own locally installed EDA IT infrastructure This geographic diversity of engineering teams has productivity benefits for modern chip design but it can create challenges in managi ng large scale EDA infrastructure (for example to efficiently utilize globally licensed EDA software ) By using AWS to augment or replace these geographically separated IT resources you can pool all of your global EDA licenses in a smaller number of locations using scalable on demand clusters on AWS As a result you can more rapidly complete critical batch workloads such as static timing analysis DRC and physical verification Accelerated Upgrade Cycle Another important reason to move EDA workloads to the cloud is to gain access to the latest processor storage and network technologies In a typical on premise s EDA installation you must select configure procure and deploy servers and storage d evices with the assumption that they remain in service for multiple years Depending on the selected processor generation and time ofpurchase this means that performance critical production EDA workloads might be running on hardware devices that are already multiple years and multiple processor generations out of date When using AWS you have the opportunity to select and deploy the latest processor generations ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 5 within minutes and configure your EDA clusters to meet the unique needs of each application in your EDA workflow Paths for Migrating EDA Workflows to AWS When you begin the migration of EDA workflows to AWS you will find there are many parallels with managing traditional EDA deployments across multiple data centers Larger organizations in the semiconductor industry typically have multiple data centers that are geographically segregated because of the distributed nature of their design teams These organizations typically choose specific workloads to run in specific locations or replicate and synchronize data to allow for multiple sites to take the load of large scale global EDA workflows If your organization uses this approach you need to consider that the specifics around topics such as data replication caching and license server 
Most of the same approaches and design decisions related to multiple data centers also apply to the cloud. With AWS, you can build one or more virtual data centers that mirror existing EDA data center designs. The foundational technologies that enable things like compute resources, storage servers, and user workstations are available with just a few keystrokes. However, the real power of using the AWS Cloud for EDA workloads comes from the dynamic capabilities and enormous scale provided by AWS.

Data Access and Transfer

When you first consider running workloads in the cloud, you might envision a bursting scenario where cloud resources are set up as an augmentation to your existing on-premises compute cluster. Although you can use this model successfully, data movement presents a significant challenge when building an architecture to support bursting in a seamless way. Your organization might see the most benefit if you consider bursting on a project-by-project basis and choose to run entire workflows on AWS, thereby freeing up existing on-premises resources to handle other tasks. By approaching cloud resources this way, you can use much simpler data transfer mechanisms, because you are not trying to sync data between AWS and your data centers.

Consider What Data to Move to Amazon S3

Prior to moving your EDA tools to AWS, consider the processes and methods that will be in place as you move from initial experiments to full production. For example, consider what data will be needed for an initial performance test or for a first workflow proof of concept (POC). Data has gravity, and moving only the limited amount of data needed to run your EDA tools to an Amazon Simple Storage Service (Amazon S3) bucket allows for flexibility and agility when building and iterating your architecture on AWS. There are several benefits to storing data in Amazon S3; for an EDA POC, using Amazon S3 allows you to iterate quickly, because the S3 transfer speed to an EC2 instance is up to 25 Gbps. With your data stored in an S3 bucket, you can more quickly experiment with different EC2 instance types, and also experiment with different working storage options, such as creating and tuning temporary shared file systems.

Deciding what data to transfer depends on the tools or designs you are planning to use for the POC. We encourage customers to start with a relatively small amount of POC data; for example, only the data required to run a single simulation job. Doing so allows you to quickly gain experience with AWS and build an understanding of how to build a production-ready architecture on AWS while in the process of running an initial EDA POC workload.
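Staging this initial POC data set in Amazon S3 can be done entirely from the AWS CLI. The following is a minimal sketch; the bucket name, local paths, and region are hypothetical placeholders that you would replace with your own values:

$ # Create a bucket to hold the POC input data (bucket names must be globally unique)
$ aws s3 mb s3://example-eda-poc-data --region us-west-2

$ # Upload only the files required for a single simulation job
$ aws s3 sync ./sim_job_001 s3://example-eda-poc-data/sim_job_001

$ # Later, from an EC2 instance, pull the same data down to local or EBS-backed storage
$ aws s3 sync s3://example-eda-poc-data/sim_job_001 /scratch/sim_job_001

Because sync copies only new or changed files, you can rerun it as the POC data set evolves.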
Dependencies

Semiconductor design environments often have complex dependencies that can hinder the process of moving workflows to AWS. We can work with you to build an initial proof of concept or even a complex architecture; however, it is the designer's or tool engineer's responsibility to unwind any legacy on-premises data dependencies. The initial POC process requires effort to determine which dependencies, such as shared libraries, need to be moved along with project data. There are tools available that help you build a list of dependencies, and some of these tools yield a file manifest that expedites the process of moving data to AWS. For example, one such tool is Ellexus Container Checker, which can be found on the AWS Marketplace. Dependencies can include authentication methods (for example, NIS), shared file systems, cross-organization collaboration, and globally distributed designs. (Identifying and managing such dependencies is not unique to cloud migration; semiconductor design teams face similar challenges in any distributed EDA environment.) Another approach may be to launch a net-new semiconductor project on AWS, which should significantly reduce the number of legacy dependencies.

Suggested EDA Tools for Initial Proof of Concept (POC)

An HDL compile and simulation workflow may be the fastest approach to launching an EDA POC on AWS or creating a production EDA environment. HDL files are typically not large, and the ability to use an on-premises license server (via VPN) reduces the additional effort of moving your licensing environment to AWS. HDL compile and simulation workflows are also representative of other EDA workloads, including their need for shared file systems and some form of job scheduling.

Cloud Optimized Traditional Architecture

On AWS, compute and storage resources are available on demand, allowing you to launch what you need, when you need it. This enables a different approach to architecting your semiconductor design environment. Rather than having one large cluster where multiple projects are running, you can use AWS to launch multiple clusters. Because you can configure compute resources to increase or decrease on demand, you can build clusters that are specific to different parts of the workflow, or even to specific projects. This allows for many benefits, including project-based cost allocation, right-sized compute and storage, and environment isolation.

Figure 1: Workload-specific EDA clusters on AWS

As seen in Figure 1, moving to AWS allows you to launch a separate set of resources (for example, a cluster) for each of your EDA workloads. This multi-cluster approach can also be used for global and cross-organizational collaboration. For example, it can be used to dedicate and manage specific cloud resources for specific projects, encouraging organizations to use only the resources required for their project.

Job Scheduler Integration

The EDA workflow that you build on AWS can be a similar environment to the one you have in your on-premises data center. Many, if not all, of the same EDA tools and applications running in your data center, as well as the orchestration software, can also be run on AWS. Job schedulers such as IBM Platform LSF, Altair PBS Pro, and Univa Grid Engine (or their open source alternatives) are typically used in the EDA industry to manage compute resources, optimize license usage, and coordinate and prioritize jobs. When you migrate to AWS, you may choose to use these existing schedulers essentially unchanged, to minimize the impact on your end-user workflows and processes. Most of these job schedulers already have some form of native integration with AWS, allowing you to use the master node to automatically launch cloud resources when there are jobs pending in the queue. Refer to the documentation of your specific job management tool for the steps to automate resource allocation and management on AWS.

Building an EDA Architecture on AWS

Building out your production-ready EDA workflow on AWS requires an end-to-end examination of your current environment. This examination begins with the operating system you are using to run your EDA tools, as well as your job scheduling and user management environments.
AWS allows for a mix of architectures when moving semiconductor design workloads, and you can leverage some combination of the following two approaches:

• Build an architecture similar to a traditional cluster, using traditional job scheduling software, but ensuring that a cloud-native approach is used.
• Use more cloud-native methods, such as AWS Batch, which uses containerization of your applications.

Where needed, we will make the distinction when using AWS Batch can be advantageous, for example when running massively parallel parameter sweeps.

Hypervisors: Nitro and Xen

Amazon EC2 instances use a hypervisor to divide resources on the server so that each customer has separate CPU, memory, and storage resources for just that customer's instance. We do not use the hypervisor to share resources between instances, except for the T* family. On previous-generation instance types, for example the C4 and R4 families, EC2 instances are virtualized using the Xen hypervisor. On current-generation instances, for example C5, R5, and Z1d, we are using a specialized piece of hardware and a highly customized hypervisor based on KVM. This new hypervisor system is called Nitro. At the time of this writing, these are the Nitro-based instances: Z1d, C5, C5d, M5, M5d, R5, and R5d. Launching Nitro-based instances requires that specific drivers for networking and storage be installed and enabled before the instance can be launched. We provide the details for this configuration in the next section.

AMI and Operating System

AWS has built-in support for numerous operating systems (OSs). For EDA users, CentOS, Red Hat Enterprise Linux, and Amazon Linux 2 are used more than other operating systems. The operating system, and the customizations that you have made in your on-premises environment, are the baseline for building out your EDA architecture on AWS.

Before you can launch an EC2 instance, you must decide which Amazon Machine Image (AMI) to use. An AMI contains the OS, any required OS and driver customizations, and may also include the application software. For EDA, one approach is to launch an instance from an existing AMI, customize the instance after launch, and then save this updated configuration as a custom AMI. Instances launched from this new custom AMI include the customizations that you made when you created the AMI.

Figure 2: Use an Amazon-provided AMI to build a customized AMI

Figure 2 illustrates the process of launching an instance with an AMI. You can select the AMI from the AWS Console or from the AWS Marketplace, and then customize that instance with your EDA tools and environment. After that, you can use the customized instance to create a new customized AMI that you can then use to launch your entire EDA environment on AWS. Note also that the customized AMI that you create using this process can be further customized. For example, you can customize the AMI to add additional application software, load additional libraries, or apply patches each time the customized AMI is launched onto an EC2 instance.

As of this writing, we recommend these OS levels for EDA tools (more detail on OS versions is provided in the following sections):

• Amazon Linux and Amazon Linux 2 (verify certification with EDA tool vendors)
• CentOS 7.4 or 7.5
• Red Hat Enterprise Linux 7.4 or 7.5

These OS levels have the necessary drivers already included to support the current instance types, which include Nitro-based instances.
If you are not using one of these levels, you must perform extra steps to take advantage of the features of our current instances. Specifically, you must build and enable enhanced networking, which relies on the elastic network adapter (ENA) drivers. See Network and Optimizing EDA Tools on AWS for more detailed information on ENA drivers and AMI drivers.

If you use an instance with Nitro (Z1d, C5, C5d, M5, M5d, R5, R5d), you must use an AMI that has the AWS ENA driver built and enabled and the NVMe drivers installed. At this time, a Nitro-based instance does not launch unless you have these drivers. These OS levels include the required drivers:

• CentOS 7.4 or later
• Red Hat Enterprise Linux 7.4 or later
• Amazon Linux or Amazon Linux 2 (current versions)

To verify that you can launch your AMI on a Nitro-based instance, first launch the AMI on a Xen-based instance type, and then run the c5_m5_checks_script.sh script found in the awslabs GitHub repository at awslabs/aws-support-tools/EC2/C5M5InstanceChecks/. The script analyzes your AMI and determines whether it can run on a Nitro-based instance. If it cannot, the script displays recommended changes.
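A minimal sketch of running this check follows; it assumes that git and bash are available on the instance and that the repository is still available at this awslabs path with the same layout:

$ git clone https://github.com/awslabs/aws-support-tools.git
$ cd aws-support-tools/EC2/C5M5InstanceChecks
$ # Run the check as root and review the output for any flagged driver or fstab issues
$ sudo bash c5_m5_checks_script.sh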
You can also import your own on-premises image to use for your AMI. This process includes extra steps, but may result in time savings. Before importing an on-premises OS image, you first need a VM image for your OS. AWS supports certain VM formats (for example, Linux VMs that use VMware ESX) that must be uploaded to an S3 bucket and subsequently converted into an AMI. Detailed information and instructions can be found at https://aws.amazon.com/ec2/vm-import/. The same operating system requirements mentioned above also apply to imported images (that is, you should use CentOS/RHEL 7.4 or 7.5, Amazon Linux, or Amazon Linux 2).

Compute

Although AWS has many different types and sizes of instances, the instance types in the compute optimized and memory optimized categories are typically best suited for EDA workloads. When running EDA software on AWS, you should choose instances that feature the latest generations of Intel Xeon processors, using a few different configurations to meet the needs of each application in your overall workflow.

The compute optimized instance family features instances that have the highest clock frequencies available on AWS, and typically enough memory to run some memory-intensive workloads. Typical EDA use cases for compute optimized instance types:

• Simulations
• Synthesis
• Formal verification
• Regression tests

Z1d for EDA Tools

AWS has recently announced a powerful new instance type that is well optimized for EDA applications. The faster clock speed on the Z1d instance, with up to 4 GHz sustained Turbo performance, allows for EDA license optimization while achieving faster time to results. The Z1d uses an AWS-specific Intel Xeon Platinum 8000 series (Skylake) processor and is the fastest AWS instance type. The following list summarizes the features of the Z1d instance:

• Sustained all-core frequency of up to 4.0 GHz
• Six different instance sizes, with up to 24 cores (48 threads) per instance
• Total memory of 384 GiB
• Memory-to-core ratio of 16 GiB of RAM per core
• Local Instance Store NVMe storage (as much as 1.8 TiB)
• Optimized for EDA and other high performance workloads

Additional Compute Optimized Instances: C5, C5d, C4

In addition to the Z1d, the C5 instance features up to 36 cores (72 threads) and up to 144 GiB of RAM. The processor used in the C5 is the same as the Z1d, the Intel Xeon Platinum 8000 series (Skylake), with a base clock speed of 3.0 GHz and the ability to turbo boost up to 3.5 GHz. The C5d instance is the same configuration as the C5, but offers as much as 1.8 TiB of local NVMe SSD storage.

Previous-generation C4 instances are also commonly used by EDA customers and remain a suitable option for certain workloads, such as those that are not memory intensive.

Memory Optimized Instances: Z1d, R5, R5d, R4, X1, X1e

The Z1d instance is not only compute optimized but memory optimized as well, including 384 GiB of total memory. The Z1d has the highest clock frequency of any instance and, with the exception of our X1 and X1e instances, is equal to the most memory per core (16 GiB/core). If you require larger amounts of memory than what is available on the Z1d, consider another memory optimized instance such as the R5, R5d, R4, X1, or X1e. Typical EDA use cases for memory optimized instance types:

• Place and route
• Static timing analysis
• Physical verification
• Batch-mode RTL simulation (multithread-optimized tools)

The R5 and R5d have the same processor as the Z1d and C5, the Intel Xeon Platinum 8000 series (Skylake). With the largest R5 and R5d instance types having up to 768 GiB of memory, EDA workloads that could previously only run on the X1 or X1e can now run on the R5 and R5d instances. These recently released instances are serving as a drop-in replacement for the R4 instance for both place and route and batch-mode RTL simulation. The r4.16xlarge instance is a viable option, with a high core count (32) and a 15.25 GiB/core ratio. For this reason, we see a large number of customers using the r4.16xlarge instance type.

The X1 and X1e instance types can also be used for memory-intensive workloads; however, testing of EDA tools by Amazon internal silicon teams has indicated that most EDA tools will run well on the Z1d, R4, R5, or R5d instances. The need for the amount of memory provided on the X1 (1,952 GiB) and X1e (3,904 GiB) has been relatively infrequent for semiconductor design.

Hyper-Threading

Amazon EC2 instances support Intel Hyper-Threading Technology (HT Technology), which enables multiple threads to run concurrently on a single Intel Xeon CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. An instance has a default number of CPU cores, which varies according to instance type. Each vCPU is a hyperthread of an Intel Xeon CPU core, except for T2 instances.

You can specify the following CPU options to optimize your instance for semiconductor design workloads:

• Number of CPU cores: You can customize the number of CPU cores for the instance. This customization may optimize the licensing costs of your software with an instance that has sufficient amounts of RAM for memory-intensive workloads but fewer CPU cores.
• Threads per core: You can disable Intel Hyper-Threading Technology by specifying a single thread per CPU core. This scenario applies to certain workloads, such as high performance computing (HPC) workloads.

You can specify these CPU options during instance launch (currently supported only through the AWS Command Line Interface [AWS CLI], an AWS software development kit [SDK], or the Amazon EC2 API). There is no additional or reduced charge for specifying CPU options; you are charged the same amount as instances that are launched with default CPU options.
Refer to Optimizing CPU Options in the Amazon Elastic Compute Cloud User Guide for Linux Instances for more details and rules for specifying CPU options. Divide the vCPU number by 2 to find the number of physical cores on the instance. You can disable HT Technology if you determine that it has a negative impact on your application's performance. See Optimizing EDA Tools on AWS for details on disabling Hyper-Threading.

Table 1 lists the instance types that are typically used for EDA tools.

Table 1: Instance specifications suitable for EDA workloads

Instance Name | Max Core Count* | CPU Clock Frequency    | Max Total RAM (GiB) | Memory-to-Core Ratio (GiB/core) | Local NVMe
Z1d           | 24              | 4.0 GHz                | 384                 | 16                              | Yes
R5 / R5d      | 48              | Up to 3.1 GHz          | 768                 | 16                              | Yes, on R5d
R4            | 32              | 2.3 GHz                | 488                 | 15.25                           | No
M5 / M5d      | 48              | Up to 3.1 GHz          | 384                 | 8                               | Yes, on M5d
C5 / C5d      | 36              | Up to 3.5 GHz          | 144                 | 4                               | Yes, on C5d
X1            | 64              | 2.3 GHz                | 1,952               | 30.5                            | Yes
X1e           | 64              | 2.3 GHz                | 3,904               | 61                              | Yes
C4            | 18              | 2.9 GHz (boost to 3.5) | 60                  | 3.33                            | No

*NOTE: AWS uses vCPUs (each of which is an Intel Hyper-Thread) to denote processors; for this table, we are using cores.

Network

Amazon enhanced networking technology enables instances to communicate at up to 25 Gbps for current-generation instances, and up to 10 Gbps for previous-generation instances. In addition, enhanced networking reduces latency and network jitter. Enhanced networking is enabled by default on these operating system levels:

• Amazon Linux
• Amazon Linux 2
• CentOS 7.4 and 7.5
• Red Hat Enterprise Linux 7.4 and 7.5

If you have an older version of CentOS or RHEL, you can enable enhanced networking by installing the network module and updating the enhanced network adapter (ENA) support attribute for the instance. For more information about enhanced networking, including build and install instructions, refer to the Enhanced Networking on Linux page in the Amazon Elastic Compute Cloud User Guide for Linux Instances.
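The following is a minimal sketch of checking and enabling ENA support; the instance ID is a hypothetical placeholder, and the instance must be stopped before its ENA support attribute is changed:

$ # On the instance: confirm that the ENA module is present and bound to the interface
$ modinfo ena
$ ethtool -i eth0 | grep ^driver    # should report "driver: ena"

$ # From the AWS CLI: set the ENA support attribute on the (stopped) instance
$ aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --ena-support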
Storage

For EDA workloads running at scale on any infrastructure, storage can quickly become the bottleneck for pushing jobs through the queue. Traditional centralized filers serving network file systems (NFS) are commonly purchased from hardware vendors at significant cost in support of high EDA throughput. However, these centralized filers can quickly become a bottleneck for EDA, resulting in increased job run times and correspondingly higher EDA license costs. Planned or unexpected increases in EDA data, and the need to access that data across a fast-growing EDA cluster, mean that the filers eventually run out of storage space or become bandwidth constrained by either the network or the storage tier.

EDA applications can take advantage of the wide array of storage options available on AWS, resulting in reduced run times for large batch workloads. Achieving these benefits may require some amount of EDA workflow rearchitecting, but the benefits of making these optimizations can be numerous.

Types of Storage on AWS

Before discussing the different options for deploying EDA storage, it is important to understand the different types of storage services available on AWS.

Amazon EBS

Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. EBS volumes are attached to instances over a high-bandwidth network fabric and appear as local block storage that can be formatted with a file system on the instance itself. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. Amazon EBS volumes offer the consistent, low-latency performance required to run semiconductor workloads.

When selecting your instance type, you should select an instance that is Amazon EBS optimized by default. An Amazon EBS optimized instance provides dedicated throughput to Amazon EBS, which is isolated from any other network traffic, and an optimized configuration stack to provide optimal Amazon EBS I/O performance. If you choose an instance that is not Amazon EBS optimized, you can enable Amazon EBS optimization by using --ebs-optimized with the modify-instance-attribute parameter in the AWS CLI, but additional charges may apply (the cost is included with instances where Amazon EBS optimization is enabled by default).

Amazon EBS is the storage that backs all modern Amazon EC2 instances (with a few exceptions), and it is the foundation for creating high-speed file systems on AWS. With Amazon EBS, it is possible to achieve up to 80,000 IOPS and 1,750 MB/s from a single Amazon EC2 instance. It is important to choose the correct EBS volume types when building your EDA architecture on AWS. Table 2 shows the EBS volume types that you should consider.

Table 2: EBS Volume Types

                      | io1                  | gp2*                | st1                      | sc1
Volume Type           | Provisioned IOPS SSD | General Purpose SSD | Throughput Optimized HDD | Cold HDD
Volume Size           | 4 GB-16 TB           | 1 GB-16 TB          | 500 GB-16 TB             | 500 GB-16 TB
Max IOPS**/Volume     | 32,000               | 10,000              | 500                      | 250
Max Throughput/Volume | 500 MB/s             | 160 MB/s            | 500 MB/s                 | 250 MB/s

*Default volume type
**io1/gp2 based on 16K I/O size; st1/sc1 based on 1 MB I/O size

When choosing your EBS volume types, consider the performance characteristics of each EBS volume. This is particularly important when building an NFS server or another file system solution. Achieving the maximum capable performance of an EBS volume depends on the size of the volume. Additionally, the gp2, st1, and sc1 volume types use a burst credit system, and this should be taken into consideration as well. Each EC2 instance type also has a throughput and IOPS limit; for example, the z1d.12xlarge has EBS limits of 1.75 GB/s and 80,000 IOPS. (For a chart that shows the Amazon EBS throughput expected for each instance type, refer to Instance Types that Support EBS Optimization in the Amazon Elastic Compute Cloud User Guide for Linux Instances.) To achieve these speeds, you must stripe multiple EBS volumes together, as each volume has its own throughput and IOPS limit. Refer to Amazon EBS Volume Types in the Amazon Elastic Compute Cloud User Guide for Linux Instances for detailed information about throughput, IOPS, and burst credits.

Enhancing Scalability with Dynamic EBS Volumes

Semiconductor design has a long history of over-provisioning hardware to meet the demands of backend workloads that may not be run for months or years after the customer specifications are received. On AWS, you provision only the resources you need, when you need them. For the typical on-premises EDA cluster, IT teams are accustomed to purchasing large arrays of network-attached storage, even though their initial needs are relatively small.

A key feature of EBS storage is elastic volumes (available on all current-generation EBS volumes attached to current-generation EC2 instances). This feature allows you to provision a volume that meets your application requirements today and, as your requirements change, allows you to increase the volume size, adjust performance, or change the volume type while the volume is in use. You can continue to use your application while the change takes effect. An on-premises installation normally requires manual intervention to adjust storage configurations. Leveraging EBS elastic volumes and other AWS services, you can automate the process of resizing your EBS volumes. Figure 3 shows the automated process of increasing the volume size using Amazon CloudWatch (a metrics and monitoring service) and AWS Lambda (an event-driven, serverless compute service). The volume increase event is triggered (for example, by a usage threshold) using a CloudWatch alarm and a Lambda function. The resulting increase is automatically detected by the operating system, and a subsequent file system grow operation resizes the file system.

Figure 3: Lifecycle for automatically resizing an EBS volume
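The manual equivalent of this lifecycle is a short sequence of commands. The following is a minimal sketch, assuming an XFS data volume with no partition table; the volume ID, device name, and mount point are hypothetical placeholders:

$ # Grow the volume (for example, to 500 GiB); this can run while the volume is in use
$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 500

$ # On the instance, once the modification is in the optimizing or completed state,
$ # grow the file system to use the new capacity
$ sudo xfs_growfs /data        # for ext4, use: sudo resize2fs /dev/nvme1n1

If the volume is partitioned (for example, a root volume), the partition must first be extended with a tool such as growpart before the file system is grown.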
Instance Store

For use cases where the performance of Amazon EBS is not sufficient on a single instance, Amazon EC2 instances with Instance Store are available. Instance Store is block-level storage that is physically attached to the instance. Because the storage is directly attached to the instance, it can provide significantly higher throughput and IOPS than is available through network-based storage such as Amazon EBS. However, because the storage is locally attached, data on the Instance Store does not persist when you stop or terminate the instance. Additionally, hardware failures on the instance would likely result in data loss. For these reasons, Instance Store is recommended for temporary scratch space or for data that is replicated off of the instance (for example, to Amazon S3). You can increase durability by choosing an instance with multiple NVMe devices and creating a RAID set with one or more parity devices.

The I3 instance family and the recently announced Z1d, C5d, M5d, and R5d instances are well suited for EDA workloads requiring a significant amount of fast local storage, such as scratch data. These instances use NVMe-based storage devices and are designed for the highest possible IOPS. The Z1d and C5d instances each have up to 1.8 TiB of local instance store, and the R5d and M5d instances each have up to 3.6 TiB of local instance store. The i3.16xlarge can deliver 3.3 million random IOPS at a 4 KB block size and up to 16 GB/s of sequential disk throughput. This performance makes the i3.16xlarge well suited for serving file systems for scratch or temporary data over NFS. Table 3 shows the instance types typically found in the semiconductor space that have instance store.

Table 3: Instances typically found in the EDA space with Instance Store

Instance Name | Max Raw Size (TiB) | Number and Size of NVMe SSDs (GiB)
I3            | 15.2               | 8 x 1,900
Z1d           | 1.8                | 2 x 900
R5d           | 3.6                | 4 x 900
M5d           | 3.6                | 4 x 900
C5d           | 1.8                | 2 x 900
X1            | 3.84               | 2 x 1,920
X1e           | 3.84               | 2 x 1,920

The data on NVMe instance storage is encrypted using an XTS-AES-256 block cipher implemented in a hardware module on the instance. The encryption keys are generated using the hardware module and are unique to each NVMe instance storage device. All encryption keys are destroyed when the instance is stopped or terminated and cannot be recovered. You cannot disable this encryption, and you cannot provide your own encryption key.
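Local NVMe devices are presented to the OS as individual block devices, so a common pattern is to combine them into a single scratch file system with Linux software RAID. The following is a minimal sketch for a two-device instance such as a z1d.12xlarge; the device names and mount point are hypothetical, and the RAID 0 shown here trades durability for speed (use a parity level if you need protection from a device failure):

$ # Stripe the two local NVMe devices into one array
$ sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1

$ # Build a file system and mount it as scratch space
$ sudo mkfs.xfs /dev/md0
$ sudo mkdir -p /scratch
$ sudo mount /dev/md0 /scratch

Because instance store is ephemeral, anything on /scratch that must survive a stop or a failure should be copied off the instance, for example to Amazon S3.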
NVMe on EC2 Instances

Amazon EC2 instances based on the Nitro hypervisor feature local NVMe SSD storage, and also expose Amazon Elastic Block Store (Amazon EBS) volumes as NVMe block devices. This is why certain operating system levels are required for Nitro-based instances; in other words, only an AMI that has the required NVMe drivers installed allows you to launch a Nitro-based instance. See AMI and Operating System for instructions on verifying that your AMI will run on a Nitro-based instance. If you use EBS volumes on Nitro-based instances, configure two kernel settings to ensure optimal performance. Refer to the NVMe EBS Volumes page of the Amazon Elastic Compute Cloud User Guide for Linux Instances for more information.

Amazon Elastic File System (Amazon EFS)

You can opt to build your own NFS file server on AWS (discussed in the Traditional NFS File Systems section), or you can launch a shared NFS file system using Amazon Elastic File System (Amazon EFS). Amazon EFS provides simple, scalable, NFS-based file storage for use with Amazon EC2 instances in the AWS Cloud. A fully managed, petabyte-scale file system, Amazon EFS provides a simple interface that enables you to create and configure file systems quickly and easily. With Amazon EFS, storage capacity is elastic, increasing and decreasing automatically as you add and remove files, so your applications have the storage they need, when they need it.

Amazon EFS is designed for high availability and durability, and it can deliver high throughput when deployed at scale. The data stored on an EFS file system is redundantly stored across multiple Availability Zones. In addition, an EFS file system can be accessed concurrently from all Availability Zones in the Region where it is located. However, because all Availability Zones must acknowledge file system actions (that is, create, read, update, or delete), latency can be higher than traditional shared file systems that do not span multiple Availability Zones. Because of this, it is important to test your workloads at scale to ensure that EFS meets your performance requirements.

Amazon S3

Amazon Simple Storage Service (Amazon S3) is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web. It is designed to deliver 99.999999999% durability, and to scale to handle millions of concurrent requests and grow past trillions of objects worldwide. Amazon S3 offers the following range of storage classes:

• Amazon S3 Standard for general-purpose storage of frequently accessed data
• Amazon S3 Standard - Infrequent Access (IA) for long-lived but less frequently accessed data
• Amazon Glacier for long-term data archival

Amazon S3 also offers configurable lifecycle policies for managing your objects so that they are stored cost-effectively throughout their lifecycle. Amazon S3 is accessed via HTTP REST requests, typically through the AWS software development kits (SDKs) or the AWS Command Line Interface (AWS CLI). You can use the AWS CLI to copy data to and from Amazon S3 in the same way that you copy data to other remote file systems, using ls, cp, rm, and sync command line operations.

For EDA workflows, we recommend that you consider Amazon S3 as your primary data storage solution to manage data uploads and downloads and to provide high data durability. For example, you can quickly and efficiently copy data from Amazon S3 to Amazon EC2 instances and Amazon EBS storage to populate a high-performance shared file system prior to launching a large batch regression test or timing analysis.
However, we recommend that you do not use Amazon S3 to directly access (read/write) individual files during the runtime of a performance-critical application. The best architectures for high-performance, data-intensive computing on AWS combine Amazon S3, Amazon EC2, Amazon EBS, and Amazon EFS to balance performance, durability, scalability, and cost for each specific application.

Traditional NFS File Systems

For EDA workflow migration, the first and most popular option for migrating storage to AWS is to build systems similar to your on-premises environment. This option enables you to migrate applications quickly, without having to rearchitect your applications or workflow. With AWS, it's simple to create a storage server by launching an Amazon EC2 instance with adequate network bandwidth and Amazon EBS throughput, attaching the appropriate EBS volumes, and sharing the file system to your compute nodes using NFS.

When building storage systems for the immense scale that EDA can require for large-scale regression and verification tests, there are a number of approaches you can take to ensure your storage systems are able to handle the throughput. The largest Amazon EC2 instances support 25 Gbps of network bandwidth and up to 80,000 IOPS and 1,750 MB/s to Amazon EBS. If the data is temporary or scratch data, you can use an instance with NVMe volumes to optimize the backend storage. For example, you can use the i3.16xlarge, with 8 NVMe volumes, which is capable of up to 16 GB/s and 3 million IOPS for local access. The 25 Gbps network connection to the i3.16xlarge then becomes the bottleneck, not the backend storage. This setup results in an NFS server that is capable of roughly 2.5 GB/s.

For EDA workloads that require more performance in aggregate than can be provided by a single instance, you can build multiple NFS servers that are delegated to specific mount points. Typically, this means that you build servers for shared scratch, tools directories, and individual projects. By building servers in this way, you can right-size the server and the storage allocated to it according to the demands of a specific workload. When projects are finished, you can archive the data to a low-cost, long-term storage solution like Amazon Glacier, and then delete the storage server, thereby saving additional cost.

When building the storage servers, you have many options. Linux software RAID (mdadm) is often a popular choice for its ubiquity and stability. However, in recent years ZFS on Linux has grown in popularity, and customers in the EDA space use it for the data protection and expansion features that it provides. If you use ZFS, it's relatively simple to build a solution that pools a group of EBS volumes together to ensure higher performance of the volume, sets up automatic hourly snapshots to provide for point-in-time rollbacks, and replicates data to backup servers in other Availability Zones to provide for fault tolerance.

Although out of the scope of this document, if you want more automated and managed solutions, consider AWS partner storage solutions. Examples of partners that provide solutions for running high-performance storage on AWS include SoftNAS, WekaIO, and NetApp.
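As a simple illustration of the self-managed approach, the following sketch exports a directory from an NFS server built on an EC2 instance and mounts it from a compute node. The export path, client subnet, and server hostname are hypothetical, the NFS server packages are assumed to be installed and running, and the export options should be tuned to your own durability and security requirements:

$ # On the NFS server: export the project file system to the compute subnet
$ echo '/eda/projects 10.0.0.0/16(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
$ sudo exportfs -ra
$ sudo showmount -e localhost

$ # On a compute node: mount the share
$ sudo mount -t nfs nfs-server.example.internal:/eda/projects /eda/projects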
Cloud Native Storage Approaches

Because of its low cost and strong scaling behavior, Amazon S3 is well suited for EDA workflows, because you can adapt the workflows to reduce or eliminate the need for traditional shared storage systems. Cloud-optimized EDA workflows use a combination of Amazon EBS storage and Amazon S3 to achieve extreme scalability at very low cost, without being bottlenecked by traditional storage systems.

To take advantage of a solution like this, your EDA organization and your supporting IT teams might need to untangle many years of legacy tools, file system sprawl, and large numbers of symbolic links in order to understand what data you need for specific projects (or a specific job deck), and prepackage the data along with the job that requires it. The typical first step in this approach is to separate the static data (for example, application binaries, compilers, and so on) from dynamically changing data and IP, in order to build a front-end workflow that doesn't require any shared file systems. This is an important step for optimized cloud migration, and it also provides the benefit of increasing the scalability and reliability of legacy on-premises EDA workflows.

By using this less NFS-centric approach to managing EDA storage, operating system images can be regularly updated with static assets so that they're available when the instance is launched. Then, when a job is dispatched to the instance, it can be configured to first download the dynamic data from Amazon S3 to local or Amazon EBS storage before launching the application. When the job is complete, results are uploaded back to Amazon S3, to be aggregated and processed when all jobs are finished. This method of decoupling compute from storage can provide substantial performance and reliability benefits, in particular for front-end RTL batch regressions.
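The per-job logic described above can be expressed as a small wrapper script that the job scheduler runs on each instance. The following is a minimal sketch; the bucket, paths, and simulator command are hypothetical placeholders for your own tools and data layout:

#!/bin/bash
# Hypothetical job wrapper: stage inputs from S3, run the tool, push results back
set -euo pipefail
JOB_ID="$1"                                  # for example, passed in by the scheduler
WORK_DIR="/scratch/${JOB_ID}"
BUCKET="s3://example-eda-jobs"

mkdir -p "${WORK_DIR}"
cd "${WORK_DIR}"
aws s3 sync "${BUCKET}/${JOB_ID}/input" input/        # download this job's dynamic data

run_simulation -f input/job.cfg > run.log 2>&1        # placeholder EDA tool invocation

aws s3 sync . "${BUCKET}/${JOB_ID}/output" --exclude "input/*"   # upload results and logs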
consistent latency Improving License Server Reliability License servers are critical components in almost any EDA computing infrastructure A loss of license services can bring engineering work to a halt across the enterprise Hosting licenses in the AWS Cloud can provide improved reliability of license services with the use of a floating elastic network interface (ENI) These ENIs have a fixed immutable MAC address that can be associated with software license keys The implementation of this high availability solution begins with the creation of an ENI that is attached to a license server instance Your license keys are associated with this network interface If a failure is detected on this instance you or your custom automation can detach the ENI and attach it to a standby license server Because the ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 25 ENI maintains its IP and MAC address es network traffic begins flowing to the standby instance as soon as you attach the network interface to the replacement instance This unique capability enables license administrators to provide a level of reliability that can be difficult to achieve using on premises servers in a traditional datacenter This is another exampl e of the benefits of the elastic and programmable nature of the cloud Working with EDA Vendors AWS works closely with thousands of independent software vendors (ISVs) that deliver solutions to customers on AWS using methods that may include software as a service (SaaS ) platform as a service ( PaaS ) customer self managed and bring your own license (BYOL ) models In the semiconductor sector AWS works closely with major vendors of EDA software to help optimize performance scalability cost and applicatio n security AWS can assist ISVs and your organization with deployment best practices as described in this whitepaper EDA vendors that are members of the AWS Partner Network (APN) have access to a variety of tools training and support that are provided directly to the EDA vendor which benefits EDA end customers These Partner Programs are designed to s upport the unique technical and business requirements of APN members by providing them with increased support from AWS including access to AWS partner team members who specialize in design and engineering applications In addition AWS has a growing number of Consulting P artners who can assist EDA vendors and their customers with EDA cloud migration Remote Desktops While the majority of EDA workloads are executed as batch jobs (see Orchestration ) EDA users may at times require direct console access to compute servers or use applications that are graphical in nature For example it might be necessary to view waveforms or step through a simulation to identify and reso lve RTL regression errors o r it might be necessary to view a 2D or 3D graphical representation of results generated during signal integrity analysis Some applications such as printed circuit layout software are inherently interactive and require a high quality low latency user experience There are multiple ways to deploy remote desktops for such applications on AWS You have the option of using open source software such as V irtual Network Computing (VNC) or commercial remote desktop solutions available from AWS partners You can ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 26 also make use of AWS solutions including NICE desktop cloud visualization ( NICE DCV ) and Amazon Work Spaces NICE DCV NICE Desktop Cloud Visualization is a remote 
Working with EDA Vendors

AWS works closely with thousands of independent software vendors (ISVs) that deliver solutions to customers on AWS using methods that may include software as a service (SaaS), platform as a service (PaaS), customer self-managed, and bring-your-own-license (BYOL) models. In the semiconductor sector, AWS works closely with major vendors of EDA software to help optimize performance, scalability, cost, and application security. AWS can assist ISVs and your organization with deployment best practices, as described in this whitepaper.

EDA vendors that are members of the AWS Partner Network (APN) have access to a variety of tools, training, and support that are provided directly to the EDA vendor, which benefits EDA end customers. These Partner Programs are designed to support the unique technical and business requirements of APN members by providing them with increased support from AWS, including access to AWS partner team members who specialize in design and engineering applications. In addition, AWS has a growing number of Consulting Partners who can assist EDA vendors and their customers with EDA cloud migration.

Remote Desktops

While the majority of EDA workloads are executed as batch jobs (see Orchestration), EDA users may at times require direct console access to compute servers, or use applications that are graphical in nature. For example, it might be necessary to view waveforms or step through a simulation to identify and resolve RTL regression errors, or it might be necessary to view a 2D or 3D graphical representation of results generated during signal integrity analysis. Some applications, such as printed circuit layout software, are inherently interactive and require a high-quality, low-latency user experience.

There are multiple ways to deploy remote desktops for such applications on AWS. You have the option of using open source software, such as Virtual Network Computing (VNC), or commercial remote desktop solutions available from AWS partners. You can also make use of AWS solutions, including NICE desktop cloud visualization (NICE DCV) and Amazon WorkSpaces.

NICE DCV

NICE Desktop Cloud Visualization is a remote visualization technology that enables users to securely connect to graphics-intensive 3D applications hosted on an Amazon EC2 instance. With NICE DCV, you can provide high-performance graphics processing to remote users by creating secure client sessions. This enables your interactive EDA users to run resource-intensive applications from relatively low-end client computers by using one or more EC2 instances as remote desktop servers, including GPU acceleration of graphics rendered in the cloud.

In a typical NICE DCV scenario for EDA, a graphics-intensive application, such as a 3D visualization of an electromagnetic field simulation or a complex interactive schematic capture session, is hosted on a high-performance EC2 instance that provides a high-end GPU, fast I/O capabilities, and large amounts of memory. The NICE DCV server software is installed and configured on a server (an EC2 instance), and it is used to create a secure session. You use a NICE DCV client to remotely connect to the session and use the application hosted on the server. The server uses its hardware to perform the high-performance processing required by the hosted application. The NICE DCV server software compresses the visual output of the hosted application and streams it back to you as an encrypted pixel stream. Your NICE DCV client receives the compressed pixel stream, decrypts it, and then outputs it to your local display.

NICE DCV was specifically designed for high-performance technical applications, and it is an excellent choice for EDA, in particular if you are using Red Hat Enterprise Linux or CentOS operating systems for your remote desktop environment. NICE DCV also supports modern Linux desktop environments, such as Gnome 3 on RHEL 7. NICE DCV uses the latest NVIDIA GRID SDK technologies, such as NVIDIA H.264 hardware encoding, to improve performance and reduce system load. NICE DCV also supports lossless-quality video compression when the network and processor conditions allow, and it automatically adapts the video compression level based on the network's available bandwidth and latency.

Amazon WorkSpaces

Amazon WorkSpaces is a managed, secure cloud desktop service. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes, and quickly scale to provide thousands of desktops to workers across the globe. You can pay either monthly or hourly, just for the WorkSpaces you launch, which helps you save money when compared to traditional desktops and on-premises virtual desktop infrastructure (VDI) solutions.

Amazon WorkSpaces helps you eliminate the complexity of managing hardware inventory, OS versions and patches, and VDI, which helps simplify your desktop delivery strategy. With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device. Amazon WorkSpaces offers a range of CPU, memory, and solid-state storage bundle configurations that can be dynamically modified, so you have the right resources for your applications. You don't have to waste time trying to predict how many desktops you need or what configuration those desktops should be, helping you reduce costs and eliminate the need to over-buy hardware. Amazon WorkSpaces is an excellent choice for organizations that want to centrally manage remote desktop users and applications, and for users that can make use of Windows or Amazon Linux 2 for the remote desktop environment.
User Authentication

User authentication is covered in more detail in the Security and Governance in the AWS Cloud section, but AWS offers several options for connecting with an on-premises authentication server, migrating users to AWS, or architecting an entirely new authentication solution.

Orchestration

Orchestration refers to the dynamic management of compute and storage resources in an EDA cluster, as well as the management (scheduling and monitoring) of individual jobs being processed in a complex workflow, for example during RTL regression testing or IP characterization. For these and many other typical EDA workflows, the efficient use of compute and storage resources, as well as the efficient use of EDA software licenses, depends on having a well-orchestrated, well-architected batch computing environment.

EDA workload management gains new levels of flexibility in the cloud, making resource and job orchestration an important consideration for your workload. AWS provides a range of solutions for workload orchestration: fully managed services enable you to focus more on job requests and output than on provisioning, configuring, and optimizing the cluster and job scheduler, while self-managed solutions enable you to configure and maintain cloud-native clusters yourself, leveraging traditional job schedulers on AWS or in hybrid scenarios. Describing all possible methods of orchestration for EDA is beyond the scope of this document; however, it is important to know that the same orchestration methods and job scheduling software used in typical legacy EDA environments can also be used on AWS. For example, commercial and open source job scheduling software can be migrated to AWS and enhanced by the addition of Auto Scaling (for dynamic resizing of EDA clusters in response to demand or other triggers), CloudWatch (for monitoring the compute environment, for example CPU utilization and server health), and other AWS services to increase performance and security while reducing costs.

CfnCluster

CfnCluster (CloudFormation cluster) is a framework that deploys and maintains high performance computing clusters on Amazon Web Services (AWS). Developed by AWS, CfnCluster facilitates both quick-start proofs of concept (POCs) and production deployments. CfnCluster supports many different types of clustered applications, including EDA, and can easily be extended to support different frameworks. CfnCluster integrates easily with existing job scheduling software and can automatically launch servers in response to queue depths and other triggers. CfnCluster is also able to launch shared file systems, cluster head nodes, license servers, and other resources. CfnCluster is open source and easily extensible for your unique workflow requirements.

AWS Batch

AWS Batch is a fully managed service that enables you to easily run large-scale compute workloads on the cloud, including EDA jobs, without having to worry about resource provisioning or managing schedulers. You can interact with AWS Batch via the web console, the AWS CLI, or SDKs. AWS Batch is an excellent alternative for managing massively parallel workloads.

EnginFrame

EnginFrame is an HPC portal that can be deployed on the cloud or on premises. EnginFrame is integrated with a wide range of open source and commercial batch scheduling systems, and it is a one-stop shop for job submission, control, and data management.

All of the preceding options (CfnCluster, AWS Batch, and EnginFrame), as well as partner-provided solutions, are being successfully deployed by EDA users on AWS. Discuss your specific orchestration needs with an AWS technical specialist.
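As a simple illustration of the fully managed approach, a single containerized simulation job could be submitted to AWS Batch from the CLI. This is a minimal sketch that assumes a compute environment, job queue, and job definition have already been created; all names and IDs are hypothetical:

$ aws batch submit-job \
    --job-name rtl-regression-0001 \
    --job-queue eda-batch-queue \
    --job-definition rtl-sim-job:1 \
    --container-overrides '{"command":["run_regression.sh","block_a"]}'

$ # Check the status of the submitted job, using the job ID returned by submit-job
$ aws batch describe-jobs --jobs 01234567-89ab-cdef-0123-456789abcdef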
Optimizing EDA Tools on AWS

EDA software tools are critical for modern semiconductor design and verification. Increasing the performance of EDA software, measured both as a function of individual job run times and as the completion time for a complete set of EDA jobs, is important to reduce time-to-results/time-to-tapeout and to optimize EDA license costs.

To this point, we have covered the solution components for your architecture on AWS. Now, in an effort to be more prescriptive, we present specific recommendations and configuration parameters that should help you achieve the expected performance for your EDA tools. Choosing the right Amazon EC2 instance type and the right OS-level optimizations is critical for EDA tools to perform well. This section provides a set of recommendations that are based on actual daily use of EDA software tools on AWS, by AWS customers and by Amazon internal silicon design teams. The recommendations include such factors as instance type and configuration, as well as OS recommendations and other tunings for a representative set of EDA tools. These recommendations have been tested and validated internally at AWS, and with EDA customers and vendors.

Amazon EC2 Instance Types

The following table highlights EDA tools and provides corresponding Amazon EC2 instance type recommendations.

Table 4: EDA tools and corresponding instance types

Instance Name | Max Core Count* | CPU Clock Frequency    | Max Total RAM in GiB (GiB/core) | Local NVMe | Typical EDA Application
Z1d           | 24              | 4.0 GHz                | 384 (16)                        | Yes        | Formal verification; RTL simulation (batch and interactive); gate-level simulation
R5 / R5d      | 48              | Up to 3.1 GHz          | 768 (16)                        | Yes (R5d)  | RTL simulation (multithreaded)
R4            | 32              | 2.3 GHz                | 488 (15.25)                     | No         | RTL simulation (multithreaded); place and route
M5 / M5d      | 48              | Up to 3.1 GHz          | 384 (8)                         | Yes (M5d)  | Remote desktop sessions
C5 / C5d      | 36              | Up to 3.5 GHz          | 144 (4)                         | Yes (C5d)  | RTL simulation (interactive); gate-level simulation
X1            | 64              | 2.3 GHz                | 1,952 (30.5)                    | Yes        | Place and route; static timing analysis
X1e           | 64              | 2.3 GHz                | 3,904 (61)                      | Yes        | Place and route; static timing analysis
C4            | 18              | 2.9 GHz (boost to 3.5) | 60 (3.33)                       | No         | Formal verification; RTL simulation (interactive)

*NOTE: AWS uses vCPUs (each of which is an Intel Hyper-Thread) to denote processors; for this table, we are using cores.

Operating System Optimization

After you have chosen the instance types for your EDA tools, you need to customize and optimize your OS to maximize performance.

Use a Current Generation Operating System

If you are running a Nitro-based instance, you need to use certain operating system levels. If you run a Xen-based instance instead, you should still use one of these OS levels for EDA workloads (specifically, the levels required for the ENA and NVMe drivers):

• Amazon Linux or Amazon Linux 2
• CentOS 7.4 or 7.5
• Red Hat Enterprise Linux 7.4 or 7.5

Disable Hyper-Threading

On current-generation Amazon EC2 instance families (other than the T2 instance family), AWS instances have Intel Hyper-Threading Technology (HT Technology) enabled by default. You can disable HT Technology if you determine that it has a negative impact on your application's performance.

You can run this command to get detailed information about each core (physical core and Hyper-Thread):

$ cat /proc/cpuinfo

To view cores and the corresponding online Hyper-Threads, use the lscpu --extended command.
For example, consider the z1d.2xlarge, which has 4 cores with 8 total Hyper-Threads. If you run the lscpu --extended command before and after disabling Hyper-Threading, you can see which threads are online and offline:

$ lscpu --extended
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
0   0    0      0    0:0:0:0       yes
1   0    0      1    1:1:1:0       yes
2   0    0      2    2:2:2:0       yes
3   0    0      3    3:3:3:0       yes
4   0    0      0    0:0:0:0       yes
5   0    0      1    1:1:1:0       yes
6   0    0      2    2:2:2:0       yes
7   0    0      3    3:3:3:0       yes

$ ./disable_ht.sh

$ lscpu --extended
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
0   0    0      0    0:0:0:0       yes
1   0    0      1    1:1:1:0       yes
2   0    0      2    2:2:2:0       yes
3   0    0      3    3:3:3:0       yes
4   -    -      -    :::           no
5   -    -      -    :::           no
6   -    -      -    :::           no
7   -    -      -    :::           no

Another way to view the vCPU pairs (that is, Hyper-Threads) of each core is to view the thread_siblings_list for each core. This list shows two numbers that indicate the Hyper-Threads for each core. To view all thread siblings, you can use the following command, or substitute "*" with a CPU number:

$ cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -un
0,4
1,5
2,6
3,7

Disable HT Using the AWS Feature CPU Options

To disable Hyper-Threading using CPU Options, use the AWS CLI with run-instances and the --cpu-options flag. The following is an example with the z1d.12xlarge:

$ aws ec2 run-instances --image-id ami-asdfasdfasdfasdf \
    --instance-type z1d.12xlarge --cpu-options \
    "CoreCount=24,ThreadsPerCore=1" --key-name My_Key_Name

To verify that the CpuOptions were set, use describe-instances:

$ aws ec2 describe-instances --instance-ids i-1234qwer1234qwer
    "CpuOptions": {
        "CoreCount": 24,
        "ThreadsPerCore": 1
    }

Disable HT on a Running System

You can run the following script on a Linux instance to disable HT Technology while the system is running. It can be set up to run from an init script so that it applies to any instance when you launch the instance:

for cpunum in $(cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | \
    sort -un | cut -s -d, -f2)
do
    echo 0 | sudo tee /sys/devices/system/cpu/cpu${cpunum}/online
done

Disable HT Using the Boot File

You can also disable HT Technology by setting the Linux kernel to initialize only the first set of threads, by setting maxcpus in GRUB to half of the vCPU count of the instance. For example, the maxcpus value for a z1d.12xlarge instance is 24 to disable Hyper-Threading:

GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 maxcpus=24"

Refer to Appendix C – Updating the Linux Kernel Command Line for instructions on updating the kernel command line. Note that disabling HT Technology does not change the workload density per server, because these tools are demanding on DRAM size, and reducing the number of threads only helps as the GiB/core ratio increases.

Change Clocksource to TSC

On previous-generation instances that are using the Xen hypervisor, consider updating the clocksource to TSC, as the default is the Xen pvclock (which is in the hypervisor). To avoid communication with the hypervisor and use the CPU clock instead, use tsc as the clocksource. The tsc clocksource is not supported on Nitro instances; the default kvm-clock clocksource on these instance types provides similar performance benefits to tsc on previous-generation Xen-based instances.

To change the clocksource on a Xen-based instance, run the following command:

$ sudo su -c "echo tsc > /sys/devices/system/cl*/cl*/current_clocksource"
To verify that the clocksource is set to tsc, run the following command:

$ cat /sys/devices/system/cl*/cl*/current_clocksource
tsc

You can set the clocksource in the initialization scripts on the instance. You can also verify that the clocksource changed with the dmesg command, as shown below:

$ dmesg | grep clocksource
clocksource: Switched to clocksource tsc

Limiting Deeper C-states (Sleep State)

C-states control the sleep levels that a core may enter when it is inactive. You may want to control C-states to tune your system for latency versus performance. Putting cores to sleep takes time, and although a sleeping core allows more headroom for another core to boost to a higher frequency, it takes time for that sleeping core to wake back up and perform work. To limit the C-state depth, add intel_idle.max_cstate=1 to the kernel command line:

GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 intel_idle.max_cstate=1"

Refer to Appendix C – Updating the Linux Kernel Command Line for instructions on updating the kernel command line. For more information about Amazon EC2 instance processor states, refer to the Processor State Control for Your EC2 Instance page in the Amazon Elastic Compute Cloud User Guide for Linux Instances.

Enable Turbo Mode (Processor State) on Xen-Based Instances

For our current Nitro-based instance types, you cannot change turbo mode, as it is already set to the optimized value for each instance. If you are running on a Xen-based instance that is using an entire socket or multiple sockets (for example, r4.16xlarge, r4.8xlarge, or c4.8xlarge), you can take advantage of the turbo frequency boost, especially if you have disabled HT Technology. Amazon Linux and Amazon Linux 2 have turbo mode enabled by default, but other distributions may not. To ensure that turbo mode is enabled, run the following command:

$ sudo su -c "echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo"

For more information about Amazon EC2 instance processor states, refer to the Processor State Control for Your EC2 Instance page in the Amazon Elastic Compute Cloud User Guide for Linux Instances.

Change to Optimal Spinlock Setting on Xen-Based Instances

For instances that are using the Xen hypervisor (not Nitro), you should update the spinlock setting. Amazon Linux, Amazon Linux 2, and other distributions by default implement a paravirtualized mode of spinlock that is optimized for low-cost preempting of virtual machines (VMs). This can be expensive from a performance perspective, because it causes the VM to slow down when running multithreaded with locks. Some EDA tools are not optimized for multi-core and consequently rely heavily on spinlocks. Accordingly, we recommend that EDA customers disable paravirtualized spinlock on EC2 instances.

To disable the paravirtualized mode of spinlock on a Xen-based instance, add xen_nopvspin=1 to the kernel command line in /boot/grub/grub.conf and restart. The following is an example kernel command line:

kernel /boot/vmlinuz-4.4.41-36.55.amzn1.x86_64 root=LABEL=/ console=tty1 console=ttyS0 selinux=0 xen_nopvspin=1

Refer to Appendix C – Updating the Linux Kernel Command Line for instructions on updating the kernel command line.

Networking

AWS Enhanced Networking

Make sure to use enhanced networking for all instances; it is a requirement for launching our current Nitro-based instances. For more information about enhanced networking, including build and install instructions, refer to the Enhanced Networking on Linux page in the Amazon Elastic Compute Cloud User Guide for Linux Instances.
Change to Optimal Spinlock Setting on Xen-Based Instances
For instances that use the Xen hypervisor (not Nitro), you should update the spinlock setting. Amazon Linux, Amazon Linux 2, and other distributions by default implement a paravirtualized mode of spinlock that is optimized for low-cost preempting virtual machines (VMs). This can be expensive from a performance perspective because it causes the VM to slow down when running multithreaded with locks. Some EDA tools are not optimized for multi-core and consequently rely heavily on spinlocks. Accordingly, we recommend that EDA customers disable paravirtualized spinlock on EC2 instances. To disable the paravirtualized mode of spinlock on a Xen-based instance, add xen_nopvspin=1 to the kernel command line in /boot/grub/grub.conf and restart. The following is an example kernel command:
kernel /boot/vmlinuz-4.4.41-36.55.amzn1.x86_64 root=LABEL=/ console=tty1 console=ttyS0 selinux=0 xen_nopvspin=1
Refer to Appendix C – Updating the Linux Kernel Command Line for instructions on updating the kernel command line.
Networking
AWS Enhanced Networking
Make sure to use enhanced networking for all instances; it is a requirement for launching our current Nitro-based instances. For more information about enhanced networking, including build and install instructions, refer to the Enhanced Networking on Linux page in the Amazon Elastic Compute Cloud User Guide for Linux Instances.
Cluster Placement Groups
A cluster placement group is a logical grouping of instances within a single Availability Zone. Cluster placement groups provide non-blocking, non-oversubscribed, fully bisectional connectivity. In other words, all instances within the placement group can communicate with all other nodes within the placement group at the full line rate of 10 Gbps per flow and 25 Gbps aggregate, without any slowing due to oversubscription. For more information about placement groups, refer to the Placement Groups page in the Amazon Elastic Compute Cloud User Guide for Linux Instances.
Verify Network Bandwidth
One method to ensure you are configuring ENA correctly is to benchmark the instance-to-instance network performance with iperf3. Refer to Network Throughput Benchmark Linux EC2 for more information.
Storage
Amazon EBS Optimization
Make sure to choose your instance and EBS volumes to suit the storage requirements of your workloads. Each EC2 instance type has an associated EBS throughput limit, and each EBS volume type has limits as well. For example, an io1 volume attached to an m4.16xlarge instance has a maximum throughput of 500 MB/s.
NFS Configuration and Optimization
Prior to setting up an NFS server on AWS, you need to enable Amazon EC2 enhanced networking. We recommend using Amazon Linux 2 for your NFS server AMI. A crucial part of high-performing NFS is the set of mount parameters on the client. For example:
rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2
A typical EFS mount command is shown in the following example:
$ sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    file-system-id.efs.aws-region.amazonaws.com:/ /efs-mount-point
When building an NFS server on AWS, choose the correct instance size and number of EBS volumes. Within a single family, larger instances typically have more network and Amazon EBS bandwidth available to them. The largest NFS servers on AWS are often built using m4.16xlarge instances with multiple EBS volumes striped together in order to achieve the best possible performance. Refer to Appendix A – Optimizing Storage for more information and diagrams for building an NFS server on AWS.
Kernel Virtual Memory
Typical operating system distributions are not tuned for large machines like those offered by AWS for EDA workloads. As a result, out-of-the-box configurations often have suboptimal settings for kernel network buffers and storage page cache background draining. While the specific numbers may vary by instance size and application run, the AWS EDA team has found that the following kernel configuration settings and values are a good starting point to optimize memory utilization of the instances:
vm.min_free_kbytes=1048576
vm.dirty_background_bytes=107374182
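The following commands are one way to apply and persist the starting-point values above. They are illustrative only (the file name under /etc/sysctl.d/ is arbitrary), and you should validate the values against your own instance sizes and workloads:
# Apply the settings to the running kernel
$ sudo sysctl -w vm.min_free_kbytes=1048576
$ sudo sysctl -w vm.dirty_background_bytes=107374182
# Persist them across reboots
$ printf "vm.min_free_kbytes=1048576\nvm.dirty_background_bytes=107374182\n" | sudo tee /etc/sysctl.d/99-eda-tuning.conf
$ sudo sysctl --system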
Security and Governance in the AWS Cloud
The cloud offers a wide array of tools and configurations that enable your organization to protect your data and IP in ways that are difficult to achieve with traditional on-premises environments. This section outlines some of the ways you can protect data in the AWS Cloud.
Isolated Environments for Data Protection and Sovereignty
Security groups are similar to firewalls; they ensure that access to specific resources is tightly controlled. Subnets containing compute and storage resources can be isolated so that they do not have any direct access to the internet. Users who need to access the environment must first connect to the bastion node (also referred to as a jump box) through secure protocols like SSH. From there, they can log in to interactive desktops or job schedulers as permitted through your organization's security policies.
Often, secure FTP is required in isolated environments. Organizations commonly use secure FTP to download tools from vendors, copy completed designs to fabrication facilities, or update IP from suppliers. To do this securely, you can set up an FTP client in an isolated subnet that has limited access to external IP addresses as necessary. Segment this client from the rest of the network, and configure strict controls and monitoring to ensure that everything on that server is secure.
User Authentication
When managing users and access to compute nodes, you can adapt the technologies that you use today to work in the same way on AWS. Many organizations already have existing LDAP, Microsoft Active Directory, or NIS services that they use for authentication. Almost all of these services provide replication and functionality to support multiple data centers. With the appropriate network and VPN setup in place, you can manage these systems on AWS using the same methods and configurations as you do for any remote data center.
If your organization wants to run an isolated directory in the cloud, you have a number of options to choose from. If you want to use a managed solution, AWS Directory Service for Microsoft Active Directory (Standard) is a popular choice.2 AWS Microsoft AD (Standard Edition) is a managed Microsoft Active Directory (AD) that is optimized for small and midsize businesses (SMBs). Other options include running your own LDAP or NIS infrastructure on AWS, as well as more current solutions like FreeIPA.
Network
AWS employs a number of technologies that allow you to isolate components from each other and control access to the network.
Amazon VPC
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications.
You can easily customize the network configuration for your Amazon VPC. For example, you can create a public-facing subnet for your FTP and bastion servers that has access to the internet. Then you can place your design and engineering systems in a private subnet with no internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to EC2 instances in each subnet. Additionally, you can create a hardware virtual private network (VPN) connection between your corporate data center and your VPC, and leverage the AWS Cloud as an extension of your organization's data center.
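As a sketch of the subnet layout described above (the CIDR ranges and VPC ID are placeholders, and production environments are normally built with infrastructure as code rather than ad hoc CLI calls), the public and private subnets could be carved out as follows:
# VPC for the EDA environment
$ aws ec2 create-vpc --cidr-block 10.0.0.0/16
# Public-facing subnet for FTP and bastion hosts
$ aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.0.0/24
# Private subnet for design and engineering systems (no route to an internet gateway)
$ aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.1.0/24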
Security Groups
Amazon VPC provides advanced security features, such as security groups and network access control lists, to enable inbound and outbound filtering at the instance level and subnet level, respectively. A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign the instance to up to five security groups. Network access control lists (ACLs) control inbound and outbound traffic for your subnets. In most cases, security groups can meet your needs. However, you can also use network ACLs if you want an additional layer of security for your VPC. For more information, refer to the Security page in the Amazon Virtual Private Cloud User Guide.
You can create a flow log on your Amazon VPC or subnet to capture the traffic that flows to and from the network interfaces in your VPC or subnet. You can also create a flow log on an individual network interface. Flow logs are published to Amazon CloudWatch Logs.
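For example, a flow log that captures all traffic for a VPC and publishes it to CloudWatch Logs can be created with a single CLI call similar to the following sketch (the VPC ID, log group name, and IAM role are placeholders you would replace with your own):
$ aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0abc1234 \
    --traffic-type ALL \
    --log-group-name eda-vpc-flow-logs \
    --deliver-logs-permission-arn arn:aws:iam::111122223333:role/eda-flow-logs-role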
Data Storage and Transfer
AWS offers many ways to protect data, both in transit and at rest. Many third-party storage vendors also offer additional encryption and security technologies in their own implementations of storage in the AWS Cloud.
AWS Key Management Service (KMS)
AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. In addition, it uses hardware security modules (HSMs) to protect the security of your keys. AWS KMS is integrated with other AWS services, including Amazon EBS, Amazon S3, Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, Amazon Relational Database Service (Amazon RDS), and others, to help you protect the data you store with these services. AWS KMS is also integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs. With AWS KMS, you can create master keys that can never be exported from the service. You use the master keys to encrypt and decrypt data based on policies that you define.
Amazon EBS Encryption
Amazon Elastic Block Store (Amazon EBS) encryption offers you a simple encryption solution for your EBS volumes without requiring you to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
• Data at rest inside the volume
• All data in transit between the volume and the instance
• All snapshots created from the volume
The encryption occurs on the servers that host EC2 instances, providing encryption of data in transit from EC2 instances to Amazon EBS storage.
EC2 Instance Store Encryption
The data on NVMe instance storage is encrypted using an XTS-AES-256 block cipher implemented in a hardware module on the instance. The encryption keys are generated using the hardware module and are unique to each NVMe instance storage device. All encryption keys are destroyed when the instance is stopped or terminated and cannot be recovered. You cannot disable this encryption, and you cannot provide your own encryption key.1
Amazon S3 Encryption
When you use encryption with Amazon S3, Amazon S3 encrypts your data at the object level. Amazon S3 writes the data to disks in AWS data centers and decrypts your data when you access it. As long as you authenticate your request and you have access permissions, there is no difference in how you access encrypted or unencrypted objects. AWS KMS uses customer master keys (CMKs) to encrypt your Amazon S3 objects. You use AWS KMS via the Encryption Keys section in the AWS Identity and Access Management (AWS IAM) console, or via the AWS KMS APIs, to create encryption keys, define the policies that control how keys can be used, and audit key usage to ensure that they are used correctly. You can use these keys to protect your data in Amazon S3 buckets.
Server-side encryption with AWS KMS managed keys (SSE-KMS) provides the following:
• You can choose to create and manage encryption keys yourself, or you can choose to generate a unique default service key on a customer/service/region level.
• The ETag in the response is not the MD5 of the object data.
• The data keys used to encrypt your data are also encrypted and stored alongside the data they protect.
• You can create, rotate, and disable auditable master keys in the IAM console.
• The security controls in AWS KMS can help you meet encryption-related compliance requirements.
If you require server-side encryption for all objects that are stored in your bucket, Amazon S3 supports bucket policies that can be used to enforce encryption of all incoming S3 objects. Because access to Amazon S3 is provided over HTTP endpoints, you can also leverage bucket policies to ensure that all data transfer in and out occurs over a TLS connection, guaranteeing that data is also encrypted in transit.
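As an illustration of that approach (the bucket name is a placeholder, and you may want additional statements, for example one requiring the x-amz-server-side-encryption header on s3:PutObject), a bucket policy that denies any request made without TLS could be applied as follows:
$ cat > enforce-tls.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-eda-bucket",
        "arn:aws:s3:::my-eda-bucket/*"
      ],
      "Condition": { "Bool": { "aws:SecureTransport": "false" } }
    }
  ]
}
EOF
$ aws s3api put-bucket-policy --bucket my-eda-bucket --policy file://enforce-tls.json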
Governance and Monitoring
AWS provides several services that you can use to enforce governance and monitor your AWS Cloud deployment:
AWS Identity and Access Management (IAM) – Enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. For more information, refer to the AWS IAM User Guide.
Amazon CloudWatch – Enables you to monitor your AWS resources in near real time, including EC2 instances, EBS volumes, and S3 buckets. Metrics such as CPU utilization, latency, and request counts are provided automatically for these AWS resources. You can also provide CloudWatch access to your own logs or custom application and system metrics, such as memory usage, transaction volumes, or error rates, and CloudWatch can monitor these too. For more information, refer to the Amazon CloudWatch User Guide.
Amazon CloudWatch Logs – Use to monitor, store, and access your log files from EC2 instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs. You can create alarms in CloudWatch, receive notifications of particular API activity as captured by CloudTrail, and use the notifications to perform troubleshooting. For more information, refer to the Amazon CloudWatch Logs User Guide.
AWS CloudTrail – Enables you to log, continuously monitor, and retain events related to API calls across your AWS infrastructure. CloudTrail provides a history of AWS API calls for your account, including API calls made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. For more information, refer to the AWS CloudTrail User Guide.
Amazon Macie – Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property, and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved. The fully managed service continuously monitors data access activity for anomalies and generates detailed alerts when it detects risk of unauthorized access or inadvertent data leaks.
Amazon GuardDuty – Amazon GuardDuty is a threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. It monitors for activity such as unusual API calls or potentially unauthorized deployments that indicate a possible account compromise. GuardDuty also detects potentially compromised instances or reconnaissance by attackers.
AWS Shield – AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.
AWS Config – Use to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. For more information, refer to the AWS Config Developer Guide.
AWS Organizations – Offers policy-based management for multiple AWS accounts. With Organizations, you can create Service Control Policies (SCPs) that centrally control AWS service use across multiple AWS accounts. Organizations also helps simplify billing for multiple accounts by enabling you to set up a single payment method for all the accounts in your organization through consolidated billing. You can ensure that entities in your accounts can use only the services that meet your corporate security and compliance policy requirements. For more information, refer to the AWS Organizations User Guide.
AWS Service Catalog – AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.
Contributors
The following individuals contributed to this document:
• Mark Duffield, Worldwide Tech Leader, Semiconductors, Amazon Web Services
• David Pellerin, Principal Business Development for Infotech/Semiconductor, Amazon Web Services
• Matt Morris, Senior HPC Solutions Architect, Amazon Web Services
• Nafea Bshara, VP/Distinguished Engineer, Amazon Web Services
Document Revisions
September 2018: 2018 update
October 2017: First publication
Appendix A – Optimizing Storage
There are many storage options on AWS, and some have already been covered at a high level. As semiconductor workloads rely on shared storage, building an NFS server may be the first step to running EDA tools. This section includes two possible NFS architectures that can achieve suitable performance for most workloads.
NFS Storage
[Figure: NFS server for tools and project data. An r4.16xlarge NFS server with a 25 Gbps ENA connection and six EBS Provisioned IOPS volumes (20K IOPS each) in a ZFS RAID6 pool, capable of roughly 1.75 GB/s and 75,000 IOPS, serving NFS clients running EDA tools.]
[Figure: NFS server for temporary/scratch data. An i3.16xlarge NFS server with a 25 Gbps ENA connection and eight NVMe volumes in a RAID0 pool built with mdadm and an EXT4 file system, capable of roughly 2.5 GB/s and more than 100,000 IOPS, serving NFS clients running EDA tools.]
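As a rough sketch of how the scratch-oriented configuration above might be assembled (the device names, export CIDR, and mount options are assumptions that vary by instance type, AMI, and site policy), the striped file system and NFS export could look like the following:
# Stripe the eight NVMe instance-store devices into a single RAID0 array
$ sudo mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/nvme[0-7]n1
# Build an EXT4 file system and mount it as the scratch area
$ sudo mkfs.ext4 /dev/md0
$ sudo mkdir -p /scratch
$ sudo mount /dev/md0 /scratch
# Export it to the compute subnet over NFS
$ echo "/scratch 10.0.0.0/16(rw,async,no_root_squash)" | sudo tee -a /etc/exports
$ sudo exportfs -ra
Remember that instance-store data is lost when the instance stops or terminates, which is why this layout is suited only to temporary or scratch data.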
Appendix B – Reference Architecture
The following diagram represents a common architecture for an elastic EDA computing environment in AWS. This design provides the following key infrastructure components:
• Amazon EC2 Auto Scaling group for elasticity
• AWS Direct Connect for dedicated connectivity to AWS
• Amazon Linux WorkSpaces for remote desktops
• Amazon EC2-based compute, license, and scheduler instances
• Amazon EC2-based NFS servers and Amazon EFS for sharing file systems between compute instances
Figure 5: EDA architecture on AWS
[The diagram shows a corporate data center connected over AWS Direct Connect, and remote users (home office, coffee shop, or customer site) connected over the internet, to an AWS environment containing an EDA Auto Scaling group, a license server, job submission and scheduler instances, remote desktops, NFS file systems (/tools, /project, /scratch), Amazon EFS, an S3 bucket, and Amazon AI services.]
Appendix C – Updating the Linux Kernel Command Line
Update a system with /etc/default/grub file
1. Open the /etc/default/grub file with your editor of choice.
$ sudo vim /etc/default/grub
2. Edit the GRUB_CMDLINE_LINUX_DEFAULT line and make the necessary changes. For example:
GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 intel_idle.max_cstate=1"
3. Save the file and exit your editor.
4. Run the following command to rebuild the boot configuration.
$ grub2-mkconfig -o /boot/grub2/grub.cfg
5. Reboot your instance to enable the new kernel option.
$ sudo reboot
Update a system with /boot/grub/grub.conf file
1. Open the /boot/grub/grub.conf file with your editor of choice.
$ sudo vim /boot/grub/grub.conf
2. Edit the kernel line. For example (some info removed for clarity):
# created by imagebuilder
default=0
timeout=1
hiddenmenu
title Amazon Linux 2014.09 (3.14.26-24.46.amzn1.x86_64)
root (hd0,0)
kernel /boot/vmlinuz-<ver>.amzn1.x86_64 <other_info> intel_idle.max_cstate=1
initrd /boot/initramfs-3.14.26-24.46.amzn1.x86_64.img
3. Save the file and exit your editor.
4. Reboot your instance to enable the new kernel option.
$ sudo reboot
Verify Kernel Line
Verify the setting by running dmesg or checking the /proc/cmdline kernel command line:
$ dmesg | grep "Kernel command line"
[    0.000000] Kernel command line: root=LABEL=/ console=tty1 console=ttyS0 maxcpus=18 xen_nopvspin=1
$ cat /proc/cmdline
root=LABEL=/ console=tty1 console=ttyS0 maxcpus=18 xen_nopvspin=1
Notes
1 https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html
2 http://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_simple_ad.html
|
General
|
consultant
|
Best Practices
|
Optimizing_Enterprise_Economics_with_Serverless_Architectures
|
Optimizing Enterprise Economics with Serverless Architectures
September 2021
This version has been archived. For the latest version of this document, visit:
https://docs.aws.amazon.com/whitepapers/latest/optimizing-enterprise-economics-with-serverless/optimizing-enterprise-economics-with-serverless.html
Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Contents
Introduction
Understanding Serverless Architectures
Is Serverless Always Appropriate?
Serverless Use Cases
AWS Serverless Capabilities
Service Offerings
Developer Support
Security
Partners
Case Studies
Serverless Websites, Web Apps, and Mobile Backends
IoT Backends
Data Processing
Big Data
IT Automation
Machine Learning
Conclusion
Contributors
Further Reading
Reference Architectures
Document Revisions
Abstract
This whitepaper is intended to help Chief Information Officers (CIOs), Chief Technology Officers (CTOs), and senior architects gain insight into serverless architectures and their impact on time to market, team agility, and IT economics. By eliminating idle, underutilized servers at the design level and dramatically simplifying cloud-based software designs, serverless approaches are rapidly changing the IT landscape. This whitepaper covers the basics of serverless approaches and the AWS serverless portfolio. It includes several case studies illustrating how existing companies are gaining significant agility and economic benefits from adopting serverless strategies. In addition, it describes how organizations of all sizes can use serverless architectures to build reactive, event-based systems and quickly deliver cloud-native microservices at a fraction of conventional costs.
Introduction
Many companies are already gaining benefits from running applications in the public cloud, including cost savings from pay-as-you-go billing and improved agility through the use of on-demand
IT resources. Multiple studies across application types and industries have demonstrated that migrating existing application architectures to the cloud lowers the total cost of ownership (TCO) and improves time to market.1
Relative to on-premises and private cloud solutions, the public cloud makes it significantly simpler to build, deploy, and manage fleets of servers and the applications that run on them. The public cloud has established itself as the new normal, with double-digit year-over-year growth since its inception.2 However, companies today have options beyond classic server or virtual machine (VM) based architectures for taking advantage of the public cloud. Although the cloud eliminates the need for companies to purchase and maintain their own hardware, any server-based architecture still requires them to architect for scalability and reliability. In addition, companies need to own the challenges of patching and deploying to those server fleets as their applications evolve. Moreover, they must scale their server fleets to account for peak load and then attempt to scale them down when and where possible to lower costs, all while protecting the experience of end users and the integrity of internal systems.
Idle, underutilized servers prove to be costly and wasteful. Researchers have calculated the average server utilization to be around only 18 percent for enterprises.3 Using serverless services, developers and architects can design and develop complex application architectures while focusing just on business logic, without dealing with the complexity of servers. As a result, product owners can achieve faster time to market with shorter development, deployment, and testing cycles. In addition, the reduction of server management overhead reduces the TCO, which ultimately results in competitive advantages for the companies. With significantly reduced infrastructure costs, more agile and focused teams, and faster time to market, companies that have already adopted serverless approaches are gaining a key advantage over their competitors.
Understanding Serverless Architectures
The advantages of the serverless approaches cited above are appealing, but what are the considerations for practical implementation? What separates a serverless application from its conventional server-based counterpart?
Serverless uses managed services where the cloud provider handles infrastructure management tasks like capacity provisioning and patching. This allows your workforce to focus on business logic that serves your customers while minimizing infrastructure management, configuration, operations, and idle capacity. In addition, serverless is a way to describe the services, practices, and strategies that enable you to build more agile applications so you can innovate and respond to change faster. Serverless applications are designed to run all or parts of the application in the public cloud using serverless services. AWS offers many serverless services in domains like compute, storage, application integration, orchestration, and databases.
The serverless model provides the following advantages compared to conventional server-based design:
• There is no need to provision, manage, and monitor the underlying infrastructure. All of the actual hardware and platform server software packages are managed by the cloud provider. You just deploy your application and its configuration.
• Serverless services have fault tolerance built in by default. Serverless applications require minimal configuration and management from the user to achieve high availability.
• Reduced operational overhead allows your teams to release quickly, get feedback, and iterate to get to market faster.
• With a pay-for-value billing model, you do not pay for overprovisioning, and your resource utilization is optimized on your behalf.
• Serverless applications have built-in service integrations, so you can focus on building your application instead of configuring it.
Is Serverless Always Appropriate?
Almost all modern applications can be modified to run successfully, and in most cases in a more economical and scalable fashion, on a serverless platform. The choice between serverless and the alternatives does not need to be an all-or-nothing proposition. Individual components can run on servers, in containers, or on serverless architectures within the same application stack. However, here are a few scenarios where serverless may not be the best choice:
• When the goal is explicitly to avoid making any changes to the existing application architecture.
• When fine-grained control over the environment is required for the code to run correctly, such as specifying particular operating system patches or accessing low-level networking operations.
• When applications have ultra-low latency requirements for all incoming requests.
• When an on-premises application hasn't been migrated to the public cloud.
• When certain aspects of the application component don't fit within the limits of the serverless services, for example, if a function takes more time to execute than the AWS Lambda function's execution timeout limit, or the backend application takes more time to run than Amazon API Gateway's timeout.
Serverless Use Cases
The serverless application model is generic and applies to almost any application, from a startup's web app to a Fortune 100 company's stock trade analysis platform. Here are several examples:
• Data processing – Developers have discovered that it's much easier to parallelize with a serverless approach,4 mainly when triggered through events, leading them to increasingly apply serverless techniques to a
wide range of big data problems without the need for infrastructure management. These include map-reduce problems, high-speed video transcoding, stock trade analysis, and compute-intensive Monte Carlo simulations for loan applications.
• Web applications – Eliminating servers makes it possible to create web applications that cost almost nothing when there is no traffic, while simultaneously scaling to handle peak loads, even unexpected ones.
• Batch processing – Serverless architectures can be used to run multi-step, flowchart-like use cases such as ETL jobs.
• IT automation – Serverless functions can be attached to alarms and monitors to provide customization when required. Cron jobs (used to schedule and automate tasks that need to be carried out periodically) and other IT infrastructure requirements are made substantially simpler to implement by removing the need to own and maintain servers for their use, especially when these jobs and conditions are infrequent or variable in nature.
• Mobile backends – Serverless mobile backends offer a way for developers who focus on client development to quickly create secure, highly available, and appropriately scaled backends without becoming experts in distributed systems design.
• Media and log processing – Serverless approaches offer natural parallelism, making it simpler to process compute-heavy workloads without the complexity of building multithreaded systems or manually scaling compute fleets.
• IoT backends – The ability to bring any code, including native libraries, simplifies the process of creating cloud-based systems that can implement device-specific algorithms.
• Chatbots (including voice-enabled assistants) and other webhook-based systems – Serverless approaches are a good fit for any webhook-based system like a chatbot. In addition, their ability to perform actions (like running code) only when needed (such as when a user requests information from a chatbot) makes them a straightforward and typically lower-cost approach for these architectures. For example, the majority of Alexa Skills for Amazon Echo are implemented using AWS Lambda.
• Clickstream and other near real-time streaming data processes – Serverless solutions offer the flexibility to scale up and down with the flow of data, enabling them to match throughput requirements without the complexity of building a scalable compute system for each application. For example, when paired with technology like Amazon Kinesis, AWS Lambda can offer high-speed record processing for clickstream analysis, NoSQL data triggers, stock trade information, and more.
• Machine learning inference – Machine learning models can be hosted on serverless functions to support inference requests, eliminating the need to own or maintain servers for intermittent inference traffic.
• Content delivery at the edge – By moving serverless event handling to the edge of the internet, developers can take advantage of lower latency and customize retrievals and content fetches
quick ly enabling a new spectrum of use cases that are latency optimized based on the client’s location • IoT at the edge – Enabling serverless capabilities such as AWS Lambda functions to run inside commercial residential and hand held Internet of Things (IoT) devices e nables these devices to respond to events in near realtime Devices can take actions such as aggregat ing and filtering data locally perform ing machine learning inference or sending alerts Typically serverless applications are built using a microservices architecture in which an application is separated into independent components that perform discrete jobs These components made up of a compute layer a nd APIs message queues database and other components can be independently deployed tes ted and scaled The ability to scale individual components needing additional capacity rather than entire application s can save substantial infrastructure costs It would allow an application to run lean with minimal idle server capacity without the need for rightsizing activities 5 Serverless applications are a natural fit for microservices because of their decoupled nature Organizations can become more agile by avoiding monolithic designs and architectures because developers can deploy incrementally and replace or upgrade individual components such as the database tier if needed In many cases not all layers of the architecture need to be moved to serverle ss services to reap its benefits For instance simply isolating the business logic of an application to deploy onto the AWS Lambda serverless compute service is all that’s required to reduce server management tasks idle compute capacity and operational overhead immediately Serverless architecture also has significant economic advantages over server based architectures when considering disaster recovery scenarios This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 6 For most serverless architectures the price for managing a disaster recovery site is ne ar zero even for warm or hot sites Serverless architectures only incur a charge when traffic is present and resources are being consumed Storage cost is one exception as many applications require readily accessible data Nonetheless serverless archit ectures truly shine when planning disaster recovery sites especially when compared to traditional data centers Running a disaster recovery on premises often doubles infrastructure costs as many servers are idle waiting for disaster to happen Serverless disaster recovery sites can be set up quick ly as well Once serverless architectures have been defined with infrastructure as code using AWS native services such as AWS CloudFormation an entire architecture can be duplicated in a separate region by runni ng a few commands AWS Serverless Capabilities Like any other traditional server and VM based architecture serverless provides core capabilities such as compute storage messaging and more to its users However serverless services are distributed acros s multiple managed services rather than sprea d across software installed virtual machines As a result AWS provides a complete serverless application that require s a broad array of services tools and capabilities spanning storage messaging diagnostics and more Each of these services is available in the developer’s toolbox to 
build a practical application Service Offerings Since the introduction of Lambda in 2014 AWS has introduced a wide variety of fullymanaged serverless services that enable organizations to create serverless apps that can integrate seamlessly with other AWS services and thirdparty services The launched serverless services include but are not limited to Amazon API Gateway (2015) Am azon EventBridge (2019) and Amazon Aurora Serverless v2 (2020) The pace of innovation has not stopped for individual services as Lambda has had more than 100 major feature releases since its launch 6 Figure 1 illustrates a subset of the components in the AWS serverless platform and their relationships This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 7 Figure 1: AWS serverless platform components AWS’ s serverless offering consists of services that span across all infrastr ucture layers including compute storage and orchestration In addition AWS provides tools needed to author build deploy and diagnose serverless architectures Running a serverless application in production requires a reliable flexible and trustwo rthy platform that can handle the demands of small startups to global worldwide corporations The platform must scale all of an application’s elements and provide end toend reliability Just as with conventional applications helping developers create a nd deliver serverless solutions is a multi dimensional challenge To meet the needs of large scale enterprises across various industries the AWS serverless platform offers the following capabilities through a diverse set of services This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 8 • A high performance scalable and reliable serverless compute layer The serverless compute layer is at the core of any serverless architecture such as AWS Lambda or AWS Fargate responsible for running the business logic Because these services are run in response to events simple integration with both first party and third party event sources is essential to making solutions simple to express and enabling them to scale automatically in response to varying workloads In addition serverless architectures eliminate all of the scaling and management code typically required to integrate such systems shifting that operational burden to AWS • Highly available durable and scalable storage layer – AWS offers fully managed storage layers that offload the overhead of ever increasing storage requirements to support the serverless compute layer Instead of manually adding more servers and storage services such as Amazon Aurora Serverless v2 Amazon DynamoDB and Amazon Simple Storage Service (Amazon S3) scal es based on usage and users are only billed for the consumed resources In addition AWS offers purpose built storage services to meet diverse customer needs from DynamoDB for keyvalue storage Amazon S3 for object storage and Aurora Serverless v2 for r elational data storage • Support for loosely coupled and scalable decoupled serverless workloads – As applications mature and grow they become more challenging to maintain or add new features 
and some transform into monolithic applications As a result they mak e it challenging to implement changes and slow down the development pace What is needed is individual components that are decoupled and can scale independently Amazon Simple Queue Service (Amazon SQS) Amazon Simple Notification Service (Amazon S NS) Amazon EventBridge and Amazon Kinesis enable developers to decouple individual components allowing developers to create and innovate without being dependent on one another In addition these components all being serverless implies that customers are only being billed for the resources that each component is consuming eliminating unnecessary resources being wasted This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 9 • Orchestration offer ing state and workflow management – Orchestration and state management are also critical to a serverless platform’s success As companies adopt serverless architectures there is an increased need to orchestrate complex workflows with decoupled components AWS Step Functions is a visual workflow service that satisfies this need It is used to orchestrate AWS services automate business processes and build serverless applications Step Functions manage failures retries parallelization service integration s and observability so developers can focus on higher value business logic Building applications from individual components that perform a discrete function lets you scale easily and change applications quickly Developers can change and add steps withou t writing code enabling your team to evolve your application and innovate faster • Native service integrations between serverless services mentioned above such as Amazon Simple Queue Service (SQS) Amazon Simple Notification Service (Amazon SNS) and Amaz on EventBridge act as application integration services enabling communication between decoupled components within microservices Another benefit of these services is that minimal code is needed to allow interoperability between them so you can focus on building your application instead of configuring it For instance integration between Amazon API Gateway a fully managed service for hosting APIs to a Lambda function can be done without writing any code and simply walking through the AWS console Deve loper Support Providing the right tool and support for developers and architects is essential to boosting productivity AWS Developer Tools are built to work with AWS making it easier for teams to set up and be productive In addition to popular and well known developer tools such as AWS Command Line Interface (AWS CLI) and AWS Software Development Kits (AWS SDKs) AWS also provides various AWS open source and third party web frameworks that simplify serverless application development and deployment This includes the AWS Serverless Application Model (AWS SAM) and AWS Cloud Development Kit (AWS CDK) that allows customers to onboard faster to serverless architectures offloading undifferentiated heavy lifting of managing the infrastructure for your appli cations This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 
10 This enable s developers to focus on writing code that creates value for their customers In addition AWS provides the following support for developers adopting serverless technologies • A collection of fit forpurpose application modeling framew orks – Application modeling frameworks such as the open specification AWS SAM or AWS CDK enable a developer to express the components that make up a serverless app lication and enable the tools and workflows required to build deploy and monitor those app lications Both frameworks work nicely with the AWS SAM Command Line Interface (AWS SAM CLI) making it easy for them to create and manage serverless applications It also allows developers to build test locally and debug serverless applications then deploy them on AWS It can also create secure continuous integration and deployment (CI/CD) pipelines that follow best practices and integrate with AWS ’ native and third party CI/CD systems • A vibrant developer ecosystem that helps developers discover and apply solutions in a variety of domains and for a broad set of third party systems and use cases Thriving on a serverless platform requires that a company be able to get started quick ly including finding ready made templates for everyday use cases whet her they involve firstparty or third party services These integration libraries are essential to convey successful patterns —such as processing streams of records or implementing webhooks —especially when developers are migrating from server based to serverless architectures7 A closely related need is a broad and diverse ecosystem that surrounds the core platform A large vibrant ecosystem helps developers discover and use solutions from the community an d makes it easy to contribute new ideas and approaches Given the variety of toolchains in use for application lifecycle management a healthy ecosystem is also necessary to ensure that every language Integrated Development Environment (IDE) and enterpri se build technology has the runtimes plugins and open source solutions essential to integrate the building and to deploy ment of serverless app lication s into existing approaches Finally a broad ecosystem provides signific ant acceleration across domains and enables developers to repurpose existing code more readily in a serverless architecture This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 11 Security All AWS customers benefit from a data center and network architecture built to satisfy the requirements of our most security sensitive customers This means that you get a resilient infrastructure designed for high security without a traditional data center’s capital outlay and operational overhead Serverless architecture is no exception To accomplish this AWS’ serverless services offer a broad array of security and access controls including support for virtual private networks role based and access based permissions robust integration with API based authentication and access control mechanisms and support for encrypting application elements such as environment variable settings These outofthebox offered features and services can help developers deploy and publish workloads confidently and reduce time to market Serverless systems by their design also provide s an additional level of sec urity and control for the following 
reasons: • First class fleet management including security patching – For managed serverless services such as Lambda API Gateway and Amazon SQS the servers that host the services are constantly monitored cycled and s ecurity scanned As a result t hey can be patched within hours of essential security update availability instead of many enterprises ’ compute fleets with much looser service level agreements (SLAs ) for patching and updating • Perrequest authentication access control and auditing – Every request between natively integrated services is individually authenticated authorized to access specified resources and can be fully audited Requests arriving from outside of AWS via Amazon API Gateway provide other internet facing defense systems For example AWS Web Application Firewall (AWS WAF) is a web application firewall that integrates natively with Amazon API Gateway It helps protect hosted APIs against common web exploits and bots that may affect availability compromise security or consume excessive resources including distributed denial ofservice (DDoS) attack defenses In addition c ompanies migrating to serverless architectures can use AWS CloudTrail to gain detailed insight into which users are accessing which systems with what privileges Finally t hey can use AWS tools to process the audit records programmatically This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 12 These security features of serverless help eliminate additional costs often overlooked when calculating the TCO of one’s infrastr ucture Such costs include security and monitoring software licenses installed on servers staffing of information security personnel to ensure that all servers are secure as well as costs associated with regulatory compliance and many others Serverless architecture s also have a smaller blast radius compared to monolithic applications running on virtual machines As AWS takes responsibility of the security of the servers behind the scenes customers can focus on implementing least privilege access between the services Once least privilege access is implemented the blast radius is dramatically reduced The decoupled nature of the architecture will limit the impact to a smaller set of services compared to a scenario where a malicious actor gains a ccess to a n internal server Considering the significant financial impact of a security breach this is also a n added benefit that help enterprises optimize on infrastructure costs Adopting serverless architectures help in reducing or eliminating such expense s that are no longer needed and capital can be repurposed and teams are freed to work on higher value activities Partners AWS has an expansive partner network that assists our customers with building solutions and services on AWS AWS works closely with validated AWS Lambda Partners for building serverless architecture s that help customers develop services and applications without provisioning or managing servers Lambda Partners provide developer tooling solutions validated by AWS serverless experts against the AWS Well Architected Framework Customers can simplify their technology evaluation process and increase purchasing confidence knowing these companies’ solutions have passed a strict AWS validation of security performance and reliability Customers can ultimately 
reduce time to market with the assistance of qualified partners leveraging serverless technologies For a complete list of AWS Lambda Ready Partners visit our AWS Partner Network page 8 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 13 Case Studies Companies have applied serverless architectures to use cases from stock trade validation to e commerce website construction to natural language processing AWS serverless portfolio offer s the flexibility to create a wi de array of applications including those requiring assurance programs such as PCI or HIPAA compliance The following sections illustrate some of the most common use cases but are not a comprehensive list For a complete list of customer references and us e case documentation see Serverless Computing 9 Serverless Websites Web Apps and Mobile Backends Serverless approaches are ideal for applications where the load can vary dynamically Using a serverless approach means no compute costs are incurred when there is no end user traffic while still offering instant sca le to meet high demand such as a flash sale on an e commerce site or a social media mention that drives a sudden wave of traffic Compared to traditional infrastructure approaches it is also often significantly less expensive to develop deliver and op erate a web or mobile backend when architected in a serverless fashion AWS provides the services developers need to construct these applications rapidly : • Amazon Simple Storage Service (Amazon S3) and AWS Amplify offer a simple hosting solution for static content • AWS Lambda in conjunction with Amazon API Gateway provides support for dynamic API requests using functions • Amazon DynamoDB offers a simple storage solution for the session and peruser state • Amazon Cognito provides an easy way to handle end user registration authentication and access control to resources • Developers can use AWS Serverless Application Model (SAM ) to describe the various elements of an application This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 14 • AWS CodeStar can set up a CI/CD toolchain with just a few clicks To learn more see the whitepaper AWS Serverless Multi Tier Architectures which provides a detailed examination of patterns for building serverless web applic ations10 For complete reference architectures see Serverless Reference Architecture for creating a Web Application11 and Serverless Reference Architecture for creating a Mobile Backend12 on GitHub Customer Example – Neiman Marcus A luxury household name Neiman Marcus has a reputation for delivering a first class personalized customer service experience To modernize and enhance that experience the company wanted to develop Connect an omnichannel digital selling application that would empower associates to view rich personalized customer information with the goal of making each customer interaction unforgettable Choos ing a serverless architecture with mobile development solutions on Amazon Web Services (AWS) enabled the development team to launch the app much faster than in the 4 months it had originally planned “Using AWS 
cloud native and serverless technologies we increased our speed to market by at least 50 percent and were able to accelerate the launch of Connect” says Sriram Vaidyanathan senior director of omni engineering at Neiman Marcus This approach also greatly reduced app building costs and provided dev elopers with more agility for the development and rapid deployment of updates The app elastically scales to support traffic at any volume for greater cost efficiency and it has increase d associate productivity For more information see the Neiman Marcus case study 13 IoT Backends The benefits that a serverless architecture brings to web and mobile apps make it easy to construct IoT backends and device based analytic processing systems that seamlessly scale with the number of devices For an example reference architecture see Serverless Reference Architecture for creating an IoT Backend on GitHub14 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 15 Customer Example – iRobot iRobot which makes robots such as the Roomba cleaning robot uses AWS Lambda in conjunction with the AWS IoT service to create a serverless backend for its IoT platform As a popular gift on any holiday iRobot experienc es increased traffic on these days While h uge traffic spikes could also mean huge headaches for the company and its customers alike iRobot’s engineering team doesn’t have to worry about managing infrastructure or manually writing code to handle availabi lity and scaling by running on serverless This enabl es them to innovate faster and stay focused on customers Watch the AWS re:Invent 2020 video Building the next generation of residential robots for more information 15 Data Processing The largest serverless applications process massive volumes of data much of it in real time Typical serverless data processing architectures use a combination of Amazon Kinesis and AWS Lambda to process streaming d ata or they combine Amazon S3 and AWS Lambda to trigger computation in response to object creation or update events When workloads require more complex orchestration than a simple trigger developers can use AWS Step Functions to create stateful or long running workflows that invoke one or more Lambda functions as they progress To learn more about serverless data processing architectures see the following on GitHub: • Serverless R eference Architecture for Real time Stream Processing16 • Serverless Reference Architecture for Real time File Processing17 • Image Recognition and Processing Backend reference architecture18 Customer Example – FINRA The Financial Industry Regulatory Authority (FINRA) u sed AWS Lambda to build a serverless data processing solution that enables them to perform half a trillion data validations on 37 billion stock market events daily In his talk at AWS re:Invent 2016 entitled The State of Serverless Computing (SVR311) 19 Tim Griesbach Senior Director at FINRA said “We found that Lambda was going to provide us with the best solution for this serverless cloud This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 16 solution With Lambda the system 
was faster cheaper and more scalable So at the end of the day we’ve reduced our costs by over 50 percent and we can track it daily even hourly ” Customer Example – Toyota Connected Toyota Connected is a subsidiary of Toyota and a technology company offering connected platform s big data mobility services and other automotive related services Toyota Connected chose server less computing architecture to build its Toyota Mobility Services Platform leveraging AWS Lambda Amazon Kinesis Data Streams (Amazon KDS) and Amazon S3 to offer personalized localized and predictive data to enhance the driving experience With its se rverless architecture Toyota Connected seamlessly scaled to 18 times its usual traffic volume with 18 billion transactions per month running through the platform reducing aggregation job times from 15+ hours to 1/40th of the time while reducing operatio nal burden Additionall y serverless enabled Toyota Connected to deploy the same pipeline in other geographies with smaller volumes and only pay for the resources consumed For more information read our Big Data Blog on Toyota Connected or watch the re:Invent 2020 video Reimagining mobility with Toyota Connected (AUT303) 20 21 Big Data AWS Lambda is a perfect match for many highvolume parallel processing workloads For an example of a reference architecture using MapReduce see Reference Architecture for running serverless MapReduce jobs 22 Customer Example – Fannie Mae Fannie Mae a leading source of financing for mortgage lenders uses AWS Lambda to run an “embarrassingly parallel ” workload for its financial modeling Fannie Mae uses Monte Carlo simulation processes to project future cash flows of mortgages that help manage mortgage risk The company found that its existing HPC grids were no longer meeting its growing busi ness needs So Fannie Mae built its new platform on Lambda and the system successfully scaled up to 15000 concurrent function executions This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 17 during testing The new system ran one simulation on 20 million mortgages completed in 2 hours which is three times faster than the old system Using a serverless architecture Fannie Mae can run large scale Monte Carlo simulations effectively because it doesn’t pay for idle compute resources It can also speed up its computations by running multiple Lambda functions concurrently Fannie Mae also experienced shorter than typical time tomarket because they were able to dispense with server management and monitoring along with the ability to eliminate much of the complex code previously required to manage application sc aling and reliability See the Fannie Mae AWS Summit 2017 presentation SMC303: Real time Data Processing Using AWS Lambda23 for more information IT Automation Serverless approaches eliminate the overhead of managing servers making most infrastructure tasks including provisioning configuration management alarms/monitors and timed cron jobs easier to create and manage Customer Example – Autodesk Autodesk which makes 3D design and engineering software uses AWS Lambda to automate its AWS account creation and management processes across its engineering organization Autodesk estimates that it realized cost savings of 98 percent (factoring in estimated savings in labor hours spent provisioning 
accounts) It can now provision accounts in just 10 minutes instead of the 10 hours it took to provision with the previous infrastructure based process The serverless solution enables Autodesk to a utomatically provision accounts configure and enforce standards and run audits with increased automation and fewer manual touchpoints For more information see the Autodesk AWS Summit 2017 presentation SMC301: The State of Serverless Computing 24 Visit GitHub to see the Autodesk Tailor service25 Machine Learning You can use serverless services to capture store and preprocess data before feeding it to your machine learning model After training the model you can also This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 18 serve the model for prediction at scale for inference without providing or managing any infrastr ucture Customer Example – Genworth Genworth Mortgage Insurance Australia Limited is a leading provider of lenders ’ mortgage insurance in Australia Genworth has more than 50 years of experience and data in this industry and wanted to use this historical information to train predictive analytics for loss mitigation machine learning models To achieve this task Genworth built a serverless machine learning pipeline at scale using services like AWS Glue a serverless managed ETL processing service to ingest and transform data and Amazon SageMaker to batch transform jobs and perform ML inference and process and publish the results of the analysis With the ML models Genworth could analyze recent repayment patterns for each insurance policy to prioritize t hem in likelihood and impact for each claim This process was automate d endtoend to help the business make data driven decisions and simplify high value manual work performed by the Loss Mitigation team Read the Machine Learning blog How Genworth built a serverless ML pipeline on AWS using Amazon SageMaker and AWS Glue for more information26 Conclusion Serverless approaches are designed to tackle two classic IT management problems: idle servers and operating fleets of servers that distract and detract from the business of creating differentiated customer value AWS serverless offerings solve these long standing problems with a pay for value billing model and by eliminating the need to manage the underlying infrastructure AWS constantly scans patches and monitors the underlying infrastructure making these applications more secure and provides built in fault tolerance with minimal configuration needed for high availability As a result developers can focus on writing business logic rather than managing infrastructure allowing enterprises to reduce time to market while paying for only the resources co nsumed This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 19 Existing companies are gaining significant agility and economic benefits from adopting serverless architectures and e nterprises should consider serverless first strategy for building cloud native microservices To learn more and read whitepapers on related topics see Serverless Computing and Applications 27 Contributors The following 
individuals and organizations contributed to this document: • Tim Wagner General Manager of AWS Serverless Applicatio ns Amazon Web Services • Paras Jain Technical Account Manager Amazon Web Services • John Lee Solutions Architect Amazon Web Services • Diego Magalh ães Principal Solutions Architect Amazon Web Services Further Reading For additional information see the following: • Architecture Best Practices for Serverless 28 • AWS Ramp Up Guide: Serverless29 Reference Architectures • Web Applications30 • Mobile Backends 31 • IoT Backends32 • File Processing33 • Stream Processing34 • Image Recognition Processing35 • MapReduce36 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 20 Document Revisions Date Description October 2017 First publication September 2021 Content refresh 1 https://wwwperlecom/articles/the costsavings ofcloud computing 40191237shtml 2 https://wwwgartnercom/en/newsroom/press releases/2021 0628gartner saysworldwide iaaspublic cloud services market grew 407percent in2020 3 https://d39w7f4ix9f5s9cloudfrontnet/e3/79/42bf75c94c279c67d777f002051f/ carbon reduction opportunity ofmoving toawspdf 4 Occupy the Clo ud: Eric Jonas et al Distributed Computing for the 99% https://arxivorg/abs/170204024 5 https://awsamazoncom/aws costmanagement/aws costoptimization/right sizing/ 6 https://docsawsamazoncom/lambda/latest/dg/lambda releaseshtml 7 https://serverlesslandcom/patterns 8 https://awsamazoncom/partners 9 https://awsamazoncom/serverless/ 10 https://d0awsstaticcom/whitepapers/AWS_Serverless_Multi Tier_Architecturespdf 11 https://githubcom/awslabs/lambda refarch webapp 12 https://githubcom/awslabs/lambda refarch mobilebackend 13 https://awsamazoncom/solutions/case studies/neimanmarcus case study 14 https://githubcom/awslabs/lambda refarch iotbackend Notes This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 21 15 https://wwwyoutubecom/watch?v= 1PDC6UOFtE 16 https://githubcom/awslabs/lambda refarch streamprocessing 17 https://githubcom/awslabs/lambda refarch fileprocessing 18 https://githubcom/awslabs/lambda refarch imagerecognition 19 https://wwwyoutubecom/watch?v=AcGv3qUrRC4&feature=youtube&t=1153 20 https://awsamazoncom/blogs/big data/enhancing customer safety by leveraging thescalable secure andcostoptimized toyota connected data lake/ 21 https://wwwyoutubecom/watch?v=IpuRyJY3B4k 22 https://githubcom/awslabs/lambda refarch mapreduce 23 https://wwwslidesharenet/AmazonWebServices/smc303 realtime data processing using awslambda/28 24 https:/ /wwwslidesharenet/AmazonWebServices/smc301 thestate of serverless computing 75290821/22 25 https://githubcom/alanwill/aws tailor 26 https://awsamazoncom/blogs/machine learning/how genworth builta serverless mlpipeline onawsusing amazon sagemaker andawsglue/ 27 https://awsamazoncom/serverless/ 28 https://awsamazoncom/architecture/serverless/ 29 https://d1awsstaticcom/training andcertification/ramp up_guides/Ramp Up_Guide_Serverlesspdf?svrd_rr1 30 https://githubcom/awslabs/lambda refarch webapp 31 https://githubcom/awslabs/lambda refarch mobilebackend 32 https://githubcom/awslabs/lambda 
refarch-iotbackend 33 https://github.com/awslabs/lambda-refarch-fileprocessing 34 https://github.com/awslabs/lambda-refarch-streamprocessing 35 https://github.com/awslabs/lambda-refarch-imagerecognition 36 https://github.com/awslabs/lambda-refarch-mapreduce
|
General
|
consultant
|
Best Practices
|
Optimizing_Multiplayer_Game_Server_Performance_on_AWS
|
Optimizing Multiplayer Game Server Performance on AWS April 201 7 Archived This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 Amazon EC2 Instance Type Considerations 1 Amazon EC2 Compute Optimized Instance Capabilities 2 Alternative Compute Instance Options 3 Performance Optimization 3 Networking 4 CPU 13 Memory 27 Disk 34 Benchmarking and Testing 34 Benchmarking 34 CPU Performance Analysis 36 Visual CPU Profiling 36 Conclusion 39 Contributors 40 ArchivedAbstract This whitepaper discusses the exciting use case of running multiplayer game servers in the AWS Cloud and the optimizations that you can make to achieve the highest level of performance In this whitepaper we provide you the information you need to take advantage of the Amazon Elastic Compute Cloud (EC2) family of instances to get the peak performance required to successfully run a multiplayer game server on Linux in AWS This paper is intended for technical audiences that have experience tuning and optimizing Linuxbased servers ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 1 Introduction Amazon Web Services (AWS) provides benefits for every conceivable gaming workload including PC/console single and multiplayer games as well as mobile based socialbased and webbased games Running PC/console multiplayer game servers in the AWS Cloud is particularly illustrative of the success and cost reduction that you can achieve with the cloud model over traditional on premises data centers or colocations Multiplayer game servers are based on a client/server network architecture in which the game server holds the authoritative source of events for all clients (players) Typically after p layers send their actions to the server the server runs a simulation of the game world using all of these actions and sends the results back to each client With Amazon Elastic Compute Cloud (Amazon EC2) you can create and run a virtual server (called an instance ) to host your client/server multiplayer game1 Amazon EC2 provides resizable compute capacity and supports Single Root I/O Virtualization (SRIOV) high frequency processors For the compute family of instances Amazon EC2 will support up to 72 vCPUs (36 physical cores) when we launch the C5 computeoptimized instance type in 2017 This whitepaper discusses how to optimize your Amazon EC2 Linux multiplayer game server to achieve the best performance while maintaining scalability elasticity and global reach We start with a brief description of the performance capabilities of the compute optimized instance family 
and then dive into optimization techniques for networking, CPU, memory, and disk. Finally, we briefly cover benchmarking and testing.

Amazon EC2 Instance Type Considerations

To get the maximum performance out of an Amazon EC2 instance, it is important to look at the compute options available. In this section we discuss the capabilities of the Amazon EC2 compute optimized instance family that make it ideal for multiplayer game servers.

Amazon EC2 Compute Optimized Instance Capabilities

The current generation C4 compute optimized instance family is ideal for running your multiplayer game server.2 (The C5 instance type, announced at AWS re:Invent 2016, will be the recommended game server platform when it launches.) C4 instances run on hardware using the Intel Xeon E5-2666 v3 (Haswell) processor. This is a custom processor designed specifically for AWS. The following table lists the capabilities of each instance size in the C4 family.

Instance Size | vCPU Count | RAM (GiB) | Network Performance | EBS Optimized: Max Bandwidth (Mbps)
c4.large      | 2  | 3.75 | Moderate | 500
c4.xlarge     | 4  | 7.5  | Moderate | 750
c4.2xlarge    | 8  | 15   | High     | 1000
c4.4xlarge    | 16 | 30   | High     | 2000
c4.8xlarge    | 36 | 60   | 10 Gbps  | 4000

As the table shows, the c4.8xlarge instance provides 36 vCPUs. Since each vCPU is a hyperthread of a full physical CPU core, you get a total of 18 physical cores with this instance size. Each core runs at a base of 2.9 GHz but can run at 3.2 GHz all core turbo (meaning that each core can run simultaneously at 3.2 GHz even if all the cores are in use) and at a max turbo of 3.5 GHz (possible when only a few cores are in use).

We recommend the c4.4xlarge and c4.8xlarge instance sizes for running your game server because they get exclusive access to one or both of the two underlying processor sockets, respectively. Exclusive access guarantees that you get a 3.2 GHz all core turbo for most workloads. The primary exception is for applications running Advanced Vector Extensions (AVX) workloads.3 If you run AVX workloads on the c4.8xlarge instance, the best you can expect in most cases is 3.1 GHz when running three cores or less. It is important to test your specific workload to verify the performance you can achieve. The following table shows a comparison between the c4.4xlarge and c4.8xlarge instances for AVX and non-AVX workloads.

C4 Instance Size and Workload | Max Core Turbo Frequency (GHz) | All Core Turbo Frequency (GHz) | Base Frequency (GHz)
c4.8xlarge – non-AVX workload | 3.5 (when fewer than about 4 vCPUs are active) | 3.2 | 2.9
c4.8xlarge – AVX workload     | ≤ 3.3 | ≤ 3.1, depending on the workload and number of active cores | 2.5
c4.4xlarge – non-AVX workload | 3.2 | 3.2 | 2.9
c4.4xlarge – AVX workload     | 3.2 | ≤ 3.1, depending on the workload and number of active cores | 2.5

Alternative Compute Instance Options

There are situations, for example for some role-playing games (RPGs) and multiplayer online battle arenas (MOBAs), where your game server can be more memory bound than compute bound. In these cases the M4 instance type may be a better option than the C4 instance type since it has a higher memory to vCPU ratio. The compute optimized instance family has a higher vCPU to memory ratio than other instance families, while the M4 instance has a higher memory to vCPU ratio. M4 instances use a Haswell processor for the m4.10xlarge and m4.16xlarge sizes; smaller sizes use either a Broadwell or a Haswell processor. The M4 instance type is similar
to the C4 instance type in networking performance and has plenty of bandwidth for game servers Performance Optimization There are many performance options for Linux servers with networking and CPU being the two most important This section documents the performance options that AWS gaming customers have found the most valuable and /or the options that are the most appropriate for running game servers on virtual machines (VMs) The performance options are categorized into four sections: networking CPU memory and disk This is not an allinclusive list of performance tuning options and not all of the options will be appropriate for every gaming workload We strongly recommend testing these settings before implementing them in production ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 4 This section assumes that you are running your instance in a VPC created with Amazon Virtual Private Cloud (VPC)4 that uses an Amazon Machine Image (AMI)5 with a hardware virtual machine (HVM) All of the instructions and settings that follow have been verified on the Amazon Linux AMI 201609 using the 44 233154 kernel but they should work with all future releases of Amazon Linux Networking Networking is one of the most important areas for performance tuning Multiplayer client/server games are extremely sensitive to latency and dropped packets A list of performance tuning options for networking is provided in the following table Performance Tuning Option Summary Notes Links or Commands Deploying game servers close to players Proximity to players is the best way to reduce latency AWS has numerous Regions across the globe List of AWS Regions Enhanced networking Improved networking performance Nearly every workload should benefit No downside Linux /Windows UDP Receive buffers Helps prevent dropped packets Useful when the latency bet ween client and server is high Little downside but should be tested Add the following to /etc/sysctlconf: netcorermem_default = New_Value netcorermem_max = New_Value (Recommend start by doubling the current values set for your system ) Busy polling Reduce latency of incoming packet processing Can increase CPU utilization Add the following to /etc/sysctlconf: netcorebusy_read = New_Value netcore busy_poll = New_Value (Recommend testing a value of 50 first then 100 ) ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 5 Performance Tuning Option Summary Notes Links or Commands Memory Helps prevent dropped packets Add the following to /etc/sysctlconf: netipv4udp_mem = New_Value New_Value New_Value (Recommend doubling the current values set for your system) Backlog Helps prevent dropped packets Add the following to /etc/sysctlconf: netcorenetdev_max_backlog= New_Value (Recommend doubling the current values set for your system) Transmit and receive queues Possible performance boost by disabling hyperthreading The following recommendations cover how to reduce latency avoid dropped packets and obtain optimal networking performance for your game servers Deploying Game Servers Close to Players Deploying your game servers as close as possible to your players is a key element for good player experience AWS has numerous Regions across the world which allows you to deploy your game servers close to your players For the most current list of AWS Regions and Availability Zones see https://awsamazoncom/aboutaws/globalinfrastructure/ 6 You can package your instance AMI and deploy it to as many Regions as you choose Customers often 
deploy AAA PC/console games in almost every available Region. As you determine where your players are globally, you can decide where to deploy your game servers to provide the best experience possible.

Enhanced Networking

Enhanced networking is another performance tuning option.7 Enhanced networking uses single root I/O virtualization (SR-IOV) and exposes the network card directly to the instance without needing to go through the hypervisor.8 This allows for generally higher I/O performance, lower CPU utilization, higher packets per second (PPS) performance, lower inter-instance latencies, and very low network jitter. The performance improvement provided by enhanced networking can make a big difference for a multiplayer game server.

Enhanced networking is only available for instances running in a VPC using an HVM AMI, and only for certain instance types such as the C4, R4, R3, I3, I2, M4, and D2. These instance types use the Intel 82599 Virtual Function Interface (which uses the "ixgbevf" Linux driver). In addition, the X1, R4, P2, and m4.16xlarge (and soon the C5) instances support enhanced networking using the Elastic Network Adapter (ENA). The Amazon Linux AMI includes the necessary drivers by default. Follow the Linux or Windows instructions to install the driver for other AMIs.9 10 It is important to have the latest ixgbevf driver, which can be downloaded from Intel's website.11 The minimum recommended version for the ixgbevf driver is version 2.14.2. To check the driver version running on your instance, run the following command:

ethtool -i eth0

User Datagram Protocol (UDP)

Most first-person shooter games and other similar client/server multiplayer games use UDP as the protocol for communication between clients and game servers. The following sections lay out four UDP optimizations that can improve performance and reduce the occurrence of dropped packets.

Receive Buffers

The first UDP optimization is to increase the default value for the receive buffers. Having too little UDP buffer space can cause the operating system kernel to discard UDP packets, resulting in packet loss. Increasing this buffer space can be helpful in situations where the latency between the client and server is high. The default value for both rmem_default and rmem_max on Amazon Linux is 212992.

To see the current default values for your system, run the following commands:

cat /proc/sys/net/core/rmem_default
cat /proc/sys/net/core/rmem_max

A common approach to allocating the right amount of buffer space is to first double both values and then test the performance difference this makes for your game server. Depending on the results, you may need to decrease or increase these values. Note that the rmem_default value should not exceed the rmem_max value. To configure these parameters to persist across reboots, set the new rmem_default and rmem_max values in the /etc/sysctl.conf file:

net.core.rmem_default = New_Value
net.core.rmem_max = New_Value

Whenever you make changes to the sysctl.conf file, run the following command to refresh the configuration:

sudo sysctl -p
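One way to script the "double and test" approach described above is shown in the sketch below. This is our illustration rather than part of the original guidance: it reads the current defaults, persists doubled values in a drop-in file (the file name is arbitrary), and applies them without a reboot. You should still benchmark the result with your game server.

#!/usr/bin/env bash
# Sketch: double the current UDP receive buffer defaults and persist them.
# Assumes Amazon Linux; the drop-in file name is an arbitrary example.
set -euo pipefail

cur_default=$(cat /proc/sys/net/core/rmem_default)
cur_max=$(cat /proc/sys/net/core/rmem_max)

sudo tee /etc/sysctl.d/90-game-udp-buffers.conf >/dev/null <<EOF
net.core.rmem_default = $((cur_default * 2))
net.core.rmem_max = $((cur_max * 2))
EOF

# Apply the new values immediately
sudo sysctl -p /etc/sysctl.d/90-game-udp-buffers.conf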
Busy Polling

A second UDP optimization is busy polling, which can help reduce network receive path latency by having the kernel poll for incoming packets. This will increase CPU utilization but can reduce delays in packet processing. On most Linux distributions, including Amazon Linux, busy polling is disabled by default. We recommend that you start with a value of 50 for both busy_read and busy_poll and then test what difference this makes for your game server. Busy_read is the number of microseconds to wait for packets on the device queue for socket reads, while busy_poll is the number of microseconds to wait for packets on the device queue for socket polls and selects. Depending on the results, you may need to increase the value to 100.

To configure these parameters to persist across reboots, add the new busy_read and busy_poll values to the /etc/sysctl.conf file:

net.core.busy_read = New_Value
net.core.busy_poll = New_Value

Again, run the following command to refresh the configuration after making changes to the sysctl.conf file:

sudo sysctl -p

UDP Buffers

A third UDP optimization is to change how much memory the UDP buffers use for queueing. The udp_mem option configures the number of pages the UDP sockets can use for queueing. This can help reduce dropped packets when the network adapter is very busy. This setting is a vector of three values that are measured in units of pages (4096 bytes). The first value, called min, is the minimum threshold before UDP moderates memory usage. The second value, called pressure, is the memory threshold after which UDP will moderate the memory consumption. The final value, called max, is the maximum number of pages available for queueing by all UDP sockets. By default, Amazon Linux on the c4.8xlarge instance uses a vector of 1445727 1927636 2891454, while the c4.4xlarge instance uses a vector of 720660 960882 1441320. To see the current default values, run the following command:

cat /proc/sys/net/ipv4/udp_mem

A good first step when experimenting with new values for this setting is to double the values and then test what difference this makes for your game server. It is also good to adjust the values so they are multiples of the page size (4096 bytes). To configure these parameters to persist across reboots, add the new UDP buffer values to the /etc/sysctl.conf file:

net.ipv4.udp_mem = New_Value New_Value New_Value

Run the following command to refresh the configuration after making changes to the sysctl.conf file:

sudo sysctl -p

Backlog

The final UDP optimization that can help reduce the chance of dropped packets is to increase the backlog value. This optimization increases the queue size for incoming packets in situations where the interface is receiving packets at a faster rate than the kernel can handle. On Amazon Linux the default value of the queue size is 1000. To check the default value, run the following command:

cat /proc/sys/net/core/netdev_max_backlog

We recommend that you double the default value for your system and then test what difference this makes for your game server. To configure this parameter to persist across reboots, add the new backlog value to the /etc/sysctl.conf file:

net.core.netdev_max_backlog = New_Value

Run the following command to refresh the configuration after making changes to the sysctl.conf file:

sudo sysctl -p

Transmit and Receive Queues

Many game servers put more pressure on the network through the number of packets per second being processed rather than on the overall bandwidth used. In addition, I/O wait can become a bottleneck if one of the vCPUs gets a large volume of interrupt requests (IRQs). Receive
Side Scaling (RSS) is a common method used to address these networking performance issues12 RSS is a hardware option that can provide multiple receive queues on a network interface controller (NIC) For Amazon Elastic Compute Cloud (Amazon EC2) the NIC is called an Elastic Network Interface (ENI)13 RSS is enabled on the C4 instance family but changes to the configuration of RSS are not allowed The C4 instance family provides two receive queues for all of the instance sizes when using Linux Each of these queues has a separate IRQ number and is mapped to a separate vCPU Running the command $ ls 1 /sys/class/net/eth0/queues on a c48xlarge instance displays the following queues: $ ls l /sys/class/net/eth0/queues total 0 drwxrxrx 2 root 0 Aug 18 21:00 rx 0 drwxrxrx 2 root root 0 Aug 18 21:00 rx 1 drwxrxrx 3 root root 0 Aug 18 21:00 tx 0 drwxrxrx 3 root root 0 Aug 18 21:00 tx 1 To find out which IRQs are being used by the queues and how the CPU is handling those interrupts run the following command: cat /proc/interrupts Alternatively run this command to output the IRQs for the queues: echo eth0; grep eth0 TxRx /proc/interrupts | awk '{printf " %s\n" $1}' What follows is the reduced output when viewing the full contents of /proc/interrupts on a c48xlarge instance showing just the eth0 interrupts The first column is the IRQ for each queue The last two columns are the process ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 11 information In this case you can see the TxRx0 and TxRx1 are using IRQs 267 and 268 respectively CPU0 CPU23 CPU33 267 634 2789 0 xenpirqmsix eth0TxRx0 268 600 0 2587 xenpirqmsix eth0TxRx1 To verify which vCPU the queue is sending interrupts to run the following commands (replacing IRQ_Number with the IRQ for each TxRx queue): $ cat /proc/irq/ 267/smp_affinity 00000000000000000000000000800000 $ cat /proc/irq/ 268/smp_affinity 00000000000000000000000200000000 The previous output is from a c48xlarge instance It is in hex and needs to be converted to binary to find the vCPU number For example the hex value 00800000 converted to binary is 00000000100000000000000000000000 Counting from the right and starting at 0 you get to vCPU 23 The other queue is using vCPU 33 Because vCPUs 23 and 33 are on different processor sockets they are physically on different nonuniform memory access (NUMA) nodes One issue here is that each vCPU is by default a hyperthread (but in this particular case they are each hyperthreads of the same core) so a performance boost could be seen by tying each queue to a physical core The IRQs for the two queues on Amazon Linux on the C4 instance family are already pinned to particular vCPUs that are on separate NUMA nodes on the c48xlarge instance This default state may be ideal for your game servers However it is important to verify on your distribution of Linux that there are two queues that are configured for IRQs and vCPUs (which are on separate NUMA nodes) On C4 instance sizes other than the c48xlarge NUMA is not an issue since the other sizes only have one NUMA node One option that could improve performance for RSS is to disable hyperthreading If you disable hyperthreading on Amazon Linux then by ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 12 defau lt the queues will be pinned to physical cores (which will also be on separate NUMA nodes on the c48xlarge instance) See the Hyperthreading section in this whitepaper for more information on how to disable hyperthreadi ng If you don’t 
pin game server processes to cores you could prevent the Linux scheduler from assigning game server processes to the vCPUs (or cores) for the RSS queues To do this you need to configure two options First in your text editor edit the /boot/grub/grubconf file For the first entry that begins with “kernel” (there may be more than one kernel entry you only need to edit the first one) add isolcpus=NUMBER at the end of the line where NUMBER is the number of the vCPUs for the RSS queues For example if the queues are using vCPUs 3 and 4 replace NUMBER with “34” # created by imagebuilder default=0 timeout=1 hiddenmenu title Amazon Linux 201409 (31426 2446amzn1x86_64) root (hd00) kernel /boot/vmlinuz 31426 2446amzn1x86_64 ro ot=LABEL=/ console=ttyS0 isolcpu s=NUMBER initrd /boot/initramfs 31426 2446amzn1x86_64img Using isolcpus will prevent the scheduler from running the game server processes on the vCPUs you specify The problem is that it will also prevent irqbalance from assigning IRQs to these vCPUs To fix this you need to use the IRQBALANCE_BANNED_CPUS option to ban all of the remaining CPUs Version 1110 or later of irqbalance on current versions of Amazon Linux prefers the IRQBALANCE_BANNED_CPUS option and will assign IRQs to the vCPUs specified in isolcpus in order to honor the vCPUs specified by IRQBALANCE_BANNED_CPUS Therefore for example if you isolated vCPUs 34 using isolcpus you would then need to ban the other vCPUs on the instance using IRQBALANCE_BANNED_CPUS To do this you need to use the IRQBALANCE_BANNED_CPUS option in the /etc/sysconfig/ir qbalance file This is a 64bit hexadecimal bit mask The best way to find the value would be to write out the vCPUs you want to include in ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 13 this value in decimal format and then convert to hex So in the earlier example where we used isolcpus to exclude vCPUs 34 we would then want to use IRQBALANCE_BANNED_CPUS to exclude vCPUs 1 2 and 514 (assuming we are on a c44xlarge instance) which would be 1111111111100111 in decimal and finally FFE7n when converted to hex Add the following line to the /etc/sysconfig/irqbalance file using your favorite editor: IRQBALANCE_BANNED_CPUS=” FFE7n” The result is that vCPUs 3 and 4 will not be used by the game server processes but will be used by the RSS queues and a few other IRQs used by the system Like everything else all of these values should be tested with your game server to determine what the performance difference is Bandwidth The C4 instance family offers plenty of bandwidth for a multiplayer game server The c44xlarge instance provides high network performance and up to 10 Gbps is achievable between two c48xlarge instances (or other large instance sizes like the m410xlarge) that are using enhanced networking and are in the same placement group 14 The bandwidth provided by both the c44xlarge and c48xlarge instances has been more than sufficient for every game server use case we have seen You can easily determine the networking performance for your workload on a C4 instance compared to other instances in the same Availability Zone other instances in another Availability Zone and most importantly to and from the Internet Iperf is probably one of the best tools for determining network performance on Linux15 while Nttcp is a good tool for Win dows16 The previous links also provide instructions on doing network performance testing Outside of the placement group you need to use a tool like Iperf or Nttcp to determine the exact 
network performance achievable for your game server CPU CPU is one of the two most important performancetuning areas for game servers ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 14 Performance Tuning Option Summary Notes Links or Commands Clock Source Using tsc as the clock source can improve performance for game servers Xen is the default clocksource on Amazon Linux Add the following entry to the kernel line of the /boot/grub/grubconf file: tsc=reliable clocksource=tsc CState and PState Cstate and P state options are optimized by default except for the C state on the c48xlarge Setting Cstate to C1 on the c48xlarge should improve CPU performance Can only be changed on the c48xlarge Downside is that 35 GHz max turbo will not be available However the 32 GHz all core turbo will be available Add the following entry to the kernel line of the /boot/g rub/grubconf file: intel_idlemax_cstate=1 Irqbalance When not pinning game servers to vCPUs irqbalance can help improve CPU performance Installed and running by default on Amazon Linux Check your distribution to see if this is running NA Hyperthrea ding Each vCPU is a hyperthread of a core Performance may improve by disabl inghyperthrea ding Add the following entry to the kernel line of the /boot/grub/grubconf file: Maxcpus=X (where X is the number of actual cores in the instance) CPU Pinning Pinning the game server process to vCPU can provide benefits in some situations CPU pinning does not appear to be a common practice among game companies "numactl physcpubind $phys_cpu_core membind $associated_numa_node /game_server_executable" Linu x Scheduler There are three particular Linux scheduler configuration options that can help with game servers sudo sysctl w 'kernelsched_min_granularity_ns= New _Value ' (Recommend start by doubling the current value set for your system) sudo sysctl w 'kernelsched_wakeup_granularity_ns= New_Value ' sudo sysctril –w (Recommend start by halving the current value set for your system) 'kernelsched_migration_cost_ns= New _Value ' ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 15 Performance Tuning Option Summary Notes Links or Commands (Recommend start by doubling the current value set for your system) Clock Source A clock source gives Linux access to a timeline so that a process can determine where it is in time Time is extremely important when it comes to multiplayer game servers given that the server is the authoritative source of events and yet each client has its own view of time and the flow of events The kernelorg web site has a good introduction to clock sources17 To find the current clock source: $cat /sys/devices/system/clock source/clocksource0/current_clocksource By default on a C4 instance running Amazon Linux this is set to xen To view the available clock sources: cat /sys/devices/system/clocksource/clocksource0/available_clocksource This list should show xen tsc hpet and acpi_pm by default on a C4 instance running Amazon Linux For most game servers the best clock source option is TSC (Time Stamp Counter) which is a 64bit register on each processor I n most cases TSC is the fastest highestprecision measurement of the passage of time and is monotonic and invariant See this xenorg article for a good discussion about TSC when it comes to XEN virtualization18 Synchronization is provided across all processors in all power states so TSC is considered synchronized and invariant This means that TSC will increment at a constant rate 
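As an aside we have added here (it is not part of the original text): before committing to the persistent grub change shown later in this section, you can switch the clock source at runtime through sysfs and benchmark your game server against it. The change reverts on reboot, which makes it a low-risk way to test.

# Temporarily switch the running system to tsc (reverts on reboot)
echo tsc | sudo tee /sys/devices/system/clocksource/clocksource0/current_clocksource

# Confirm the change took effect
cat /sys/devices/system/clocksource/clocksource0/current_clocksource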
TSC can be accessed using the rdtsc or rdtscp instructions Rdtscp is often a better option than rdtsc since rdtscp takes into account that Intel processors ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 16 sometimes use out oforder execution which can affect getting accurate time readings The recommendation for game servers is to change the clock source to TSC However you should test this thoroughly for your workloads To set the clock source to TSC edit the /boot/grub/grubconf file with your editor of choice For the first entry that begins with “kernel” (note that there may be more than one kernel entry you only need to edit the first one) add tsc=reliable clocksource=tsc at the end of the line # created by imagebuilder default=0 timeout=1 hiddenmenu title Amazon Linux 201409 (31426 2446amzn1x86_64) root (hd00) kernel /boot/vmlinuz 31426 2446amzn1x86_64 root=LABEL=/ console=ttyS0 tsc=reliable clocksource=tsc initrd /boot/initramfs 31426 2446amzn1x86_64img Processor State Control (CStates and PStates) Processor State Controls can only be modified on the c48xlarge instance (also configurable on the d28xlarge m410xlarge and x132xlarge instances )19 C states control the sleep levels that a core can enter when it is idle while Pstates control the desired performance (in CPU frequency) for a core Cstates are idle power saving states while Pstates are execution power saving states Cstates start at C0 which is the shallowest state where the core is actually executing functions and go to C6 which is the deepest state where the core is essentially powered off The default Cstate for the c48xlarge instance is C6 For all of the other instance sizes in the C4 family the default is C1 This is the reason that the 35 GHz max turbo frequency is only available on the c48xlarge instance Some vCPUs need to be in a deeper sleep state than C1 in order for the cores to hit 35 GHz An option on the c48xlarge instance is to set C1 as the deepest Cstate to prevent the cores from going to sleep That reduces the processor reaction latency but also prevents the cores from hitting the 35 GHz Turbo Boost if only a few cores are active; it would still allow the 32 GHz all core turbo Therefore ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 17 you would be trading the possibility of achieving 35 GHz when a few cores are running for the reduced reaction latency Your results will depend on your testing and application workloads If 32 GHz all core turbo is acceptable and you plan to utilize all or most of the cores on the C48xlarge instance the n change the Cstate to C1 Pstates start at P0 where Turbo mode is enabled and go to P15 which represents the lowest possible frequency P0 provides the maximum baseline frequency The default Pstate for all C4 instance sizes is P0 There is really no reason for changing this for gaming workloads Turbo Boost mode is the desirable state The following table describes the C and Pstates for the c44xlarge and c48xlarge Instance size Default Max C State Recommended setting Default PState Recommended setting c44xlarge and smaller 1 1 0 0 c48xlarge 6a 1 0 0 a) Running cat /sys/module/intel_idle/parameters/max_cstate will show the max Cstate as 9 It is actually set to 6 which is the maximum possible value Use turbostat to see the Cstate and max turbo frequency that can be achieved on the c48xlarge instance Again these instructions were tested using the Amazon Linux AMI and only work on the c48xlarge instance but not 
on any of the other instance sizes in the C4 family First run the following turbostat command to install stress on your system (If turbostat is not installed on your system then install that too) sudo yum install stress The following command stress es two cores (ie two hyperthreads of two different physical cores): sudo turbostat debug stress c 2 t 60 ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 18 Here is a truncated printout of the results of running the command: Definitions: AVG_MHz: number of cycles executed divided by time elapsed %Busy: percent of time in "C0" state Bzy_MHz: average clock rate while the CPU was busy (in "c0" state) TSC_MHz: average MHz that the TSC ran during the entire interval The output shows that vCPUs 9 and 20 spent most of the time in the C0 state (%Busy) and hit close to the maximum turbo of 35 GHz (Bzy_MHz) vCPUs 2 and 27 the other hyperthreads of these cores are sitting in C1 C state (CPU% c1) waiting for instructions A frequency close to 35 GHz was achievable because the default Cstate on the c48xlarge instance was C6 and so most of the cores were in the C6 state (CPU%c6) ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 19 Next try stressing all 36 vCPUs to see the 32 GHz All Core Turbo: sudo turbostat debug stress c 36 t 60 Here is a truncated printout of the results of running the command: You can see that all of the vCPUs are in C0 for over 99% of the time (%Busy) and that they are all hitting 32 GHz (Bzy_MHz) when in C0 To set the CState to C1 edit the /boot/grub/grubconf file with your editor of choice For the first entry that begins with “kernel” (there may be more than one kernel entry you only need to edit the first one) add intel_idlemax_cstate=1 at the end of the line to set C1 as the deepest C state for idle cores: ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 20 # created by imagebuilder default=0 timeout=1 hiddenmenu title Amazon Linux 201409 (31426 2446amzn1x86_64) root (hd00) kernel /boot/vmlinuz 31426 2446amzn1x86_64 root=LABEL=/ console=ttyS0 intel_idlemax_cstate=1 initrd /boot/initramfs 31426 2446amzn1x86_64img Save the file and exit your editor Reboot your instance to enable the new kernel option Now rerun the turbostat command to see what changed after setting the Cstate to C1: sudo turbostat debug stress c 2 t 10 Here is a truncated printout of the results of running the command: ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 21 The output in the table above shows that all of the cores are now at a Cstate of C1 The maximum average frequency of the two vCPUs that were stressed vCPUs 16 and 2 in the example above is 32 GHz (Bzy_MHz) The maximum turbo of 35 GHz is no longer available since all of the vCPUs are at C1 Another way to verify that the Cstate is set to C1 is to run the following command: cat /sys/module/intel_idle/parameters/max_cstate Finally you may be wondering what the performance cost is when a core switches from C6 to C1 You can query the cpuidle file to show the exit latency in microseconds for various Cstates There is a latency penalty each time the CPU transitions between Cstates ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 22 In the default Cstate cpuidle shows that to move from C6 to C0 requires 133 microseconds: $ find /sys/devices/system/cpu/cpu0/cpuidle name latency o name name | xargs cat POLL 0 C1HSW 2 
C1EHSW 10 C3HSW 33 C6HSW 133 After you change the Cstate default to C1 you can see the difference in CPU idle Now we see that to move from C1 to C0 takes only 2 microseconds We have cut the latency by 131 microseconds by setting the vCPUs to C1 $ find /sys/devices/system/cpu/cpu0/cpuidle name latency o name name | xargs cat POLL 0 C1HSW 2 The instructions above are only relevant for the c48xlarge instance For the c44xlarge instance (and smaller instance sizes in the C4 family) the Cstate is already at C1 and all core turbo 32 GHz is available by default Turbostat will not show that the processors are exceeding the base of 29 GHz One problem is that even when using the debug option for turbostat the c44xlarge instance does not show the Avg_MHz or the Bzy_MHz values like in the output shown above for the c48xlarge instance One way to verify that the vCPUs on the c44xlarge instance are hitting the 32 GHz all core turbo is to use the showboost script from Brendan Gregg20 ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 23 For this to work on Amazon Linux you need to install the msr tools To do this run these commands: sudo yum groupin stall "Development Tools" wget https://launchpadnet/ubuntu/+archive/primary/+files/msr tools_13origtargz tar –zxvf msr tools_13origtargz sudo make sudo make install cd msrtools_13 wget https://rawgithubusercontentcom/brendangregg/msr cloud tools/master/showboost chmod +x showboost sudo /showboost The output only shows vCPU 0 but you can modify the options section to change the vCPU that will be displayed To show the CPU frequency run your game server or use turbostat stress and then run the showboost command to view the frequency for a vCPU Irqbalance Irqbalance is a service that distributes interrupts over the cores in the system to improve performance Irqbalance is recommended for most use cases except where you are pinning game servers to specific vCPUs or cores In that case disabling irqbalance may make sense Please test this with your specific workloads to see if there is a difference By default irqbalance is running on the C4 instance family To check if irqbalance is running on your instance run the following command: sudo service irqbalance status Irqbalance can be configured in the /etc/sysconfig/irqbalance file You want to see a fairly even distribution of interrupts across all the vCPUs You can view the status of interrupts to see if they are properly being distributed across vCPUs by running the following command: ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 24 cat /proc/interrupts Hyperthreading Each vCPU on the C4 instance family is a hyperthread of a physical core Hyperthreading can be disabled if you determine that this has a detrimental impact on the performance of your application However many gaming customers do not find a need to disable hyperthreading The table below shows the number of physical cores in each C4 instance size Instance Name vCPU Count Physical Core Count c4large 2 1 c4xlarge 4 2 c42xlarge 8 4 c44xlarge 16 8 c48xlarge 36 18 All of the vCPUs can be viewed by running the following: cat /proc/cpuinfo To get more specific output you can use the following: egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo In this output the “processor” is the vCPU number The “physical id” shows the processor socket ID For any C4 instance other than the c48xlarge this will be 0 The “core id” is the physical core number Each 
entry that has the same “physical id” and “core id” will be hyperthreads of the same core Another way to view the vCPUs pairs (ie hyperthreads) of each core is to look at the thread_siblings_list for each core This will show two numbers that are ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 25 the vCPUs for each core Change the X in “cpuX” to the vCPU number that you want to view cat /sys/devices/system/cpu/cpu X/topology/thread_siblings_list To disable hyperthreading edit the /boot/grub/grubconf file with your editor of choice For the first entry that begins with “kernel” (there may be more than one kernel entry you only need to edit the first one) add maxcpus=NUMBER at the end of the line where NUMBER is the number of actual cores in the C4 instance size you are using Refer to the table above on the number of physical cores in each C4 instance size # created by imagebuilder default=0 timeout=1 hiddenmenu title Amazon Linux 201409 (31426 2446amzn1x86_64) root (hd00) kernel /boot/vmlinuz 31426 2446amzn1x86_64 root=LABEL=/ console=ttyS0 maxcpus=18 initrd /boot/initramfs 31426 2446amzn1x86_64img Save the file and exit your editor Reboot your instance to enable the new kernel option Again this is one of those settings that you should test to determine if it provides a performance boost for your game This setting would likely need to be combined with CPU pinning before it would provide any performance boost In fact disabling hyperthreading without using pinning may degrade performance Many major AAA games running on AWS do not actually disable hyperthreading If there is no performance boost you can avoid this setting to avoid the administrative overhead of having to maintain this on each of your game servers CPU Pinning Many of the game server processes we see usually have a main thread and then a few ancillary threads Pinning the process for each game server to a core ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 26 (either a vCPU or physical core) is definitely an option but not a configuration we often see Usually pinning is done in situations where the game engine truly needs exclusive access to a core Often game companies simply allow the Linux scheduler to handle this Again this is something that should be tested but if the performance is sufficient without pinning it can save you administrative overhead to not have to worry about pinning As will be discussed in the NUMA section you can pin a process to both a CPU core and a NUMA node by running the following command (replacing the values for $phys_cpu_core and $associated_numa_node in addition to the game_server_executable name ): “numactl – physcpubind $phys_cpu_core –membind $associated_numa_node /game_server_executable ” Linux Scheduler The default Linux scheduler is called the Completely Fair Scheduler (CFS) 21 and it is responsible for executing processes by taking care of the allocation of CPU resources The primary goal of CFS is to maximize utilization of the vCPUs and in turn provide the best overall performance If you don’t pin game server processes to a vCPU then the Linux scheduler assigns threads for these processes There are a few parameters for tuning the Linux scheduler that can help with game servers The primary goal of the three parameters documented below is to keep tasks on processors as long as reasonable given the activity of the task We focus on the scheduler minimum granularity the scheduler wakeup granularity and the scheduler 
migration cost values. To view the default value of all of the kernel.sched options, run the following command:

sudo sysctl -A | grep -v "kernel.sched_domain" | grep "kernel.sched"

The scheduler minimum granularity value configures the time a task is guaranteed to run on a CPU before being replaced by another task. By default, this is set to 3 ms on the C4 instance family when running Amazon Linux. This value can be increased to keep tasks on the processors longer. An option would be to double this setting to 6 ms. Like all other performance recommendations in this whitepaper, these settings should be tested thoroughly with your game server. This and the other two scheduler commands do not persist the setting across reboots, so they need to be run from a startup script:

sudo sysctl -w 'kernel.sched_min_granularity_ns=New_Value'

The scheduler wakeup granularity value affects the ability of tasks being woken to replace the currently running task. The lower the value, the easier it will be for the waking task to force the removal. By default, this is set to 4 ms on the C4 instance family when running Amazon Linux. You have the option of halving this value to 2 ms and testing the result. Further reductions may also improve the performance of your game server.

sudo sysctl -w 'kernel.sched_wakeup_granularity_ns=New_Value'

The scheduler migration cost value sets the duration of time after a task's last execution during which the task is still considered "cache hot" when the scheduler makes migration decisions. Tasks that are "cache hot" are less likely to be migrated, which helps reduce the possibility that the task will be migrated. By default, this is set to 4 ms on the C4 instance family when running Amazon Linux. You have the option to double this value to 8 ms and test.

sudo sysctl -w 'kernel.sched_migration_cost_ns=New_Value'
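Because these three settings do not persist across reboots, one convenient pattern is to apply them together from a small boot-time script. The sketch below is our illustration rather than part of the original text; the example values simply follow the double/halve starting points suggested above (6 ms, 2 ms, and 8 ms) and should be validated against your own game server.

#!/usr/bin/env bash
# Example startup script (e.g., invoked from /etc/rc.local) that applies the
# scheduler tuning discussed above. Values are illustrative starting points.
set -euo pipefail

sysctl -w kernel.sched_min_granularity_ns=6000000     # 6 ms
sysctl -w kernel.sched_wakeup_granularity_ns=2000000  # 2 ms
sysctl -w kernel.sched_migration_cost_ns=8000000      # 8 ms

# Confirm the values now in effect
sysctl kernel.sched_min_granularity_ns kernel.sched_wakeup_granularity_ns kernel.sched_migration_cost_ns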
Memory

It is important that any customers running game servers on the c4.8xlarge instance pay close attention to the NUMA information below.

Performance Tuning Option: NUMA
Summary: On the c4.8xlarge, NUMA can become an issue since there are two NUMA nodes.
Notes: None of the C4 instance sizes smaller than the c4.8xlarge will have NUMA issues, since they all have one NUMA node.
Links or Commands: There are three options to deal with NUMA: CPU pinning, NUMA balancing, and the numad process.

Performance Tuning Option: Virtual Memory
Summary: A few virtual memory tweaks can provide a performance boost for some game servers.
Links or Commands: Add the following to /etc/sysctl.conf: vm.swappiness = New_Value (recommend starting by halving the current value set for your system); vm.dirty_ratio = New_Value (recommend going with the default value of 20 on Amazon Linux); vm.dirty_background_ratio = New_Value (recommend going with the default value of 10 on Amazon Linux).

NUMA

All of the current generation EC2 instances support NUMA. NUMA is a memory architecture used in multiprocessing systems that allows threads to access both the local memory and memory local to other processors, or a shared memory platform. The key concern here is that remote memory access is much slower than local memory access. There is a performance penalty when a thread accesses remote memory, and there are issues with interconnect contention. For an application that is not able to take advantage of NUMA, you want to ensure that the processor only uses the local memory as much as possible. This is only an issue for the c4.8xlarge instance, because you have access to two processor sockets that each represent a separate NUMA node. NUMA is not a concern on the smaller instances in the C4 family, since you are limited to a single NUMA node. In addition, the NUMA topology remains fixed for the lifetime of an instance.

The c4.8xlarge instance has two NUMA nodes. To view details on these nodes and the vCPUs that are associated with each node, run the following command:

numactl --hardware

To view the NUMA policy settings, run:

numactl --show

You can also view this information in the following directory (just look in each of the NUMA node directories):

/sys/devices/system/node

Use the numastat tool to view per-NUMA-node memory statistics for processes and the operating system. The -p option allows you to view this for a single process, while the -v option provides more verbose data.

numastat -p process_name
numastat -v

CPU Pinning

There are three recommended options to address potential NUMA performance issues. The first is to use CPU pinning, the second is automatic NUMA balancing, and the last is to use numad. These options should be tested to determine which provides the best performance for your game server.

First we will look at CPU pinning. This involves binding the game server process both to a vCPU (or core) and to a NUMA node. You can use numactl to do this. Change the values for $phys_cpu_core and $associated_numa_node, in addition to the game_server_executable name, in the following command for each game server running on the instance. See the numactl man page for additional options.22

numactl --physcpubind=$phys_cpu_core --membind=$associated_numa_node game_server_executable
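If you run many game server processes on a c4.8xlarge, the pinning step can be scripted. The sketch below is illustrative only: game_server_executable is a hypothetical binary, and the core-to-node layout is an assumption that must be verified against the output of numactl --hardware on your instance before use.

# Example only: spread several game server processes across the two NUMA nodes
NUM_SERVERS=8
for i in $(seq 0 $((NUM_SERVERS - 1))); do
  node=$((i % 2))                   # alternate between NUMA node 0 and node 1
  core=$(( (i / 2) + node * 18 ))   # assumed layout: cores 0-17 on node 0, 18-35 on node 1
  numactl --physcpubind=$core --membind=$node ./game_server_executable &
done
wait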
Automatic NUMA Balancing

The next option is to use automatic NUMA balancing. This feature attempts to keep the threads or processes in the processor socket where the memory that they are using is located. It also tries to move application data to the processor socket for the tasks accessing it. As of Amazon Linux AMI 2016.03, automatic NUMA balancing is disabled by default.23

To check whether automatic NUMA balancing is enabled on your instance, run the following command:

cat /proc/sys/kernel/numa_balancing

To permanently enable or disable NUMA balancing, set the Value parameter to 0 to disable or 1 to enable, and run the following commands:

sudo sysctl -w 'kernel.numa_balancing=Value'
echo 'kernel.numa_balancing = Value' | sudo tee /etc/sysctl.d/50-numa-balancing.conf

Again, these instructions are for Amazon Linux. Some distributions may set this in the /etc/sysctl.conf file.

Numad

Numad is the final option to look at. Numad is a daemon that monitors the NUMA topology and works to keep processes on the NUMA node for their cores. It is able to adjust to changes in system conditions. The article Mysteries of NUMA Memory Management Revealed explains the performance differences between automatic NUMA balancing and numad.24

To use numad, you need to disable automatic NUMA balancing first. To install numad on Amazon Linux, visit the Fedora numad site and download the most recent stable commit.25 From the numad directory, run the following commands to install numad:

sudo yum groupinstall "Development Tools"
wget https://git.fedorahosted.org/cgit/numad.git/snapshot/numad-0.5.tar.gz
tar -zxvf numad-0.5.tar.gz
cd numad-0.5
make
sudo make install

The logs for numad can be found in /var/log/numad.log, and there is a configuration file in /etc/numad.conf. There are a number of ways to run numad. The numad -u option sets the maximum usage percentage of a node. The default is 85%. The recommended setting covered in the Mysteries of NUMA article is -u100, which configures the maximum to 100%. This forces processes to stay on the local NUMA node for up to 100% of their memory requirement.

sudo numad -u100

Numad can be terminated by using the following command:

sudo /usr/bin/numad -i0

Finally, disabling NUMA completely is not a good choice, because you will still have the problem of remote memory access, so it is better to work with the NUMA topology. For the c4.8xlarge instance, we recommend taking some action for most game servers. We recommend testing the available options that we discussed to determine which provides the best performance. While none of these options may eliminate memory calls to the remote NUMA node for a process, each should provide a better experience for your game server.

You can test how well an option is doing by running your game servers on the instance and using the following command to see if there are any numa_foreign (that is, memory allocated to the other NUMA node but meant for this node) and numa_miss (that is, memory allocated to this node but meant for the other NUMA node) entries:

numastat -v

A more general way to test for NUMA issues is to use a tool like stress and then run numastat to see if there are foreign/miss entries:

stress --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * 0.097;}' < /proc/meminfo)k --vm-keep -m 10

Virtual Memory

There are also a few virtual memory tweaks that we see customers use that may provide a performance boost. Again, these should be tested thoroughly to determine whether they improve the performance of your game.

VM Swappiness

VM swappiness controls how the system favors anonymous memory or the page cache. Low values reduce the occurrence of swapping processes out of memory, which can decrease latency but reduce I/O performance. Possible values are 0 to 100. The default value on Amazon Linux is 60. The recommendation is to start by halving that value and then testing. Further reductions in the value may also help your game server performance. To view the current value, run the following command:

cat /proc/sys/vm/swappiness

To configure this parameter to persist across reboots, add the following with the new value to the /etc/sysctl.conf file:

vm.swappiness = New_Value

VM Dirty Ratio

VM dirty ratio forces a process to block and write out dirty pages to disk when a certain percentage of the system memory becomes dirty. The possible values are 0 to 100. The default on Amazon Linux is 20 and is the recommended value. To view the current value, run the following command:

cat /proc/sys/vm/dirty_ratio

To configure this parameter to persist across reboots, add the following with the new value to the /etc/sysctl.conf file:

vm.dirty_ratio = New_Value

VM Dirty Background Ratio

VM dirty background ratio forces the system to start writing data to disk when a certain percentage of the system memory becomes dirty. Possible values are 0 to 100. The default value on Amazon Linux is 10 and is the recommended value. To view the current
value, run the following command:

cat /proc/sys/vm/dirty_background_ratio

To configure this parameter to persist across reboots, add the following with the recommended value to the /etc/sysctl.conf file:

vm.dirty_background_ratio = 10

Disk

Performance tuning for disk is the least critical, because disk is rarely a bottleneck for multiplayer game servers. We have not seen customers experience any disk performance issues on the C4 instance family. The C4 instance family only uses Amazon Elastic Block Store (EBS) for storage, with no local instance storage, so C4 instances are EBS-optimized by default.26 Amazon EBS can provide up to 48,000 IOPS if needed. You can take standard disk performance steps, such as using separate boot and OS/game EBS volumes.

Performance Tuning Option: EBS Performance
Summary: C4 instances are EBS-optimized by default.
Notes: IOPS can be configured to fit the requirements of the game server.
Links or Commands: N/A

Benchmarking and Testing

Benchmarking

There are many ways to benchmark Linux. One option you may find useful is the Phoronix Test Suite.27 This open source, PHP-based suite provides a large number of benchmarking (and testing) options. You can run tests against existing benchmarks to compare results after successive tests. You can upload the results to OpenBenchmarking.org for online viewing and comparison.28 There are many benchmarks available, and most can be found on the OpenBenchmarking.org tests site.29 Some tests that can be useful for benchmarking in preparation for a game server are the cpu,30 multicore,31 processor,32 and universe33 tests. These tests usually involve multiple subtests. Be aware that some of the subtests may not be available for download or may not run properly.

To get started, you need to install the prerequisites first:

sudo yum groupinstall "Development Tools" -y
sudo yum install php-cli php-xml -y
sudo yum install {libaio,pcre,popt}-devel glibc-{devel,static} -y

Next, download and install Phoronix:

wget https://github.com/phoronix-test-suite/phoronix-test-suite/archive/master.zip
unzip master.zip
cd phoronix-test-suite-master
./install-sh ~/directory-of-your-choice/phoronix-tester

To install a test, run the following from the bin subdirectory of the directory you specified when you ran the install-sh command:

phoronix-test-suite install <test or suite name>

To install and run a test, use:

phoronix-test-suite benchmark <test or suite name>

You can choose to have the results uploaded to OpenBenchmarking.org. This option will be displayed at the beginning of the test. If you choose "yes," you can name the test run. At the end, a URL will be provided to view all the test results. Once the results are uploaded, you can rerun a benchmark using the benchmark result number of previous tests so the results are displayed side by side with previous results. You can repeat this process to display the results of many tests together. Usually you would want to make small changes and then rerun the benchmark. You can also choose not to upload the test results and instead view them in the command line output.

phoronix-test-suite benchmark TEST-RESULT-NUMBER

The screenshot below shows an example of the output displayed on OpenBenchmarking.org for a set of multicore benchmark tests run on the c4.8xlarge instance.
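If you prefer to run these suites unattended rather than answering the interactive prompts for each run, the Phoronix Test Suite also has a batch mode. The following is a minimal sketch, assuming the batch-setup and batch-benchmark subcommands and using the multicore suite referenced above as an example; adjust test names to your own needs.

# Example only: answer the batch-mode questions once (upload or not, result naming, etc.)
phoronix-test-suite batch-setup
# Run a suite unattended; results are stored under ~/.phoronix-test-suite/test-results/
phoronix-test-suite batch-benchmark pts/multicore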
CPU Performance Analysis

One of the best tools for CPU performance analysis or profiling is the Linux perf command.34 Using this command, you can record and then analyze performance data using perf record and perf report, respectively. Performance analysis is beyond the scope of this whitepaper, but a couple of great resources are the kernel.org wiki and Brendan Gregg's perf resources.35 The next section describes how to produce flame graphs using perf to analyze CPU usage.

Visual CPU Profiling

A common issue that comes up during game server testing is that while multiple game servers are running (often unpinned to vCPUs), one vCPU will hit near 100% utilization while the other vCPUs show low utilization. Troubleshooting this type of performance problem, and other similar CPU issues, can be a complex and time-consuming process. The process basically involves looking at the functions running on the CPUs and finding the code paths that are the most CPU heavy. Brendan Gregg's flame graphs allow you to visually analyze and troubleshoot potential CPU performance issues.36 Flame graphs allow you to quickly and easily identify the functions used most frequently during the window visualized. There are multiple types of flame graphs, including graphs for memory leaks, but we will focus on CPU flame graphs.37 We will use the perf command to generate the underlying data and then the flame graph scripts to create the visualization.

First, install the prerequisites:

# Install perf
sudo yum install perf
# Remove the need to use root for running perf record
sudo sh -c 'echo 0 > /proc/sys/kernel/perf_event_paranoid'
# Download the flame graph scripts
wget https://github.com/brendangregg/FlameGraph/archive/master.zip
# Finally, unzip the file that was downloaded. This creates a directory called
# FlameGraph-master where the flame graph executables are located.
unzip master.zip

To see interesting data in the flame graph, you either need to run your game server or a CPU stress tool. Once that is running, you run a perf profile recording. You can run the perf record against all vCPUs, against specific vCPUs, or against particular PIDs. Here is a table of the various options:

Option: -F — Frequency for the perf record. 99 Hz is usually sufficient for most use cases.
Option: -g — Used to capture stack traces (as opposed to on-CPU functions or instructions).
Option: -C — Used to specify the vCPUs to trace.
Option: -a — Used to specify that all vCPUs should be traced.
Option: sleep — Specifies the number of seconds for the perf record to run.

The following are the common commands for running a perf record for a flame graph, depending on whether you are looking at all the vCPUs or just one. Run these commands from the FlameGraph-master directory:

# Run perf record on all vCPUs
perf record -F 99 -a -g sleep 60
# Run perf record on specific vCPUs, specified by number after the -C option
perf record -F 99 -C CPU_NUMBER -g sleep 60

When the perf record is complete, run the following commands to produce the flame graph:

# Create the perf file. When you run this, you will get an error about "no symbols
# found." This can be ignored since we are generating this for flame graphs.
perf script > out.perf
# Use the stackcollapse program to fold stack samples into single lines
./stackcollapse-perf.pl out.perf > out.folded
# Use flamegraph.pl to render an SVG
./flamegraph.pl out.folded > kernel.svg
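The record, fold, and render steps lend themselves to a small wrapper when you profile a single game server process repeatedly. The following is a minimal sketch, assuming the FlameGraph scripts were unzipped into ~/FlameGraph-master and that you pass the PID of a running process; the script and output names are arbitrary.

#!/bin/bash
# Usage: ./flame.sh <pid> [seconds] - example wrapper, not part of the FlameGraph project
PID=$1
DURATION=${2:-60}
FG_DIR=~/FlameGraph-master

# Sample stacks for one process instead of whole vCPUs
perf record -F 99 -g -p "$PID" -- sleep "$DURATION"
perf script > out.perf
"$FG_DIR"/stackcollapse-perf.pl out.perf > out.folded
"$FG_DIR"/flamegraph.pl out.folded > "flamegraph-$PID.svg"
echo "Wrote flamegraph-$PID.svg"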
Finally, use a tool like WinSCP to copy the SVG file to your desktop so you can view it.

Below are two examples of flame graphs. The first was produced on a c4.8xlarge instance for 60 seconds while sysbench was running using the following options:

(for each in 1 2 4 8 16; do sysbench --test=cpu --cpu-max-prime=20000 --num-threads=$each run; done)

You can see how little of the total CPU processing on the instance was actually devoted to sysbench. You can hover over various elements of the flame graphs to get additional details about the number of samples and the percentage of time spent in each area.

The second graph was produced on the same c4.8xlarge instance for 60 seconds while running the following script:

(fulload() { dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null & }; fulload; read; killall dd)

The output presents a more interesting set of actions taking place under the hood.

Conclusion

The purpose of this whitepaper is to show you how to tune your EC2 instances to optimally run game servers on AWS. It focuses on performance optimization of the network, CPU, and memory on the C4 instance family when running game servers on Linux. Disk performance is a smaller concern because disk is rarely a bottleneck when it comes to running game servers. This whitepaper is meant to be a central compendium of information on the compute instances to help you run your game servers on AWS. We hope this guide saves you a lot of time by calling out key information, performance recommendations, and caveats so you can get up and running quickly on AWS and make your game launch as successful as possible.

Contributors

The following individuals and organizations contributed to this document:

Greg McConnel, Solutions Architect, Amazon Web Services
Todd Scott, Solutions Architect, Amazon Web Services
Dhruv Thukral, Solutions Architect, Amazon Web Services

Notes

1 https://aws.amazon.com/ec2/
2 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/c4-instances.html
3 https://en.wikipedia.org/wiki/Advanced_Vector_Extensions
4 https://aws.amazon.com/vpc/
5 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
6 https://aws.amazon.com/about-aws/global-infrastructure/
7 https://aws.amazon.com/ec2/faqs/#Enhanced_Networking
8 https://en.wikipedia.org/wiki/Single-root_input/output_virtualization
9 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
10 http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/enhanced-networking-windows.html
11 https://downloadcenter.intel.com/download/18700/Network-Adapter-Virtual-Function-Driver-for-10-Gigabit-Network-Connections
12 https://www.kernel.org/doc/Documentation/networking/scaling.txt
13 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
14 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
15 https://aws.amazon.com/premiumsupport/knowledge-center/network-throughput-benchmark-linux-ec2/
16 https://aws.amazon.com/premiumsupport/knowledge-center/network-throughput-benchmark-windows-ec2/
17 https://www.kernel.org/doc/Documentation/timers/timekeeping.txt
18 https://xenbits.xen.org/docs/4.3-testing/misc/tscmode.txt
19 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/processor_state_control.html
20 https://raw.githubusercontent.com/brendangregg/msr-cloud-tools/master/showboost
21 https://en.wikipedia.org/wiki/Completely_Fair_Scheduler
22 http://linux.die.net/man/8/numactl
23 https://aws.amazon.com/amazon-linux-ami/2016.03-release-notes/
24 http://rhelblog.redhat.com/2015/01/12/mysteries-of-numa-memory-management-revealed/#more-599
25 https://git.fedorahosted.org/git/numad.git
26 https://aws.amazon.com/ebs/
27 http://www.phoronix-test-suite.com/
28 http://openbenchmarking.org/
29 http://openbenchmarking.org/tests/pts
30 http://openbenchmarking.org/suite/pts/cpu
31 http://openbenchmarking.org/suite/pts/multicore
32 http://openbenchmarking.org/suite/pts/processor
33 http://openbenchmarking.org/suite/pts/universe
34 https://perf.wiki.kernel.org/index.php/Main_Page
35 http://www.brendangregg.com/perf.html
36 http://www.brendangregg.com/flamegraphs.html
37 http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html
|
General
|
consultant
|
Best Practices
|
Optimizing_MySQL_Running_on_Amazon_EC2_Using_Amazon_EBS
|
This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/optimizing-mysql-on-ec2-using-amazon-ebs/optimizing-mysql-on-ec2-using-amazon-ebs.html

Optimizing MySQL Running on Amazon EC2 Using Amazon EBS

First Published November 2017
Updated December 7, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction 1
Terminology 1
MySQL on AWS deployment options 2
Amazon EC2 block-level storage options 3
EBS volume features 4
EBS monitoring 4
EBS durability and availability 4
EBS snapshots 4
EBS security 5
Elastic Volumes 6
EBS volume types 6
General Purpose SSD volumes 6
Provisioned IOPS SSD (io1) volumes 7
MySQL considerations 8
Caching 8
Database writes 9
MySQL read replica configuration 9
MySQL replication considerations 10
Switching from a physical environment to AWS 11
MySQL backups 12
Backup methodologies 12
Creating snapshots of an EBS RAID array 15
Monitoring MySQL and EBS volumes 16
Latency 16
Throughput 18
MySQL benchmark observations and considerations 19
The test environment 19
Tuned compared to default configuration parameter testing 21
Comparative analysis of different storage types 22
Conclusion 25
Contributors 26
Further reading 26
Document revisions 26

Abstract

This whitepaper is intended for Amazon Web Services (AWS) customers who are considering deploying their MySQL database on Amazon Elastic Compute Cloud (Amazon EC2) using Amazon Elastic Block Store (Amazon EBS) volumes. This whitepaper describes the features of EBS volumes and how they can affect the security, availability, durability, cost, and performance of MySQL databases. There are many deployment options and configurations for MySQL on Amazon EC2. This whitepaper provides performance benchmark metrics and general guidance so AWS customers can make an informed decision about whether to deploy their MySQL workloads on Amazon EC2.
Introduction

MySQL is one of the world's most popular open source relational database engines. Its unique storage architecture provides you with many different ways of customizing database configuration according to the needs of your application. It supports transaction processing and high volume operations. Apart from the robustness of the database engine, another benefit of MySQL is that the total cost of ownership is low. Several companies are moving their MySQL workloads into the cloud to extend these cost and performance benefits. AWS offers many compute and storage options that can help you optimize your MySQL deployments.

Terminology

The following definitions are for the common terms that will be referenced throughout this paper:

• IOPS: Input/output (I/O) operations per second (Ops/s).
• Throughput: Read/write transfer rate to storage (MB/s).
• Latency: Delay between sending an I/O request and receiving an acknowledgment (ms).
• Block size: Size of each I/O (KB).
• Page size: Internal basic structure to organize the data in the database files (KB).
• Amazon Elastic Block Store (Amazon EBS) volume: Persistent block-level storage devices for use with Amazon Elastic Compute Cloud (Amazon EC2) instances. This whitepaper focuses on solid state drive (SSD) EBS volume types optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS.
• Amazon EBS General Purpose SSD volume: General Purpose SSD volume that provides a balance of price and performance. AWS recommends these volumes for most workloads. Currently, AWS offers two types of General Purpose SSD volumes: gp2 and gp3.
• Amazon EBS Provisioned IOPS SSD volume: Highest performance SSD volume, designed for mission-critical, low-latency, or high-throughput workloads. Currently, AWS offers two types of Provisioned IOPS SSD volumes: io1 and io2.
• Amazon EBS Throughput Optimized hard disk drive (HDD) (st1) volume: Low-cost HDD volume designed for frequently accessed, throughput-intensive workloads.

MySQL on AWS deployment options

AWS provides various options to deploy MySQL, such as the fully managed database service Amazon Relational Database Service (Amazon RDS) for MySQL. The Amazon Aurora database engine is designed to be wire-compatible with MySQL 5.6 and 5.7 using the InnoDB storage engine. You can also host MySQL on Amazon EC2 and self-manage the database, or browse the third-party MySQL offerings on the AWS Marketplace. This whitepaper explores the implementation and deployment considerations for MySQL on Amazon EC2 using Amazon EBS for storage. Although Amazon RDS and Amazon Aurora with MySQL compatibility are a good choice for most use cases on AWS, deployment on Amazon EC2 might be more appropriate for certain MySQL workloads.

With Amazon RDS, you can connect to the database itself, which gives you access to the familiar capabilities and configurations in MySQL; however, access to the operating system (OS) isn't available. This is an issue when you need OS-level access due to specialized configurations that rely on low-level
OS settings, such as when using MySQL Enterprise tools. For example, enabling MySQL Enterprise Monitor requires OS-level access to gather monitoring information. As another example, MySQL Enterprise Backup requires OS-level access to the MySQL data directory. In such cases, running MySQL on Amazon EC2 is a better alternative.

MySQL can be scaled vertically by adding additional hardware resources (CPU, memory, disk, network) to the same server. For both Amazon RDS and Amazon EC2, you can change the EC2 instance type to match the resources required by your MySQL database. Amazon Aurora provides a Serverless MySQL-Compatible Edition that allows compute capacity to be auto-scaled on demand based on application needs.

Both Amazon RDS and Amazon EC2 have an option to use EBS General Purpose SSD and EBS Provisioned IOPS volumes. The maximum provisioned storage limit for Amazon RDS database (DB) instances running MySQL is 64 TB. The EBS volume for MySQL on Amazon EC2, conversely, supports up to 16 TB per volume.

Horizontal scaling is also an option in MySQL, where you can add MySQL secondary servers, or read replicas, so that you can accommodate additional read traffic into your database. With Amazon RDS, you can easily enable this option through the AWS Management Console with the click of a button, the Command Line Interface (CLI), or the REST API. Amazon RDS for MySQL allows up to five read replicas. There are certain cases where you might need to enable specific MySQL replication features. Some of these features may require OS access to MySQL or advanced privileges to access certain system procedures and tables.

MySQL on Amazon EC2 is an alternative to Amazon RDS and Aurora for certain use cases. It allows you to migrate new or existing workloads that have very specific requirements. Choosing the right compute, network, and, especially, storage configurations, while taking advantage of their features, plays a crucial role in achieving good performance at an optimal cost for your MySQL workloads.

Amazon EC2 block-level storage options

There are two block-level storage options for EC2 instances. The first option is an instance store, which consists of one or more instance store volumes exposed as block I/O devices. An instance store volume is a disk that is physically attached to the host computer that runs the EC2 virtual machine (VM). You must specify instance store volumes when you launch the EC2 instance. Data on instance store volumes will not persist if the instance stops or ends, or if the underlying disk drive fails.

The second option is an EBS volume, which provides off-instance storage that persists independently from the life of the instance. The data on the EBS volume persists even if the EC2 instance that the volume is attached to shuts down or there is a hardware failure on the underlying host. The data persists on the volume until the volume is explicitly deleted. Refer to Solid state drives (SSD) in the AWS documentation for details about SSD-backed EBS volumes.

Due to the immediate proximity of the instance to the instance store volume, the I/O latency to an instance store volume tends to be lower than to an EBS volume. Use cases for instance store volumes include acting as a layer of cache or buffer, storing temporary database tables or logs, or
providing storage for read replicas. For a list of the instance types that support instance store volumes, refer to Amazon EC2 instance store within the Amazon EC2 User Guide for Linux Instances. The remainder of this paper focuses on EBS volume-backed EC2 instances.

EBS volume features

EBS monitoring

Amazon EBS automatically sends data points to Amazon CloudWatch at one-minute intervals at no charge. Amazon CloudWatch metrics are statistical data that you can use to view, analyze, and set alarms on the operational behavior of your volumes. The EBS metrics can be viewed by selecting the monitoring tab of the volume in the Amazon EC2 console. For more information about the EBS metrics collected by CloudWatch, refer to Amazon CloudWatch metrics for Amazon EBS.

EBS durability and availability

Durability in the storage subsystem for MySQL is especially important if you are storing user data, valuable production data, and individual data points. EBS volumes are designed for reliability, with a 0.1 percent to 0.2 percent annual failure rate (AFR), compared to the typical 4 percent of commodity disk drives. EBS volumes are backed by multiple physical drives for redundancy that are replicated within the Availability Zone to protect your MySQL workload from component failure.

EBS snapshots

You can perform backups of your entire MySQL database using EBS snapshots. These snapshots are stored in Amazon Simple Storage Service (Amazon S3), which is designed for 99.999999999% (11 nines) of durability. To satisfy your recovery point and recovery time objectives, you can schedule EBS snapshots using Amazon CloudWatch Events. Apart from providing backup, other reasons for creating EBS snapshots of your MySQL database include:

• Set up a non-production or test environment: You can share the EBS snapshot to duplicate the installation of MySQL in different environments, and also share between different AWS accounts within the same Region. For example, you can restore a snapshot of your MySQL database that's in a production environment to a test environment to duplicate and troubleshoot production issues.
• Disaster recovery: EBS snapshots can be copied from one AWS Region to another for site disaster recovery.

A volume that is restored from a snapshot loads slowly in the background, which means that you can start using your MySQL database right away. When you perform a query on MySQL that touches a table that has not been downloaded yet, the data will be downloaded from Amazon S3. You also have the option of enabling Amazon EBS fast snapshot restore to create a volume from a snapshot that is fully initialized at creation. Refer to Amazon EBS fast snapshot restore for more information. Best practices for restoring EBS snapshots are discussed in the MySQL backups section of this whitepaper.
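As a sketch of the disaster recovery bullet above, the AWS CLI can copy a completed snapshot into another Region. The snapshot ID, Regions, and description below are placeholders rather than values from this paper.

# Example only: copy a MySQL data-volume snapshot to a second Region for DR
aws ec2 copy-snapshot \
    --region us-west-2 \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --description "MySQL data volume snapshot copied for disaster recovery"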
EBS security

Amazon EBS supports several security features you can use from volume creation through utilization. These features prevent unauthorized access to your MySQL data. You can use tags and resource-level permissions to enforce security on your volumes upon creation. Tags are key-value pairs that you can assign to your AWS resources as part of infrastructure management. These tags are typically used to track resources, control cost, implement compliance protocols, and control access to resources through AWS Identity and Access Management (IAM) policies.

You can assign tags to EBS volumes at creation time, which allows you to enforce the management of your volume as soon as it is created. Additionally, you can have granular control over who can create or delete tags through IAM resource-level permissions. This granularity of control extends to the RunInstances and CreateVolume APIs, where you can write IAM policies that require the encryption of the EBS volume upon creation. After the volume is created, you can use the IAM resource-level permissions for Amazon EC2 API actions to specify the authorized IAM users or groups who can attach, delete, or detach EBS volumes from EC2 instances.

Protection of data in transit and at rest is crucial in most MySQL implementations. You can use Secure Sockets Layer (SSL) to encrypt the connection from your application to your MySQL database. To encrypt your data at rest, you can enable volume encryption at creation time. The new volume will get a unique 256-bit AES key, which is protected by the fully managed AWS Key Management Service. EBS snapshots created from encrypted volumes are automatically encrypted. The Amazon EBS encryption feature is available on all current generation instance types. For more information on the supported instance types, refer to the Amazon EBS Encryption documentation.

Elastic Volumes

The Elastic Volumes feature of EBS SSD volumes allows you to dynamically change the size, performance, and type of an EBS volume in a single API call, or within the AWS Management Console, without any interruption of MySQL operations. This simplifies some of the administration and maintenance activities of MySQL workloads running on current generation EC2 instances.

You can call the ModifyVolume API to dynamically increase the size of the EBS volume if the MySQL database is running low on usable storage capacity. Note that decreasing the size of the EBS volume isn't supported, so AWS recommends that you do not over-allocate the EBS volume size any more than necessary, to avoid paying for extra resources that you do not use. In situations where there is a planned increase in your MySQL utilization, you can either change your volume type or add additional IOPS. The time it takes to complete these changes will depend on the size of your MySQL volume. You can monitor the progress of the volume modification either through the AWS Management Console or the CLI. You can also create CloudWatch Events to send alerts after the changes are complete.
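The following AWS CLI sketch illustrates the Elastic Volumes workflow described above. The volume ID and target values are placeholders, the file system step assumes XFS mounted at /var/lib/mysql, and the actual size, type, and IOPS should come from your own capacity planning.

# Example only: grow a MySQL data volume and raise its performance without downtime
aws ec2 modify-volume \
    --volume-id vol-0123456789abcdef0 \
    --size 500 \
    --volume-type io1 \
    --iops 10000

# Track the modification until it reaches the optimizing or completed state
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0

# After the modification completes, extend the file system (XFS shown as an example)
sudo xfs_growfs /var/lib/mysql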
EBS volume types

General Purpose SSD volumes

General Purpose SSD volumes are designed to provide a balance of price and performance. The General Purpose SSD (gp3) volumes offer cost-effective storage that is ideal for a broad range of database workloads. These volumes deliver a consistent baseline rate of 3,000 IOPS and 125 MiB/s, included with the price of storage. You can provision additional IOPS (up to 16,000) and throughput (up to 1,000 MiB/s) for an additional cost. The maximum ratio of provisioned IOPS to provisioned volume size is 500 IOPS per GiB. The maximum ratio of provisioned throughput to provisioned IOPS is 0.25 MiB/s per IOPS. The following volume configurations support provisioning either maximum IOPS or maximum throughput:

• 32 GiB or larger: 500 IOPS/GiB x 32 GiB = 16,000 IOPS
• 8 GiB or larger and 4,000 IOPS or higher: 4,000 IOPS x 0.25 MiB/s/IOPS = 1,000 MiB/s

The older General Purpose SSD (gp2) volume is also a good option because it also offers balanced price and performance. To maximize the performance of the gp2 volume, you need to know how the burst bucket works. The size of the gp2 volume determines the baseline performance level of the volume and how quickly it can accumulate I/O credits. Depending on the volume size, baseline performance ranges between a minimum of 100 IOPS and a maximum of 16,000 IOPS. Volumes earn I/O credits at the baseline performance rate of 3 IOPS/GiB of volume size. The larger the volume size, the higher the baseline performance and the faster I/O credits accumulate. Refer to General Purpose SSD volumes (gp2) for more information related to the I/O characteristics and burstable performance of gp2 volumes.

In addition to changing the volume type, size, and provisioned throughput (for gp3 only), you can also use RAID 0 to stripe multiple gp2 or gp3 volumes together to achieve greater I/O performance. The RAID 0 configuration distributes the I/O across the volumes in a stripe. Adding an additional volume also increases the throughput of your MySQL database. Throughput is the read/write transfer rate, which is the I/O block size multiplied by the IOPS rate performed on the disk. AWS recommends adding volumes of the same size to the stripe set, since the performance of the stripe is limited to the worst-performing volume in the set. Also consider fault tolerance with RAID 0: the loss of a single volume results in a complete data loss for the array. If possible, use RAID 0 in a MySQL primary/secondary environment where data is already replicated to multiple secondary nodes.
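As a sketch of the striping approach above, a RAID 0 array can be assembled from two EBS volumes with mdadm. The device names, array name, mount point, and configuration file path are placeholders and vary by instance type and distribution; the example assumes two already-attached, equally sized volumes.

# Example only: stripe two attached EBS volumes into a single RAID 0 device
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdf /dev/xvdg

# Create a file system on the array and mount it for the MySQL data directory
sudo mkfs.xfs /dev/md0
sudo mkdir -p /var/lib/mysql
sudo mount /dev/md0 /var/lib/mysql

# Persist the array definition so it reassembles after a reboot (path differs by distro)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf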
Provisioned IOPS SSD (io1) volumes

Provisioned IOPS SSD (io1 and io2) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. Provisioned IOPS SSD volumes use a consistent IOPS rate, which you specify when you create the volume, and Amazon EBS delivers the provisioned performance 99.9 percent of the time.

• io1 volumes are designed to provide 99.8 to 99.9 percent volume durability, with an annual failure rate (AFR) no higher than 0.2 percent, which translates to a maximum of two volume failures per 1,000 running volumes over a one-year period.
• io2 volumes are designed to provide 99.999 percent volume durability, with an AFR no higher than 0.001 percent, which translates to a single volume failure per 100,000 running volumes over a one-year period.

The maximum ratio of provisioned IOPS to requested volume size (in GiB) is 50:1 for io1 volumes and 500:1 for io2 volumes. For example, a 100 GiB io1 volume can be provisioned with up to 5,000 IOPS, while a 100 GiB io2 volume can be provisioned with up to 50,000 IOPS.

To maximize the volume throughput, AWS recommends using an EBS-optimized EC2 instance type (note that most new EC2 instances are EBS-optimized by default with no extra charge). This provides dedicated throughput between your EBS volume and EC2 instance. As instance size and type affect volume throughput, choose an instance that has more channel bandwidth than the maximum throughput of the io1 volume. For example, an r5.12xlarge instance provides a maximum EBS bandwidth of 9,500 Mbps (1,187.5 MB/s), which is more than the maximum throughput of a single io1 volume. Another approach to increasing io1 throughput is to configure RAID 0 on your EBS volumes. For more information about RAID configuration, refer to RAID configuration in the EC2 User Guide.

MySQL considerations

MySQL offers a lot of parameters that you can tune to obtain optimal performance for every type of workload. This section focuses on the MySQL InnoDB storage engine. It also looks at the MySQL parameters that you can optimize to improve performance related to the I/O of EBS volumes.

Caching

Caching is an important feature in MySQL. Knowing when MySQL will perform a disk I/O instead of accessing the cache will help you tune for performance. When you are reading or writing data, an InnoDB buffer pool caches your table and index data. This in-memory area resides between your read/write operations and the EBS volumes. Disk I/O will occur if the data you are reading isn't in the cache, or when the data from dirty (that is, modified only in memory) InnoDB pages needs to be flushed to disk.

The buffer pool uses the Least Recently Used (LRU) algorithm for cached pages. When you size the buffer pool too small, the buffer pages may have to be constantly flushed to and from the disk, which affects performance and lowers query concurrency. The default size of the buffer pool is 128 MB. You can set this value to 80 percent of your server's memory; however, be aware that there may be paging issues if other processes are consuming memory. Increasing the size of the buffer pool works well when your dataset and queries can take advantage of it. For example, if you have one GiB of data and the buffer pool is configured at 5 GiB, then increasing the buffer pool size to 10 GiB will not make your database faster. A good rule of thumb is that the buffer pool should be large enough to hold your "hot" dataset, which is composed of the rows and indexes that are used by your queries. Starting with MySQL 5.7, innodb_buffer_pool_size can be set dynamically, which allows you to resize the buffer pool without restarting the server.

Database writes

InnoDB does not write directly to disk. Instead, it first writes the data into a doublewrite buffer. Dirty pages are the modified portion of these in-memory areas. The dirty pages are flushed if there isn't enough free space. The default setting (innodb_flush_neighbors = 1) results in sequential I/O by flushing the contiguous dirty pages in the same extent from the buffer pool. This option should be turned off (by setting innodb_flush_neighbors = 0) so that you can maximize performance by spreading the write operations over your EBS SSD volumes.
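A minimal sketch of the two settings discussed above, assuming MySQL 5.7 or later and a server with enough free memory for the chosen buffer pool size; the 8 GB figure is only an illustration, and the same values would normally also be written to my.cnf so they survive a restart.

# Example only: apply the buffer pool and flush-neighbors recommendations at runtime
mysql -u root -p -e "SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;"  # online resize (5.7+)
mysql -u root -p -e "SET GLOBAL innodb_flush_neighbors = 0;"                        # suited to SSD-backed EBS

# Check the values currently in effect
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'; SHOW VARIABLES LIKE 'innodb_flush_neighbors';"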
Another parameter that can be modified for write-intensive workloads is innodb_log_file_size. When the size of your log file is large, there are fewer data flushes, which reduces disk I/O. However, if your log file is too big, you will generally have a longer recovery time after a crash. MySQL recommends sizing the log files so that the server can spread the checkpoint flush activity over a longer period; the guidance from MySQL is to size the log files so they can accommodate about an hour of write activity.

MySQL read replica configuration

MySQL allows you to replicate your data so you can scale out your read-heavy workloads with a primary/secondary (read replica) configuration. You can create multiple copies of your MySQL database in one or more secondary databases so that you can increase the read throughput of your application. The availability of your MySQL database can also be increased with a secondary: when a primary instance fails, one of the secondary servers can be promoted, reducing the recovery time.

MySQL supports different replication methods. There is the traditional binary log file position-based replication, where the primary's binary log is synchronized with the secondary's relay log. The following diagram shows the binary log file position-based replication process.

Binary log file position-based replication process

Replication between primary and secondary using global transaction identifiers (GTIDs) was introduced in MySQL 5.6. A GTID is a unique identifier created and associated with each transaction committed on the server of origin (the primary). This identifier is unique not only to the server on which it originated, but across all servers in a given replication setup. With GTID-based replication, it is no longer necessary to keep track of the binary log file or position on the primary to replay those events on the secondary. The benefits of this solution include a more malleable replication topology, simplified failover, and improved management of multi-tiered replication.

MySQL replication considerations

Prior to MySQL 5.6, replication was single threaded, with only one event occurring at a time. Achieving throughput in this case was usually done by pushing a lot of commands at low latency. To obtain larger I/O throughput, your storage volume requires a larger queue depth. An EBS io1 SSD volume can have up to 20,000 IOPS, which in turn means it has a larger queue depth. AWS recommends using this volume type for workloads that require heavy replication.

As mentioned in the Provisioned IOPS SSD volumes section of this document, RAID 0 increases the performance and throughput of EBS volumes for your MySQL database. You can join several volumes together in a RAID 0 configuration to use the available bandwidth of the EBS-optimized instances and deliver the additional network throughput dedicated to EBS.

For MySQL 5.6 and above, replication is multi-threaded. This performs well on
EBS volumes because it relies on parallel requests to achieve maximum I/O throughput. During replication there are sequential and random traffic patterns: sequential writes for the binary log (binlog) shipment from the primary server, sequential reads of the binlog and relay log, and, additionally, the traffic of regular random updates to your data files. Using RAID 0 in this case improves the parallel workloads, since it spreads the data across the disks and their queues. However, you must be aware of the penalty for sequential and single-threaded workloads, because extra synchronization is needed to wait for acknowledgments from all members in the stripe. Only use RAID 0 if you need more throughput than a single EBS volume can provide.

Switching from a physical environment to AWS

Customers migrating from a physical MySQL Server environment into AWS usually have a battery-backed caching RAID controller, which allows data in the cache to survive a power failure. Synchronous operations are set up so that all I/O is committed to the RAID controller cache instead of the OS main memory. Therefore, it is the controller, instead of the OS, that completes the write process. Due to this environment, the following MySQL parameters are used to ensure that there is no data loss:

On the primary side:

sync_binlog = 1
innodb_flush_log_at_trx_commit = 1

On the secondary side:

sync_master_info = 1
sync_relay_log = 1
sync_relay_log_info = 1
innodb_flush_log_at_trx_commit = 1

These parameters cause MySQL to call fsync() to write the data from the buffer cache to the disk after any operation on the binlog and relay log. This is an expensive operation that increases the amount of disk I/O. The immediate synchronize-log-to-disk MySQL parameter does not provide any benefit for EBS volumes; in fact, it causes degraded performance. EBS volumes are automatically replicated within an Availability Zone, which protects them from component failures. Turning off the sync_binlog parameter allows the OS to determine when to flush the binlog and relay log buffers to the disk, reducing I/O. The innodb_flush_log_at_trx_commit = 1 setting is required for full ACID compliance.

If you need to synchronize the log to disk for every transaction, then you may want to consider increasing the IOPS and throughput of the EBS volume. In this situation, you may want to separate the binlog and relay log from your data files onto separate EBS volumes. You can use Provisioned IOPS SSD volumes for the binlog and relay log to have more predictable performance. You may also use the local SSD of the MySQL secondary instance if you need more throughput and IOPS.
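One way to express the trade-off described above is a small my.cnf fragment applied from the shell. The sketch below is illustrative, not a recommendation for every workload: it keeps innodb_flush_log_at_trx_commit = 1 for ACID compliance but relaxes sync_binlog on an EBS-backed replica, which should only be done after you have weighed the durability implications for your own data. It assumes the configuration file is /etc/my.cnf and the service is named mysqld.

# Example only: append relaxed binlog sync settings on an EBS-backed replica, then restart MySQL
sudo tee -a /etc/my.cnf > /dev/null <<'EOF'
[mysqld]
# Let the OS decide when to flush the binary log to disk (reduces fsync-driven I/O on EBS)
sync_binlog = 0
# Keep full ACID durability for committed transactions
innodb_flush_log_at_trx_commit = 1
EOF
sudo systemctl restart mysqld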
MySQL backups

Backup methodologies

There are several approaches to protecting your MySQL data, depending on your Recovery Time Objective (RTO) and Recovery Point Objective (RPO) requirements. The choice of performing a hot or cold backup is based on the uptime requirement of the database. When it comes to meeting your RPO, your backup approach will be based on either a logical, database-level backup or a physical, EBS volume-level backup. This section explores the two general backup methodologies.

The first general approach is to back up your MySQL data using database-level methodologies. This can include making a hot backup with MySQL Enterprise Backup, making backups with mysqldump or mysqlpump, or making incremental backups by enabling binary logging. If the primary database server exhibits performance issues during a backup, a replication secondary database server or a read replica database server can be created to provide the source data for the backups, alleviating the backup load from the primary database server. One approach can be to back up from a secondary server's SSD data volume to a backup server's Throughput Optimized HDD (st1) volume. The high throughput of 500 MiB/s per volume and large 1 MiB I/O block size make it an ideal volume type for sequential backups, meaning it can use the larger I/O blocks. The following diagram shows a backup server using the MySQL secondary server to read the backup data.

Using an st1 volume as a backup source

Another option is to have the MySQL secondary server back up the database files directly to Amazon Elastic File System (Amazon EFS) or Amazon S3. Amazon EFS is an elastic file system that stores its data redundantly across multiple Availability Zones. Both the primary and the secondary instances can attach to the EFS file system. The secondary instance can initiate a backup to the EFS file system, from which the primary instance can do a restore. Amazon S3 can also be used as a backup target. Amazon S3 can be used in a manner similar to Amazon EFS, except that Amazon S3 is object-based storage rather than a file system. The following diagram depicts the option of using Amazon EFS or Amazon S3 as a backup target.

Using Amazon EFS or Amazon S3 as a backup target

The second general approach is to use volume-level EBS snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs. When you delete a snapshot, only the data unique to that snapshot is removed. Active snapshots contain all of the information needed to restore your data (from the time the snapshot was taken) to a new EBS volume.

One consideration when utilizing EBS snapshots for backups is to make sure the MySQL data remains consistent. During an EBS snapshot, any data not flushed from the InnoDB buffer cache to disk will not be captured. The MySQL command flush tables with read lock will flush all the data in the tables to disk and allow database reads, but it puts a lock on database writes. The lock only needs to last for a brief period of time, until the EBS snapshot starts. The snapshot takes a point-in-time capture of all the content within that volume. The database lock needs to be active until the snapshot process starts, but it doesn't have to wait for the snapshot to complete before releasing the lock.
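The lock-then-snapshot sequence above can be scripted so that the write lock is held only until the snapshot request is accepted. The sketch below is one possible approach, assuming a Unix mysql client (whose system command shells out while the session, and therefore the lock, is still open), an AWS CLI configured on the instance, credentials supplied via ~/.my.cnf, and a placeholder volume ID.

# Example only: hold FLUSH TABLES WITH READ LOCK just long enough to start the EBS snapshot
mysql -u root <<'EOF'
FLUSH TABLES WITH READ LOCK;
-- shell out while this session (and therefore the lock) remains open
system aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "MySQL consistent snapshot"
UNLOCK TABLES;
EOF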
it doesn’t have to wait for the snapshot to complete before releasing the lock You can also combine these approaches by using database level backups for more granular objects such as databases or tables and using EBS snapshots for larger scale operations such as recreating the database server restoring the entire volume or migrating a database server to another Availability Zone or another Region for disaster recovery (DR) Creating snapshots of an EB S RAID array When you take a snapshot of an attached EBS volume that is in use the snapshot excludes data cached by applications or the operating system For a single EBS volume this might not be a problem However when cached data is excluded from snap shots of multiple EBS volumes in a RAID array restoring the volumes from the snapshots can degrade the integrity of the array When creating snapshots of EBS volumes that are configured in a RAID array it is critical that there is no data I/O to or from the volumes when the snapshots are created RAID arrays introduce data interdependencies and a level of complexity not present in a single EBS volume configuration To create an application consistent snapshot of your RAID array stop applications from writing to the RAID array and flush all caches to disk At the database level (recommended) you can use the flush tables with read lock command Then ensure that the associated EC2 instance is no longer writing to the RAID array by taking steps such as free zing the file system with the sync and fsfreeze commands unmounting the RAID array or shutting down the associated EC2 instance After completing the steps to halt all I/O take a snapshot of each EBS volume Restoring a snapshot creates a new EBS volume then you assemble the new EBS volumes to build the RAID volumes After that you mount the file system and then start the database To avoid the performance d egradation after the restore AWS recommend s initializing the EBS volume The initialization of a large EBS volume can take some time to complete because data blocks have to be fetched from the S3 bucket where the snapshots are stored To make the database available in a shorter amount of time the initialization of the EBS volume can be done through multi threaded reads of all the required database files for th e engine recovery This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingmysqlonec2usingamazonebs/optimizing mysqlonec2usingamazonebshtmlAmazon Web Services Optimizing MySQL Running on Amazon EC2 Using Amazon EBS 16 Monitoring MySQL and EBS volumes Monitoring provides visibility into your MySQL workload Understanding the resource utilization and performance of MySQL usually involves correlating the data from the database performance metrics gathere d from MySQL and infrastructure related metrics in CloudWatch There are many tools that you can use to monitor MySQL some of which include : • Tools from MySQL such as MySQL Enterprise Monitor MySQL Workbench Performance and MySQL Query Analyzer • Third party software tools and plugins • MySQL monitoring tools at the AWS Marketplace When the bottleneck for MySQL performance is related to storage database administrators usually look at latency when they run into performance issues of transactional operations Further if the performance is degraded due to MySQL loading or replicating data then throughput is evaluated These issues are diagnose d by looking at the EBS volume metrics collected by CloudWatch Latency Latency is 
Monitoring MySQL and EBS volumes
Monitoring provides visibility into your MySQL workload. Understanding the resource utilization and performance of MySQL usually involves correlating the database performance metrics gathered from MySQL with the infrastructure-related metrics in CloudWatch. There are many tools that you can use to monitor MySQL, some of which include:
• Tools from MySQL, such as MySQL Enterprise Monitor, MySQL Workbench Performance, and MySQL Query Analyzer
• Third-party software tools and plugins
• MySQL monitoring tools in the AWS Marketplace
When the bottleneck for MySQL performance is related to storage, database administrators usually look at latency when they run into performance issues with transactional operations. If performance is degraded while MySQL is loading or replicating data, throughput is evaluated instead. These issues are diagnosed by looking at the EBS volume metrics collected by CloudWatch.
Latency
Latency is defined as the delay between a request and its completion. Latency shows up as slow queries, which can be diagnosed in MySQL by enabling the MySQL performance schema. Latency can also occur at the disk I/O level, which can be viewed in the "Average Read Latency (ms/op)" and "Average Write Latency (ms/op)" graphs on the monitoring tab of the EC2 console. This section covers the factors contributing to high latency.
High latency can result from exhausting the available IOPS of your EBS volume. For gp2 volumes, the CloudWatch metric BurstBalance is provided so that you can determine whether you have depleted the available IOPS burst credits. When bandwidth (KiB/s) and throughput (Ops/s) are reduced, latency (ms/op) increases.
BurstBalance metric showing that when bandwidth and throughput are reduced, latency increases
Disk queue length can also contribute to high latency. Disk queue length refers to the outstanding read/write requests that are waiting for resources to become available. The CloudWatch metric VolumeQueueLength shows the number of pending read/write operation requests for the volume. This metric is an important measurement to monitor to determine whether you have reached full utilization of the provisioned IOPS on your EBS volumes. Ideally, the EBS volumes should maintain an average queue length of about one per minute for every 200 provisioned IOPS. Use the following formula to calculate how many IOPS will be consumed based on the disk queue length:
Consumed IOPS = 200 IOPS * VolumeQueueLength
For example, say you have assigned 2000 IOPS to your EBS volume. If the VolumeQueueLength increases to 10, then you consume all of your 2000 provisioned IOPS, which results in increased latency. Pending MySQL operations will stack up if you observe an increase in VolumeQueueLength without any corresponding increase in the provisioned IOPS, as shown in the following screenshot.
Average queue length and average read latency metrics
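If you want to pull these volume metrics outside of the console, a minimal AWS CLI sketch is shown below; the volume ID is a placeholder and the three-hour window is arbitrary.
# Average BurstBalance and VolumeQueueLength for one volume over the last three hours.
for metric in BurstBalance VolumeQueueLength; do
  aws cloudwatch get-metric-statistics \
    --namespace AWS/EBS --metric-name "$metric" \
    --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
    --start-time "$(date -u -d '3 hours ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time   "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 --statistics Average
done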
Throughput
Throughput is the read/write transfer rate to storage. It affects MySQL database replication, backup, and import/export activities. When considering which AWS storage option to use to achieve high throughput, you must also consider that MySQL has random I/O caused by the small transactions that are committed to the database. To accommodate these two different traffic patterns, our recommendation is to use io1 volumes on an EBS-optimized instance. In terms of throughput, io1 volumes have a maximum of 320 MB/s per volume, while gp2 volumes have a maximum of 160 MB/s per volume.
Insufficient throughput to the underlying EBS volumes can cause MySQL secondary servers to lag and can also cause MySQL backups to take longer to complete. To diagnose throughput issues, CloudWatch provides the Volume Read/Write Bytes metrics (the amount of data being transferred) and the Volume Read/Write Ops metrics (the number of I/O operations). In addition to using CloudWatch metrics, AWS recommends reviewing AWS Trusted Advisor, which raises an alert when an EBS volume attached to an instance isn't EBS-optimized. EBS optimization ensures dedicated network throughput for your volumes. An EBS-optimized instance has segregated traffic, which is useful because many EBS volumes have significant network I/O activity. Most new instance types are EBS-optimized by default at no extra charge.
MySQL benchmark observations and considerations
Testing your MySQL database will help you determine what type of volume you need and ensure that you are choosing the most cost-effective and performant solution. There are a couple of ways to determine the number of IOPS that you need. For an existing workload, you can monitor the current consumption of EBS volume IOPS through the CloudWatch metrics detailed in the Monitoring MySQL and EBS volumes section of this document. If this is a new workload, you can do a synthetic test, which will provide you with the maximum number of IOPS that your new AWS infrastructure can achieve. If you are moving your workload to the AWS Cloud, you can run a tool such as iostat to profile the IOPS required by your workload. While you can use a synthetic test to estimate your storage performance needs, the best way to quantify them is by profiling an existing production database, if that is an option.
Performing a synthetic test on the EBS volume allows you to specify the amount of concurrency and throughput that you want to simulate. Testing will allow you to determine the maximum number of IOPS and the throughput needed for your MySQL workload. There are a couple of tools that you can use:
• Mysqlslap is an application that emulates client load for MySQL Server.
• Sysbench is a popular open source benchmark used to test open source database management systems (DBMS).
The test environment
To simulate the MySQL client for the Sysbench tests, this example uses an r5.8xlarge instance type with a 10-gigabit network interface.
Table 1: Sysbench machine specifications
Sysbench server
  Instance type  r5.8xlarge
  Memory         256 GB
  CPU            32 vCPUs
All of the MySQL servers tested used the r5.8xlarge instance type.
Table 2: MySQL server machine specifications
MySQL server
  Instance type      r5.8xlarge
  Memory             256 GB
  CPU                32 vCPUs
  Storage            500 GB gp2 EBS volume
  Root volume        256 GB gp2
  MySQL data volume  500 GB (gp2, gp3, io1, or io2)
To increase performance on the Sysbench Linux client, enable Receive Packet Steering (RPS) and Receive Flow Steering (RFS). RPS uses a hash to determine which CPU processes a packet; RFS steers packets to the CPUs where the consuming application threads run. Enable RPS and the per-queue flow counts with the following shell commands:
sudo sh -c 'for x in /sys/class/net/eth0/queues/rx-*; do echo ffffffff > $x/rps_cpus; done'
sudo sh -c "echo 4096 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt"
sudo sh -c "echo 4096 > /sys/class/net/eth0/queues/rx-1/rps_flow_cnt"
Enable RFS with the following shell command:
sudo sh -c "echo 32768 > /proc/sys/net/core/rps_sock_flow_entries"
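These settings take effect immediately but do not survive a reboot, so reapply them from a boot script (for example, /etc/rc.local or a small systemd unit). A quick way to confirm they were applied:
# The rx queue masks and flow counts should show the values written above.
cat /sys/class/net/eth0/queues/rx-*/rps_cpus
cat /sys/class/net/eth0/queues/rx-*/rps_flow_cnt
cat /proc/sys/net/core/rps_sock_flow_entries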
Tuned compared to default configuration parameter testing
Perform a Sysbench test to compare the difference between the tuned and the default MySQL parameter configurations (refer to Table 3). Use a MySQL dataset of 100 tables with 10 million records per table for the test.
Table 3: MySQL parameters
Parameter                Default        Tuned
innodb_buffer_pool_size  134MB          193G
innodb_flush_method      fsync (Linux)  O_DIRECT
innodb_flush_neighbors   1              0
innodb_log_file_size     50MB           256MB
Run the following Sysbench read/write command:
$ sysbench oltp_read_write.lua <connection info> --table_size=10000000 --max-requests=0 --simple_ranges=0 --distinct_ranges=0 --sum_ranges=0 --order_ranges=0 --point_selects=0 --time=3600 --threads=1024 --rand-type=uniform run
Results of the Sysbench test are presented in Table 4. Under the tuned configuration, the MySQL server processed approximately 12 times the number of transactions per second compared to the default configuration.
Table 4: Sysbench results
Sysbench metric                        Default                  Tuned
Queries: read                          17511928                 223566532
Queries: write                         5003408                  63876152
Queries: other                         2501704                  31938076
Queries: total                         25017040                 319380760
Transactions                           1250852 (347.37/sec)     15969038 (4434.57/sec)
Queries                                25017040 (6947.43/sec)   319380760 (88691.33/sec)
Ignored errors                         0 (0.00/sec)             0 (0.00/sec)
Reconnects                             0 (0.00/sec)             0 (0.00/sec)
General statistics: total time         3600.9046s               3601.0355s
General statistics: number of events   1250852                  15969038
Latency (ms): min                      7.72                     48.43
Latency (ms): avg                      2947.65                  230.90
Latency (ms): max                      95885.04                 6158.04
Latency (ms): 95th percentile          9284.15                  1258.08
Latency (ms): sum                      3687074024.45            3687189581.27
Thread fairness: events (avg/stddev)   1221.5352/48.86          15594.7637/45.63
Thread fairness: runtime (avg/stddev)  3600.6582/0.11           3600.7711/0.04
Other InnoDB configuration options to consider for better performance of heavy I/O MySQL workloads are detailed in the MySQL Optimizing InnoDB Disk I/O documentation. When considering these configurations, AWS suggests testing after deployment to ensure that the change is safe for your application.
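One way to apply the tuned values from Table 3 is through an extra MySQL option file; a minimal sketch is shown below. The file path is distribution-dependent and the buffer pool size assumes the 256 GB instance used in this test, so adjust both for your environment; changing innodb_log_file_size requires a restart of the server.
# Append the tuned settings and restart MySQL so the new redo log size takes effect.
sudo tee /etc/my.cnf.d/ebs-tuning.cnf <<'EOF'
[mysqld]
innodb_buffer_pool_size = 193G
innodb_flush_method     = O_DIRECT
innodb_flush_neighbors  = 0
innodb_log_file_size    = 256M
EOF
sudo systemctl restart mysqld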
Comparative analysis of different storage types
Conduct the test across four different MySQL server configurations:
• MySQL Server, EBS General Purpose SSD (gp2)
  o 500 GB SQL data drive
  o 1500 baseline IOPS / 3000 burstable IOPS
• MySQL Server, EBS General Purpose SSD (gp3)
  o 500 GB SQL data drive
  o 3000 provisioned IOPS
• MySQL Server, EBS Provisioned IOPS SSD (io1)
  o 500 GB SQL data drive
  o 3000 provisioned IOPS
• MySQL Server, EBS Provisioned IOPS SSD (io2)
  o 500 GB SQL data drive
  o 3000 provisioned IOPS
Note: Unless specified, all EBS volumes are unencrypted.
Sysbench client and MySQL server setup
Table 5: Server setup for MySQL database and Sysbench client
Use case                      Instance type  vCPUs  Memory  Instance storage  EBS optimized  Network
MySQL database                r5.8xlarge     32     256     EBS only          Yes            10 Gigabit
Sysbench client (AWS Cloud9)  r5.8xlarge     32     256     EBS only          Yes            10 Gigabit
Tests were performed using the Sysbench read/write OLTP test by running the following Sysbench command over a one-hour period:
$ sysbench oltp_read_write.lua <connection info> --table_size=10000000 --max-requests=0 --simple_ranges=0 --distinct_ranges=0 --sum_ranges=0 --order_ranges=0 --point_selects=0 --time=3600 --threads=1024 --rand-type=uniform run
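For a new environment, the dataset has to be loaded before the run. A minimal sketch of the prepare step is shown below, assuming the same 100-table, 10-million-row layout used in the earlier test; the thread count is arbitrary.
$ sysbench oltp_read_write.lua <connection info> --tables=100 --table_size=10000000 --threads=32 prepare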
Results
The tests of the four volume configurations yielded similar results, with each server processing approximately 3,600 Sysbench transactions per second. No discernible difference was observed while running the performance consistency test across the four volumes. Upon closer examination, the lowest latency is offered by the io2 volume, with less than one millisecond of write latency observed for the same workload.
Table 6: Performance analysis of the same MySQL workload on different EBS volume types
Sysbench metric            gp2                       gp3                       io1                       io2
SQL statistics
  read queries             17511928                  181507690                 188343428                 186051460
  write queries            5003408                   51859340                  53812408                  53157560
  other queries            2501704                   25929670                  26906204                  26578780
  total queries            25017040                  259296700                 269062040                 265787800
  transactions             12508520 (3470.37/sec)    12964835 (3600.93/sec)    13453102 (3733.12/sec)    13289390 (3690.20/sec)
  queries                  250170400 (69470.43/sec)  259296700 (72018.53/sec)  269062040 (74662.42/sec)  265787800 (73803.92/sec)
Latency (ms)
  min                      7.72                      6.82                      6.1                       6.02
  avg                      294.65                    284.35                    274.24                    277.45
  max                      95885.04                  43718.24                  33179.31                  34803.75
  95th percentile          928.15                    816.63                    943.16                    861.95
  sum                      3687074024.45             3686559158.83             3689386834.2              3687138536.08
EBS statistics
  Write latency (ms)         1.1                     1.01                      0.994                     0.824
  Volume queue length (count)  3.49                  3.01                      3.227                     2.71
Conclusion
The AWS Cloud provides several options for deploying MySQL and the infrastructure supporting it. Amazon RDS for MySQL provides a very good platform to operate, scale, and manage your MySQL database in AWS; it removes much of the complexity of managing and maintaining your database, allowing you to focus on improving your applications. However, there are cases where MySQL on Amazon EC2 and Amazon EBS works better for particular workloads and configurations. It is important to understand your MySQL workload and to test it. This can help you decide which EC2 instance and storage to use for optimal performance and cost.
For a balance of performance and cost, General Purpose SSD Amazon EBS volumes (gp2 and gp3) are good options. To maximize the benefit of gp2, you need to understand and monitor the burst credit; this will help you determine whether you should consider other volume types. gp3, on the other hand, provides a predictable 3000 IOPS baseline and 125 MiB/s of throughput regardless of volume size. With gp3 volumes, you can provision IOPS and throughput independently, without increasing storage size, at costs up to 20 percent lower per GB compared to gp2 volumes. If you have mission-critical MySQL workloads that need more consistent IOPS, you should use Provisioned IOPS volumes (io1 or io2). To maximize the benefit of both General Purpose and Provisioned IOPS volume types, AWS recommends using EBS-optimized EC2 instances, which provide dedicated network bandwidth for your EBS volumes, and tuning your database parameters to optimize storage consumption. You can cost-effectively operate your MySQL database in AWS without sacrificing performance by taking advantage of the durability, availability, and elasticity of EBS volumes.
Contributors
Contributors to this document include:
• Marie Yap, Enterprise Solutions Architect, Amazon Web Services
• Ricky Chang, Cloud Infrastructure Architect, Amazon Web Services
• Kehinde Otubamowo, Database Partner Solutions Architect, Amazon Web Services
• Arnab Saha, Cloud Support DBA, Amazon Web Services
• Chi Dinjors, Cloud Support Engineer, Amazon Web Services
Further reading
For additional information, refer to:
• MySQL Performance Tuning 101
• MySQL 5.7 Performance Tuning Immediately After Installation
• MySQL on EC2: Consistent Backup and Log Purging using EBS Snapshots and N2WS
• MySQL Database Backup Methods
Document revisions
Date               Description
December 7, 2021   Updated for technical accuracy
November 2017      First publication
|
General
|
consultant
|
Best Practices
|
Oracle_WebLogic_Server_12c_on_AWS
|
ArchivedOracle WebLogic Server 12c on AWS December 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 2 © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or service s each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 3 Contents Introduction 5 Oracle WebLogic on AWS 6 Oracle WebLogic Components 6 Oracle WebLogic Architecture on AWS 8 Auto Scaling your Oracle WebLogic Cluster 15 Monitoring your Infrastructure 19 AWS Security and Compliance 20 Oracle WebLogic on AWS Use Cases 23 Conclusion 24 Contributors 25 Document Revisions 25 ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 4 Abstract This whitepaper provides guidance on how to deploy Oracle WebLogic Server 12cbased applications on AWS This paper provides a reference architecture and information about best practices for high availability security scalability and performance when yo u deploy Oracle WebLogic Server 12cbased applications on AWS Also included is information about cost optimization using AWS A uto Scaling The target audience of this whitepaper is Solution Architects Systems Architects and System Administrators with a basic understanding of cloud computing AWS and Oracle WebLogic 12c ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 5 Introduction Many enterprises today rely on J2EE application servers for deploying their mission critical applications Oracle Web Logic Server is a popular Java application server for deploying such applications You can use various AWS services to deploy Oracle WebLogic Server 12cbased applications on AWS in a secure highly available and cost effective manner With auto scaling you can dynamically scale the compute resou rces required for your application thereby keeping your costs low and using Amazon Elastic File System (EFS) for shared storage This whitepaper assumes that you have a basic understanding of Amazon Web Services For an overview of AWS Services see Overview of Amazon Web Services ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 6 Oracle WebLogic on AWS It is important to have a good understanding of the architecture of Oracle WebLogic Server 12c ( Oracle WebLogic ) and the major WebLogic components to successfully deploy and configure it on AWS Oracle WebLogic Components This diagram shows the major components of Oracle WebLogic Application Server Each WebLogic deployment has a WebLogic Domain which typically contains multiple WebLogic Server instances A WebLogic domain is the basic unit of administration for WebLogic Server instances : it is a group of logically related WebLogic Server resources For 
example you can have one WebLogic domain for each application There are two types of WebLogic Server instances in a domain : a single Administration Server and one or more Managed S ervers Each WebLogic Server instance runs its own Java Virtual Machine (JVM) and can be configured individually You deploy and run your web applications EJBs and other resources on the Managed S erver instances T he Administration S erver is used ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 7 to configur e manage and monitor the resources in the domain including the Managed Server instances WebLogic Server instances referred to as WebLogic Server Machines can run on physical or virtual servers ( such as Amazon EC2) or in conta iners The Node Manager is a utility used to start stop or restart the Administration server or Managed Server instances You can create a group of multiple WebLogic Managed Servers which is known as a WebLogic cluster WebLogic clusters support load ba lancing and failover and are required for high a vailability and scalability of your production deployments You should deploy your WebLogic cluster across multiple WebLogic Machines so that the loss of a single WebLogic Machine does not impact the availabi lity of your application ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 8 Oracle WebLogic Architecture on AWS This reference architecture diagram shows how you can deploy a web application on Oracle WebLogic on AWS This is a basic combined tier architecture with static HTTP pages servlets and EJBs that are deployed together in a single WebLogic cluster You can also deploy the static HTTP pages and servlets to a separate WebLogic cluster and the EJBs to another WebLogic cluster For more information about WebLogic architectural patterns see the Oracle WebLogic Server documentation This reference architecture includes a WebLogic domain with one Administrative Server and multiple Managed Servers These Managed Servers are part of a WebLogic cluster and are deployed on EC2 instances (WebLogic Machines) across two Availability Zones for high availability The application is deployed to the Managed Servers in the cluster that spans the two Availability Zones Amazon EFS is used for shared storage ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 9 AWS Availability Zones The AWS Cloud infrastructure is built around AWS Regions and Availability Zones AWS Regions provide multiple physically se parated and isolated Availability Zones which are connected with low latency high throughput and highly redundant networking Availability Zones consist of one or more discrete data centers each with redundant power networking and connectivity and housed in separate facilities as shown in the following diagram These Availability Zones enable you to operate production applications and databases that are more highly available fault tolerant and scalable than is possible from a single data center You can deploy your application on EC2 instances across multiple zones In the unlikely event of failure of one Availability Zone user requests are routed to your application instances in the second zone This ensures that your application continues to rem ain available at all times Traffic Distribution and Load Balancing Amazon Route 53 DNS is used to direct users to your application deployed on Oracle WebLogic on AWS Elastic Load Balancing (ELB) is used to distribute incoming requests across the WebLogic Managed Servers deployed on Amazon EC2 instances in multiple Availability Zones 
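As an illustration of that load balancing layer, the following AWS CLI sketch creates an Application Load Balancer target group that health checks the Managed Servers on their listen port and registers two instances. The port, VPC ID, instance IDs, and health check path are assumptions for this sketch, not values prescribed by this paper.
# Target group that health checks the Managed Servers (port 8001 and /myapp/health assumed).
aws elbv2 create-target-group --name weblogic-managed --protocol HTTP --port 8001 \
  --vpc-id vpc-0123456789abcdef0 --health-check-path /myapp/health
# Register the EC2 instances that host the Managed Servers (IDs are placeholders).
aws elbv2 register-targets --target-group-arn <target-group-arn> \
  --targets Id=i-0123456789abcdef0 Id=i-0fedcba9876543210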
The load balancer serves as a single point of contact for client requests which enables you to increase the availability of your application You can add and remove WebLogic Managed Server instances from your load balancer as your needs change either manually or with Auto Scaling without disrupting the overall flow of information ELB ensures that only healthy ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 10 instances receive traffic by detecting unhealthy instances and rerouting traffic across the re maining healthy instances If an instance fails ELB automatically reroutes the traffic to the remai ning running instances If a fai led instance is restored ELB restores the traffic to that instance Use Multiple Availability Zones for High Availability Each Availability Zone is isolated from other Availability Zones and runs on its own physically distinct independent infrastructure The likelihood of two Availability Zones experiencing a failure at the same time is relatively small To ensure high availability of your application you can deploy your WebLogic Managed Server instances across multiple Availability Zones You then deploy your application on the Managed Servers in the WebLogic cluster which spans two Availability Zones In the unlikely event of an Availability Zone failure user requests to the zone with the failure are routed by Elastic Load Balancing to t he Managed Servers deployed in the second Availability Zone This ensures that your application continues to remain available regardless of a zone failure You can configure WebLogic to replicate the HTTP session state in memory to another Managed Server in the WebLogic cluster WebLogic tracks the location of the Managed Server s hosting the primary and the replica of the session state using a cookie If the Managed Server hosting the primary copy of the session state fails WebLogic can retrieve th e HTTP session state from the replica For more information about HTTP session state replication see the Oracle WebLogic documentation For shared storage you can use Amazon EFS which is designed to be highly available and durable Your data in Amazon EFS is redundantly stored across multiple Availability Zones which means that your data is available if there is an Availability Zone failure For information a bout how to use Amazon EFS for shared storage see the Shared Storage section Administration Server High Availability The Administration Server is used to configure manage and monitor the resources in the domain including the Managed Server instances Because the failure of the Administration Server does not affect the functioning of the Managed Servers in the domain the Managed Servers continue to run and you r ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 11 application is still available However if the Administration Server fails the WebLogic administration console is unavailable and you cannot make changes to the domain configuration If the underlying host for the Administration Server experiences a failure you can use the Amazon EC2 Auto Recovery feature to recover the failed server instances When using Amazon EC2 Auto Recovery several system status checks monitor the instance and the other components that need to be running for your instance to function as expected Among other th ings the system status checks look for loss of network connectivity loss of system power software issues on the physical host and hardware issues on the physical host If a system status check of the underlying hardware fails the 
instance will be rebooted (on new hardware, if necessary) but will retain its instance ID, IP address, Elastic IP addresses, EBS volume attachments, and other configuration details.
Another option is to put the Administration Server instance in an Auto Scaling group that spans multiple Availability Zones and set the minimum and maximum size of the group to one. Auto Scaling then ensures that an instance of the Administration Server is running in the selected Availability Zones. This solution ensures high availability of the Administration Server if a zone failure occurs.
Storage
If you use file-based persistence, you must have storage for the WebLogic product binaries, common files and scripts, the domain configuration files, logs, and the persistence stores for JMS and JTA. You can use either shared storage or Amazon EBS volumes to store these files.
Shared Storage
To store the shared files related to your WebLogic deployment, you can use Amazon EFS, which supports NFSv4 and is mounted by all the instances that are part of the WebLogic cluster. In the reference architecture, we use Amazon EFS for shared storage. The WebLogic product binaries, common files and scripts, the domain configuration files, and logs are stored in Amazon EFS, which includes the commons, domains, middleware, and logs file systems. This table describes each of these file systems.
File System   Description
commons       For common files such as installation files, response files, and scripts
domains       For WebLogic domain files such as configuration, runtime, and temporary files
middleware    For binaries such as the Java VM and the Oracle WebLogic installation
logs          For log files
Amazon EFS has two throughput modes for your file system: Bursting Throughput and Provisioned Throughput. With Bursting Throughput mode, throughput on Amazon EFS scales as your file system grows. With Provisioned Throughput mode, you can instantly provision the throughput of your file system in MiB/s, independent of the amount of data stored. For better performance, we recommend that you select Provisioned Throughput mode while using Amazon EFS. With Provisioned Throughput mode, you can provision up to 1,024 MiB/s of throughput for your file system, and you can change the file system throughput at any time after you create the file system. If you are deploying your application in a Region where Amazon EFS is not yet available, there are several third-party products from vendors such as NetApp and SoftNAS available in the AWS Marketplace that offer a shared storage solution on AWS.
Amazon EBS Volumes
In this reference architecture, we use Amazon EFS for shared storage. You can also deploy Oracle WebLogic on AWS without using shared storage; instead, you can use Amazon EBS volumes attached to your Amazon EC2 instances for storage. Make sure to select the General Purpose (gp2) volume type for storing the WebLogic product binaries, common files and scripts, the domain configuration files, and logs. gp2 volumes are backed by solid state drives (SSDs), are designed to offer single-digit millisecond latencies, and are suitable for use with Oracle WebLogic.
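For reference, a minimal shell sketch of mounting the four file systems on an instance is shown below; the file system IDs and the us-east-1 Region are placeholders, and the mount options are the standard NFSv4.1 options recommended for EFS.
# Mount the commons, domains, middleware, and logs EFS file systems (IDs are placeholders).
sudo mkdir -p /commons /domains /middleware /logs
for entry in fs-11111111:/commons fs-22222222:/domains fs-33333333:/middleware fs-44444444:/logs; do
  fsid="${entry%%:*}"; mountpoint="${entry##*:}"
  sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    "${fsid}.efs.us-east-1.amazonaws.com:/" "${mountpoint}"
done
# Add matching entries to /etc/fstab so the file systems are remounted at boot.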
Scalability
When you use AWS, you can scale your application easily because of the elastic nature of the cloud. You can scale your application vertically and horizontally.
Vertical Scaling
You can vertically scale, or scale up, your application simply by changing the EC2 instance type on which your WebLogic Managed Servers are deployed to a larger instance type and then increasing the WebLogic JVM heap size. You can modify the Java heap size with the -Xms (initial heap size) and -Xmx (maximum heap size) parameters. Ideally, you should set both the initial heap size (-Xms) and the maximum heap size (-Xmx) to the same value to minimize garbage collections and optimize performance. For example, you can start with an r4.large instance with 2 vCPUs and 15.25 GiB RAM and scale up all the way to an x1e.32xlarge instance with 128 vCPUs and 3,904 GiB RAM. For the most up-to-date list of Amazon EC2 instance types, see the Amazon EC2 Instance Types page on the AWS website. After you select a new instance type, you simply restart the instance for the changes to take effect. Typically, the resizing operation is completed in a few minutes; the Amazon EBS volumes remain attached to the instances, and no data migration is required.
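As a sketch of that heap configuration, one common approach in WebLogic 12c is to override USER_MEM_ARGS in the domain's setUserOverrides.sh; the domain path and the 8 GiB size below are assumptions for illustration.
# Give every server started from this domain a fixed-size heap (-Xms equal to -Xmx).
cat >> /domains/mydomain/bin/setUserOverrides.sh <<'EOF'
export USER_MEM_ARGS="-Xms8g -Xmx8g"
EOF
Restart the Managed Servers for the new heap settings to take effect.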
terminate EC2 instances —or WebLogic Machines —based on the application workload 3 Configure WebLogic scaling scripts – Finally you c reate WebLogic Scripting Tool (WLST) scripts These scripts create and add or remove the Managed Servers from the WebLogic cluster when AWS Auto Scaling launches or terminates EC2 instances in the auto scaling group Configure Oracle WebLogic To configure Oracle WebLogic and setup shared storage you must complete these high level steps 1 Create the commons domains middleware and logs file systems on Amazon EFS as described in the Shared Storage section 2 Create an EC2 instance for deploying the WebLogic Administration Server and mount the EFS file systems In the reference architecture we have created the following direc tory structure to store the WebLogic binaries domain configurations common scripts and logs ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 17 3 Install Oracle WebLogic The ORACLE_HOME directory should be located on a shared folder (/middleware) on EFS 4 Create the WebLogic domain You can use the Basic WebLogic Server Domain Template in the /templates/wls/wlsjar' directory to create the domain 5 Create a WebLogic cluster in the domain and set the cluster messaging mode to Unicast Config ure AWS Auto Scali ng To configure AWS Auto Scaling to launch and terminate EC2 instances (or WebLogic Machines ) based on the application load you must complete the following high level steps For more details on Auto Scaling see the Amazon EC2 Auto Scaling documentation on the AWS website 1 Create a launch configuration and an Auto Scaling group 2 Create the scale in and scale out policies For example you can create a scaling policy to add an instance when the CPU utilization is >80 % and to remove an instance when the CPU utilization is <60 % 3 If you are using inmemory session persistence Oracle WebLogic replicates the session data to another Manage d Server in the cluster You should ensure that the Auto Scaling s cale down process terminate s only one Managed Server at a time to make sure you do not destroy the master and the replica of the session at the same time For detailed step bystep instruc tions on how to configure Auto Scaling see the Amazon EC2 Auto Scaling documentation on the AWS website Configure WebLogic Scaling Scripts Based on the traffic to your application Auto Scaling can create and add new EC2 instances (scaling out) or remove existing EC2 instances (scaling in) from your auto scaling group You must create the following scripts to automate the configuration of WebLogic in an auto scaled environment • EC2 configuration scripts – These script s mount the EFS filesystems invoke the WLST scripts to configure and start the WebLogic Managed Server on the start up of the EC2 instance and invoke the WLST scripts to stop the WebLogic Managed Server on shutdown of the EC2 instance ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 18 You can pass this script with the EC2 user data For detailed information see the Amazon EC2 documentation on t he AWS website • WebLogic Scripting Tool (WLST ) scripts – WLST is a command line scripting interface used to manage WebLogic Server instances and domains These scripts create and add the Manage d Server to your WebLogic cluster when Auto Scaling adds a new EC2 instance to the Auto Scaling group These scripts also stop and remove the Managed Server from your WebLogic cluster when Auto Scaling removes the EC2 instance from the Auto Scaling group For more information see the Oracle WLST 
documentation ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 19 Monitoring your Infrastructure After you migrate your Oracle WebLogic application s to AWS you can continue to use the monitoring tools you are familiar with to monitor your Oracle WebLogic environment and the application you deployed on WebLogic You can use Fusion Middleware Control the Oracle WebLogic Server Administration Console or the command line (using the WSLT state command) to monitor your Oracle WebLogic infrastructure components This includes WebLogic domains Managed Servers and clusters You can also monitor the Java applications deployed and get information such as the state of your application the number of active sessions and response times For more information about how to monitor Oracle WebLogic see the Oracle WebLogic documentation You can also use Amazon CloudWatch to monitor AWS Cloud resources and the applications you run on AWS Amazon CloudWatch enables you to monitor your AWS resources in near real time including Amazon EC2 instances Amazon EBS volumes Amazon EF S ELB load balancers and Amazon RDS DB instances Metrics such as CPU utilization latency and request counts are provided automatically for these AWS resources You can also supply your own logs or custom application and system metrics such as memory usage transaction volumes or error rates which Amazon CloudWatch will also monitor With Amazon CloudWatch alarms you can set a threshold on metrics and trigger an action when that threshold is exceeded For example you can create an alarm that is tri ggered when the CPU utilization on an EC2 instance crosses a threshold You can also configure a notification of the event to be sent through SMS or email Real time alarm s for metrics and events enable you to minimize downtime and potential business impact If your application uses a database deployed on Amazon RDS y ou can use the Enhanced Monitoring feature of Amazon RDS to monitor your database Enhanced Monitoring gives you access to over 50 metrics including CPU memory file system and disk I/O You can also view the processes running on the DB instance and their related metrics including percentage of CPU usage and memory usage ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 20 AWS Security and Compliance The AWS Cloud security infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today Security on AWS is very similar to security in your on premises data center but without the costs and complexities invol ved in protecting facilities and hardware AWS provides a secure global infrastructure plus a range of features that you can use to help secure your systems and data in the cloud To learn more about AWS Security see the AWS Security Center AWS Compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud AWS engages with external certifying bodies and independent auditors to provide c ustomers with extensive information regarding the policies processes and controls established and operated by AWS To learn more about AWS Compliance see the AWS Compliance Center The AWS Security Mode l The AWS infrastructure has been architected to provide an extremely scalable highly reliable platform that enables you to deploy applications and data quickly and securely Security in the cloud is different than security in your on premises data center s When you move computer systems and data to the cloud security 
responsibilities become shared between you and your cloud service provider In the AWS cloud model AWS is responsible for securing the underlying infrastructure that supports the cloud and you are responsible for securing workloads that you deploy in AWS This shared security responsibility model can reduce your operational burden in many ways and gives you the flexibility you need to implement the most applicable security controls for you r business functions in the AWS environment ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 21 Figure 6: The AWS shared responsibility model When you deploy Oracle WebLogic applications on AWS we recommend that you take advantage of the various security features of AWS such as AWS Identity and Access Management monitoring and logging network security and data encryption AWS Identity and Access Management With AWS Identity and Access Management (IAM) you can centrally manage your users and their security credentials such as passwords access keys and permissions policies which control the AWS services and resources that users can access IAM supports multifactor authentication (MFA) for privileged accounts including options for hardware based authenticators and support for integration and federation with corporate directories to reduce administrative overhead and improve end user experience Monitoring and Logging AWS CloudTrail is a service that records AWS API calls for your account and delivers log files to you The recorded information in the log files includes the identity of the API caller the time of the API call the source IP address of the API caller the request parameters and the response elements returned by the AWS service This provides deep visibility into API calls including who what when and from where calls were made The AWS API call history produced by ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 22 CloudTrail enables security analysis resource change tracking and compliance auditing Network Security and Amazon Virtual Private Cloud In each Amazon Virtual Private Cloud (VPC) you create one or more subnets Each instance you launch in your VPC is connected to one subnet Traditional layer 2 security attacks including MAC spoofing and ARP spoofing are blocked You can configure network ACLs which are stateless traffic filters that apply to all inbound or outbound traffic from a subnet within your VPC These ACLs can contain ordered r ules to allow or deny traffic based on IP protocol by service port and by source and destination IP address Security groups are a complete firewall solution that enable filtering on both ingress and egress traffic from an instance Traffic can be restri cted by any IP protocol by service port as well as source and destination IP address (individual IP address or classless inter domain routing (CIDR) block) Data Encryption AWS offers you the ability to add a layer of security to your data at rest in the cloud by providing scalable and efficient encryption features Data encryption capabilities are available in AWS storage and database services such as Amazon EBS Amazon S3 Amazon Glacier Amazon RDS for Oracle Amazon RDS for SQL Server and Amazon Re dshift Flexible key management options allow you to choose whether to have AWS manage the encryption keys using the AWS Key Management Service o (AWS KMS) or to maintain complete control over your keys Dedicated hardware based cryptographic key storage options (AWS CloudHSM) are available to help you satisfy compliance requirements For more 
information see the Introduction to AWS Security and AWS Security Best Practices whitepapers ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 23 Oracle WebLogic on AWS Use Cases Oracle WebLogic customers use AWS for a variety of use cases including these environments: • Migration of existing Oracle WebLogic production environments • Implementation of new Oracle WebLogic production environments • Implementing disaster recovery environments • Running Oracle WebLogic development test demonstration proof of concept (POC) and t raining environments • Temporary environments for migrations and testing upgrades • Temporary environments for performance testing ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 24 Conclusion AWS can be an extremely cost effective secure scalable high perform ing and flexible option for deploying Oracle WebLogic applications By deploying Oracle WebLogic applications on the AWS Cloud you can reduce costs and simultaneously enable capabilities that might not be possible or cost effective if you deployed your application in an on premises data center Some of the benefits of deploying Oracle WebLogic on AWS include: • Low cost – Resources are billed by the hour and only for the duration they are used • Eliminate the need for large capital outlays – Replace large upfront expenses with low variable payments that only apply to what you use • High availability – Achieve high availability by deploying Oracle WebLogic in a Multi AZ configuration • Flexibility –Add compute capacity elastically to cope with demand • Testing – Add test environments use them for short durations and pay only for the duration they are used ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 25 Contributors The following individuals and organizations contributed to this document: Ashok Sundaram Solutions Architect Amazon Web Services Document Revisions Date Description December 2018 First publication
|
General
|
consultant
|
Best Practices
|
Overview_of_Amazon_Web_Services
|
Overview of Amazon Web Services AWS Whitepaper Overview of Amazon Web Services AWS Whitepaper Overview of Amazon Web Services: AWS Whitepaper Copyright © Amazon Web Services Inc and/or its affiliates All rights reserved Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's in any manner that is likely to cause confusion among customers or in any manner that disparages or discredits Amazon All other trademarks not owned by Amazon are the property of their respective owners who may or may not be affiliated with connected to or sponsored by AmazonOverview of Amazon Web Services AWS Whitepaper Table of Contents Overview of Amazon Web Services1 Abstract1 Introduction1 What Is Cloud Computing? 2 Six Advantages of Cloud Computing3 Types of Cloud Computing4 Cloud Computing Models 4 Infrastructure as a Service (IaaS)4 Platform as a Service (PaaS)4 Software as a Service (SaaS)4 Cloud Computing Deployment Models4 Cloud 4 Hybrid 5 Onpremises5 Global Infrastructure6 Security and Compliance7 Security7 Benefits of AWS Security7 Compliance7 Amazon Web Services Cloud9 AWS Management Console9 AWS Command Line Interface9 Software Development Kits10 Analytics10 Amazon Athena10 Amazon CloudSearch10 Amazon Elasticsearch Service11 Amazon EMR11 Amazon FinSpace11 Amazon Kinesis11 Amazon Kinesis Data Firehose12 Amazon Kinesis Data Analytics12 Amazon Kinesis Data Streams12 Amazon Kinesis Video Streams12 Amazon Redshift12 Amazon QuickSight13 AWS Data Exchange13 AWS Data Pipeline13 AWS Glue13 AWS Lake Formation14 Amazon Managed Streaming for Apache Kafka (Amazon MSK)14 Application Integration 14 AWS Step Functions15 Amazon AppFlow15 Amazon EventBridge15 Amazon Managed Workflows for Apache Airflow (MWAA)15 Amazon MQ16 Amazon Simple Notification Service16 Amazon Simple Queue Service16 Amazon Simple Workflow Service16 AR and VR 16 Amazon Sumerian17 Blockchain17 Amazon Managed Blockchain17 iiiOverview of Amazon Web Services AWS Whitepaper Business Applications 17 Alexa for Business 18 Amazon Chime18 Amazon SES18 Amazon WorkDocs18 Amazon WorkMail18 Cloud Financial Management 19 AWS Application Cost Profiler19 AWS Cost Explorer19 AWS Budgets19 AWS Cost & Usage Report19 Reserved Instance (RI) Reporting20 Savings Plans20 Compute Services20 Amazon EC220 Amazon EC2 Auto Scaling21 Amazon EC2 Image Builder21 Amazon Lightsail22 AWS App Runner22 AWS Batch22 AWS Elastic Beanstalk22 AWS Fargate22 AWS Lambda23 AWS Serverless Application Repository23 AWS Outposts23 AWS Wavelength23 VMware Cloud on AWS24 Contact Center24 Amazon Connect24 Containers 25 Amazon Elastic Container Registry25 Amazon Elastic Container Service25 Amazon Elastic Kubernetes Service25 AWS App2Container25 Red Hat OpenShift Service on AWS26 Database 26 Amazon Aurora26 Amazon DynamoDB26 Amazon ElastiCache27 Amazon Keyspaces (for Apache Cassandra)27 Amazon Neptune27 Amazon Relational Database Service28 Amazon RDS on VMware28 Amazon Quantum Ledger Database (QLDB)28 Amazon Timestream29 Amazon DocumentDB (with MongoDB compatibility)29 Developer Tools29 Amazon Corretto29 AWS Cloud929 AWS CloudShell30 AWS CodeArtifact30 AWS CodeBuild30 AWS CodeCommit30 AWS CodeDeploy30 AWS CodePipeline30 AWS CodeStar31 AWS Fault Injection Simulator31 ivOverview of Amazon Web Services AWS Whitepaper AWS XRay31 End User Computing 31 Amazon AppStream 2032 Amazon WorkSpaces32 Amazon WorkLink32 FrontEnd Web & Mobile Services32 Amazon Location Service33 Amazon Pinpoint33 AWS Amplify33 AWS Device Farm34 AWS AppSync34 Game Tech34 Amazon 
GameLift34 Amazon Lumberyard34 Internet of Things (IoT)34 AWS IoT 1Click35 AWS IoT Analytics35 AWS IoT Button36 AWS IoT Core36 AWS IoT Device Defender36 AWS IoT Device Management37 AWS IoT Events37 AWS IoT Greengrass37 AWS IoT SiteWise37 AWS IoT Things Graph38 AWS Partner Device Catalog38 FreeRTOS38 Machine Learning 39 Amazon Augmented AI40 Amazon CodeGuru40 Amazon Comprehend40 Amazon DevOps Guru40 Amazon Elastic Inference41 Amazon Forecast41 Amazon Fraud Detector42 Amazon HealthLake42 Amazon Kendra42 Amazon Lex42 Amazon Lookout for Equipment43 Amazon Lookout for Metrics43 Amazon Lookout for Vision43 Amazon Monitron43 Amazon Personalize44 Amazon Polly44 Amazon Rekognition44 Amazon SageMaker45 Amazon SageMaker Ground Truth45 Amazon Textract46 Amazon Transcribe46 Amazon Translate46 Apache MXNet on AWS46 AWS Deep Learning AMIs47 AWS DeepComposer47 AWS DeepLens47 AWS DeepRacer47 AWS Inferentia47 TensorFlow on AWS48 vOverview of Amazon Web Services AWS Whitepaper Management and Governance48 Amazon CloudWatch48 AWS Auto Scaling49 AWS Chatbot49 AWS Compute Optimizer49 AWS Control Tower49 AWS CloudFormation50 AWS CloudTrail50 AWS Config50 AWS Launch Wizard51 AWS Organizations51 AWS OpsWorks51 AWS Proton51 AWS Service Catalog51 AWS Systems Manager52 AWS Trusted Advisor53 AWS Personal Health Dashboard53 AWS Managed Services53 AWS Console Mobile Application53 AWS License Manager54 AWS WellArchitected Tool54 Media Services54 Amazon Elastic Transcoder55 Amazon Interactive Video Service55 Amazon Nimble Studio55 AWS Elemental Appliances & Software55 AWS Elemental MediaConnect55 AWS Elemental MediaConvert56 AWS Elemental MediaLive56 AWS Elemental MediaPackage56 AWS Elemental MediaStore56 AWS Elemental MediaTailor56 Migration and Transfer57 AWS Application Migration Service57 AWS Migration Hub57 AWS Application Discovery Service57 AWS Database Migration Service58 AWS Server Migration Service58 AWS Snow Family58 AWS DataSync59 AWS Transfer Family59 Networking and Content Delivery60 Amazon API Gateway60 Amazon CloudFront60 Amazon Route 5360 Amazon VPC61 AWS App Mesh61 AWS Cloud Map62 AWS Direct Connect62 AWS Global Accelerator62 AWS PrivateLink63 AWS Transit Gateway63 AWS VPN63 Elastic Load Balancing 63 Quantum Technologies64 Amazon Braket64 Robotics64 viOverview of Amazon Web Services AWS Whitepaper AWS RoboMaker64 Satellite 65 AWS Ground Station65 Security Identity and Compliance65 Amazon Cognito66 Amazon Cloud Directory66 Amazon Detective67 Amazon GuardDuty67 Amazon Inspector67 Amazon Macie68 AWS Artifact68 AWS Audit Manager68 AWS Certificate Manager68 AWS CloudHSM69 AWS Directory Service69 AWS Firewall Manager69 AWS Identity and Access Management69 AWS Key Management Service70 AWS Network Firewall70 AWS Resource Access Manager70 AWS Secrets Manager71 AWS Security Hub71 AWS Shield71 AWS Single SignOn72 AWS WAF72 Storage 72 Amazon Elastic Block Store72 Amazon Elastic File System73 Amazon FSx for Lustre73 Amazon FSx for Windows File Server73 Amazon Simple Storage Service74 Amazon S3 Glacier74 AWS Backup74 AWS Storage Gateway74 Next Steps75 Conclusion 75 Resources76 Document Details 77 Contributors 77 Document Revisions77 AWS glossary78 viiOverview of Amazon Web Services AWS Whitepaper Abstract Overview of Amazon Web Services Publication date: August 5 2021 (Document Details (p 77)) Abstract Amazon Web Services offers a broad set of global cloudbased products including compute storage databases analytics networking mobile developer tools management tools IoT security and enterprise applications: ondemand 
available in seconds with payasyougo pricing From data warehousing to deployment tools directories to content delivery over 200 AWS services are available New services can be provisioned quickly without the upfront capital expense This allows enterprises startups small and mediumsized businesses and customers in the public sector to access the building blocks they need to respond quickly to changing business requirements This whitepaper provides you with an overview of the benefits of the AWS Cloud and introduces you to the services that make up the platform Introduction In 2006 Amazon Web Services (AWS) began offering IT infrastructure services to businesses as web services—now commonly known as cloud computing One of the key benefits of cloud computing is the opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with your business With the cloud businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance Instead they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster Today AWS provides a highly reliable scalable lowcost infrastructure platform in the cloud that powers hundreds of thousands of businesses in 190 countries around the world 1Overview of Amazon Web Services AWS Whitepaper What Is Cloud Computing? Cloud computing is the ondemand delivery of compute power database storage applications and other IT resources through a cloud services platform via the Internet with payasyougo pricing Whether you are running applications that share photos to millions of mobile users or you’re supporting the critical operations of your business a cloud services platform provides rapid access to flexible and lowcost IT resources With cloud computing you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware Instead you can provision exactly the right type and size of computing resources you need to power your newest bright idea or operate your IT department You can access as many resources as you need almost instantly and only pay for what you use Cloud computing provides a simple way to access servers storage databases and a broad set of application services over the Internet A cloud services platform such as Amazon Web Services owns and maintains the networkconnected hardware required for these application services while you provision and use what you need via a web application 2Overview of Amazon Web Services AWS Whitepaper Six Advantages of Cloud Computing •Trade capital expense for variable expense – Instead of having to invest heavily in data centers and servers before you know how you’re going to use them you can pay only when you consume computing resources and pay only for how much you consume •Benefit from massive economies of scale – By using cloud computing you can achieve a lower variable cost than you can get on your own Because usage from hundreds of thousands of customers is aggregated in the cloud providers such as AWS can achieve higher economies of scale which translates into lower pay asyougo prices •Stop guessing capacity – Eliminate guessing on your infrastructure capacity needs When you make a capacity decision prior to deploying an application you often end up either sitting on expensive idle resources or dealing with limited capacity With cloud computing these problems go away You can access as much or as little capacity as you need and scale up and down as required 
with only a few minutes’ notice •Increase speed and agility – In a cloud computing environment new IT resources are only a click away which means that you reduce the time to make those resources available to your developers from weeks to just minutes This results in a dramatic increase in agility for the organization since the cost and time it takes to experiment and develop is significantly lower •Stop spending money running and maintaining data centers – Focus on projects that differentiate your business not the infrastructure Cloud computing lets you focus on your own customers rather than on the heavy lifting of racking stacking and powering servers •Go global in minutes – Easily deploy your application in multiple regions around the world with just a few clicks This means you can provide lower latency and a better experience for your customers at minimal cost 3Overview of Amazon Web Services AWS Whitepaper Cloud Computing Models Types of Cloud Computing Cloud computing provides developers and IT departments with the ability to focus on what matters most and avoid undifferentiated work such as procurement maintenance and capacity planning As cloud computing has grown in popularity several different models and deployment strategies have emerged to help meet specific needs of different users Each type of cloud service and deployment method provides you with different levels of control flexibility and management Understanding the differences between Infrastructure as a Service Platform as a Service and Software as a Service as well as what deployment strategies you can use can help you decide what set of services is right for your needs Cloud Computing Models Infrastructure as a Service (IaaS) Infrastructure as a Service (IaaS) contains the basic building blocks for cloud IT and typically provides access to networking features computers (virtual or on dedicated hardware) and data storage space IaaS provides you with the highest level of flexibility and management control over your IT resources and is most similar to existing IT resources that many IT departments and developers are familiar with today Platform as a Service (PaaS) Platform as a Service (PaaS) removes the need for your organization to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications This helps you be more efficient as you don’t need to worry about resource procurement capacity planning software maintenance patching or any of the other undifferentiated heavy lifting involved in running your application Software as a Service (SaaS) Software as a Service (SaaS) provides you with a completed product that is run and managed by the service provider In most cases people referring to Software as a Service are referring to enduser applications With a SaaS offering you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that particular piece of software A common example of a SaaS application is webbased email which you can use to send and receive email without having to manage feature additions to the email product or maintain the servers and operating systems that the email program is running on Cloud Computing Deployment Models Cloud A cloudbased application is fully deployed in the cloud and all parts of the application run in the cloud Applications in the cloud have either been created in the cloud or have been migrated from an 
existing infrastructure to take advantage of the benefits of cloud computing Cloudbased applications can be built on lowlevel infrastructure pieces or can use higher level services that provide abstraction from the management architecting and scaling requirements of core infrastructure 4Overview of Amazon Web Services AWS Whitepaper Hybrid Hybrid A hybrid deployment is a way to connect infrastructure and applications between cloudbased resources and existing resources that are not located in the cloud The most common method of hybrid deployment is between the cloud and existing onpremises infrastructure to extend and grow an organization's infrastructure into the cloud while connecting cloud resources to the internal system For more information on how AWS can help you with your hybrid deployment visit our Hybrid Cloud with AWS page Onpremises The deployment of resources onpremises using virtualization and resource management tools is sometimes called the “private cloud” Onpremises deployment doesn’t provide many of the benefits of cloud computing but is sometimes sought for its ability to provide dedicated resources In most cases this deployment model is the same as legacy IT infrastructure while using application management and virtualization technologies to try and increase resource utilization For more information on how AWS can help see Use case: Cloud services onpremises 5Overview of Amazon Web Services AWS Whitepaper Global Infrastructure AWS serves over a million active customers in more than 240 countries and territories We are steadily expanding global infrastructure to help our customers achieve lower latency and higher throughput and to ensure that their data resides only in the AWS Region they specify As our customers grow their businesses AWS will continue to provide infrastructure that meets their global requirements The AWS Cloud infrastructure is built around AWS Regions and Availability Zones An AWS Region is a physical location in the world where we have multiple Availability Zones Availability Zones consist of one or more discrete data centers each with redundant power networking and connectivity housed in separate facilities These Availability Zones offer you the ability to operate production applications and databases that are more highly available fault tolerant and scalable than would be possible from a single data center The AWS Cloud operates in 80 Availability Zones within 25 geographic Regions around the world with announced plans for more Availability Zones and Regions For more information on the AWS Cloud Availability Zones and AWS Regions see AWS Global Infrastructure Each Amazon Region is designed to be completely isolated from the other Amazon Regions This achieves the greatest possible fault tolerance and stability Each Availability Zone is isolated but the Availability Zones in a Region are connected through lowlatency links AWS provides you with the flexibility to place instances and store data within multiple geographic regions as well as across multiple Availability Zones within each AWS Region Each Availability Zone is designed as an independent failure zone This means that Availability Zones are physically separated within a typical metropolitan region and are located in lower risk flood plains (specific flood zone categorization varies by AWS Region) In addition to discrete uninterruptible power supply (UPS) and onsite backup generation facilities data centers located in different Availability Zones are designed to be supplied by independent 
substations to reduce the risk of an event on the power grid impacting more than one Availability Zone Availability Zones are all redundantly connected to multiple tier1 transit providers 6Overview of Amazon Web Services AWS Whitepaper Security Security and Compliance Security Cloud security at AWS is the highest priority As an AWS customer you will benefit from a data center and network architecture built to meet the requirements of the most securitysensitive organizations Security in the cloud is much like security in your onpremises data centers—only without the costs of maintaining facilities and hardware In the cloud you don’t have to manage physical servers or storage devices Instead you use softwarebased security tools to monitor and protect the flow of information into and out of your cloud resources An advantage of the AWS Cloud is that it allows you to scale and innovate while maintaining a secure environment and paying only for the services you use This means that you can have the security you need at a lower cost than in an onpremises environment As an AWS customer you inherit all the best practices of AWS policies architecture and operational processes built to satisfy the requirements of our most securitysensitive customers Get the flexibility and agility you need in security controls The AWS Cloud enables a shared responsibility model While AWS manages security of the cloud you are responsible for security in the cloud This means that you retain control of the security you choose to implement to protect your own content platform applications systems and networks no differently than you would in an onsite data center AWS provides you with guidance and expertise through online resources personnel and partners AWS provides you with advisories for current issues plus you have the opportunity to work with AWS when you encounter security issues You get access to hundreds of tools and features to help you to meet your security objectives AWS provides securityspecific tools and features across network security configuration management access control and data encryption Finally AWS environments are continuously audited with certifications from accreditation bodies across geographies and verticals In the AWS environment you can take advantage of automated tools for asset inventory and privileged access reporting Benefits of AWS Security •Keep Your Data Safe: The AWS infrastructure puts strong safeguards in place to help protect your privacy All data is stored in highly secure AWS data centers •Meet Compliance Requirements: AWS manages dozens of compliance programs in its infrastructure This means that segments of your compliance have already been completed •Save Money: Cut costs by using AWS data centers Maintain the highest standard of security without having to manage your own facility •Scale Quickly: Security scales with your AWS Cloud usage No matter the size of your business the AWS infrastructure is designed to keep your data safe Compliance AWS Cloud Compliance enables you to understand the robust controls in place at AWS to maintain security and data protection in the cloud As systems are built on top of AWS Cloud infrastructure 7Overview of Amazon Web Services AWS Whitepaper Compliance compliance responsibilities will be shared By tying together governancefocused auditfriendly service features with applicable compliance or audit standards AWS Compliance enablers build on traditional programs This helps customers to establish and operate in an AWS security control environment 
The IT infrastructure that AWS provides to its customers is designed and managed in alignment with best security practices and a variety of IT security standards. The following is a partial list of assurance programs with which AWS complies:
•SOC 1/ISAE 3402, SOC 2, SOC 3
•FISMA, DIACAP, and FedRAMP
•PCI DSS Level 1
•ISO 9001, ISO 27001, ISO 27017, ISO 27018
AWS provides customers a wide range of information on its IT control environment in whitepapers, reports, certifications, accreditations, and other third-party attestations. More information is available in the Risk and Compliance whitepaper and the AWS Security Center.
Amazon Web Services Cloud
Topics
•AWS Management Console (p 9)
•AWS Command Line Interface (p 9)
•Software Development Kits (p 10)
•Analytics (p 10)
•Application Integration (p 14)
•AR and VR (p 16)
•Blockchain (p 17)
•Business Applications (p 17)
•Cloud Financial Management (p 19)
•Compute Services (p 20)
•Contact Center (p 24)
•Containers (p 25)
•Database (p 26)
•Developer Tools (p 29)
•End User Computing (p 31)
•Front-End Web & Mobile Services (p 32)
•Game Tech (p 34)
•Internet of Things (IoT) (p 34)
•Machine Learning (p 39)
•Management and Governance (p 48)
•Media Services (p 54)
•Migration and Transfer (p 57)
•Networking and Content Delivery (p 60)
•Quantum Technologies (p 64)
•Robotics (p 64)
•Satellite (p 65)
•Security, Identity, and Compliance (p 65)
•Storage (p 72)
AWS Management Console
Access and manage Amazon Web Services through the AWS Management Console, a simple and intuitive user interface. You can also use the AWS Console Mobile Application to quickly view resources on the go.
AWS Command Line Interface
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
Software Development Kits
Our Software Development Kits (SDKs) simplify using AWS services in your applications with an application programming interface (API) tailored to your programming language or platform.
Analytics
Topics
•Amazon Athena (p 10)
•Amazon CloudSearch (p 10)
•Amazon Elasticsearch Service (p 11)
•Amazon EMR (p 11)
•Amazon FinSpace (p 11)
•Amazon Kinesis (p 11)
•Amazon Kinesis Data Firehose (p 12)
•Amazon Kinesis Data Analytics (p 12)
•Amazon Kinesis Data Streams (p 12)
•Amazon Kinesis Video Streams (p 12)
•Amazon Redshift (p 12)
•Amazon QuickSight (p 13)
•AWS Data Exchange (p 13)
•AWS Data Pipeline (p 13)
•AWS Glue (p 13)
•AWS Lake Formation (p 14)
•Amazon Managed Streaming for Apache Kafka (Amazon MSK) (p 14)
Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Athena is easy to use. Simply point to your data in Amazon S3, define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Athena, there's no need for complex extract, transform, and load (ETL) jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets. Athena is integrated out of the box with the AWS Glue Data Catalog, allowing you to create a unified metadata repository across various services, crawl data sources to discover schemas, populate your Catalog with new and modified table and partition definitions, and maintain schema versioning.
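As a minimal sketch of how such a query might be issued programmatically, the snippet below uses the AWS SDK for Python (Boto3) to start an Athena query and poll for its result. The bucket, database, and table names are illustrative placeholders, not values from this paper.

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Start a query against a hypothetical Glue Data Catalog database and table.
query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS requests FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "example_analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/queries/"},
)
query_id = query["QueryExecutionId"]

# Poll until Athena reports a terminal state, then print the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])

Because Athena writes query output to the S3 location you specify, the same results can also be picked up later by other tools without rerunning the query.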
Amazon CloudSearch
Amazon CloudSearch is a managed service in the AWS Cloud that makes it simple and cost-effective to set up, manage, and scale a search solution for your website or application. Amazon CloudSearch supports 34 languages and popular search features such as highlighting, autocomplete, and geospatial search.
Amazon Elasticsearch Service
Amazon Elasticsearch Service makes it easy to deploy, secure, operate, and scale Elasticsearch to search, analyze, and visualize data in real time. With Amazon Elasticsearch Service, you get easy-to-use APIs and real-time analytics capabilities to power use cases such as log analytics, full-text search, application monitoring, and clickstream analytics, with enterprise-grade availability, scalability, and security. The service offers integrations with open-source tools like Kibana and Logstash for data ingestion and visualization. It also integrates seamlessly with other AWS services such as Amazon Virtual Private Cloud (Amazon VPC), AWS Key Management Service (AWS KMS), Amazon Kinesis Data Firehose, AWS Lambda, AWS Identity and Access Management (IAM), Amazon Cognito, and Amazon CloudWatch, so that you can go from raw data to actionable insights quickly.
Amazon EMR
Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open-source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. Amazon EMR makes it easy to set up, operate, and scale your big data environments by automating time-consuming tasks like provisioning capacity and tuning clusters. With EMR you can run petabyte-scale analysis at less than half of the cost of traditional on-premises solutions and over 3x faster than standard Apache Spark. You can run workloads on Amazon EC2 instances, on Amazon Elastic Kubernetes Service (EKS) clusters, or on-premises using EMR on AWS Outposts.
Amazon FinSpace
Amazon FinSpace is a data management and analytics service purpose-built for the financial services industry (FSI). FinSpace reduces the time you spend finding and preparing petabytes of financial data to be ready for analysis from months to minutes. Financial services organizations analyze data from internal data stores like portfolio, actuarial, and risk management systems as well as petabytes of data from third-party data feeds, such as historical securities prices from stock exchanges. It can take months to find the right data, get permissions to access the data in a compliant way, and prepare it for analysis. FinSpace removes the heavy lifting of building and maintaining a data management system for financial analytics. With FinSpace, you collect data and catalog it by relevant business concepts such as asset class, risk classification, or geographic region. FinSpace makes it easy to discover and share data across your organization in accordance with your compliance requirements. You define your data access policies in one place, and FinSpace enforces them while keeping audit logs to allow for compliance and activity reporting. FinSpace also includes a library of 100+ functions, like time bars and Bollinger bands, for you to prepare data for analysis.
Amazon Kinesis
Amazon Kinesis makes it easy to collect, process, and analyze real-time streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly, instead of having to wait until all your data is collected before the processing can begin. Amazon Kinesis currently offers four services: Kinesis Data Firehose, Kinesis Data Analytics, Kinesis Data Streams, and Kinesis Video Streams.
Amazon Kinesis Data Firehose
Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you're already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, transform, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. You can easily create a Firehose delivery stream from the AWS Management Console, configure it with a few clicks, and start sending data to the stream from hundreds of thousands of data sources to be loaded continuously to AWS, all in just a few minutes. You can also configure your delivery stream to automatically convert the incoming data to columnar formats like Apache Parquet and Apache ORC before the data is delivered to Amazon S3, for cost-effective storage and analytics.
Amazon Kinesis Data Analytics
Amazon Kinesis Data Analytics is the easiest way to analyze streaming data, gain actionable insights, and respond to your business and customer needs in real time. Amazon Kinesis Data Analytics reduces the complexity of building, managing, and integrating streaming applications with other AWS services. SQL users can easily query streaming data or build entire streaming applications using templates and an interactive SQL editor. Java developers can quickly build sophisticated streaming applications using open-source Java libraries and AWS integrations to transform and analyze data in real time. Amazon Kinesis Data Analytics takes care of everything required to run your queries continuously and scales automatically to match the volume and throughput rate of your incoming data.
Amazon Kinesis Data Streams
Amazon Kinesis Data Streams is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events. The data collected is available in milliseconds to enable real-time analytics use cases such as real-time dashboards, real-time anomaly detection, dynamic pricing, and more.
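To make the producer side of this concrete, here is a minimal Boto3 sketch that writes one clickstream record to a Kinesis Data Streams stream. The stream name, event fields, and partition-key choice are assumptions for illustration only.

import json
import boto3

kinesis = boto3.client("kinesis")

# A single clickstream event written to a hypothetical stream named "example-clickstream".
event = {"user_id": "u-1234", "page": "/checkout", "ts": 1628000000}

response = kinesis.put_record(
    StreamName="example-clickstream",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],  # records with the same key land on the same shard
)
print("Stored in shard", response["ShardId"], "at sequence", response["SequenceNumber"])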
Amazon Kinesis Video Streams
Amazon Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for analytics, machine learning (ML), playback, and other processing. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming video data from millions of devices. It also durably stores, encrypts, and indexes video data in your streams and allows you to access your data through easy-to-use APIs. Kinesis Video Streams enables you to play back video for live and on-demand viewing, and to quickly build applications that take advantage of computer vision and video analytics through integration with Amazon Rekognition Video and libraries for ML frameworks such as Apache MXNet, TensorFlow, and OpenCV.
Amazon Redshift
Amazon Redshift is the most widely used cloud data warehouse. It makes it fast, simple, and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured and semi-structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. You can start small for just $0.25 per hour with no commitments and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth the cost of traditional on-premises solutions.
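One way to run such SQL programmatically, without managing database connections yourself, is the Amazon Redshift Data API, shown below as a hedged Boto3 sketch. The cluster identifier, database, user, and table are hypothetical placeholders.

import time
import boto3

rsd = boto3.client("redshift-data")

# Submit a query through the Redshift Data API against a hypothetical cluster.
stmt = rsd.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="analyst",
    Sql="SELECT event_date, SUM(revenue) FROM sales GROUP BY event_date ORDER BY event_date",
)

# The Data API is asynchronous, so poll for completion before reading results.
while rsd.describe_statement(Id=stmt["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

for record in rsd.get_statement_result(Id=stmt["Id"])["Records"]:
    print(record)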
Amazon QuickSight
Amazon QuickSight is a fast, cloud-powered business intelligence (BI) service that makes it easy for you to deliver insights to everyone in your organization. QuickSight lets you create and publish interactive dashboards that can be accessed from browsers or mobile devices. You can embed dashboards into your applications, providing your customers with powerful self-service analytics. QuickSight easily scales to tens of thousands of users without any software to install, servers to deploy, or infrastructure to manage.
AWS Data Exchange
AWS Data Exchange makes it easy to find, subscribe to, and use third-party data in the cloud. Qualified data providers include category-leading brands such as Reuters, who curate data from over 2.2 million unique news stories per year in multiple languages; Change Healthcare, who process and anonymize more than 14 billion healthcare transactions and $1 trillion in claims annually; Dun & Bradstreet, who maintain a database of more than 330 million global business records; and Foursquare, whose location data is derived from 220 million unique consumers and includes more than 60 million global commercial venues. Once subscribed to a data product, you can use the AWS Data Exchange API to load data directly into Amazon S3 and then analyze it with a wide variety of AWS analytics and machine learning services. For example, property insurers can subscribe to data to analyze historical weather patterns to calibrate insurance coverage requirements in different geographies; restaurants can subscribe to population and location data to identify optimal regions for expansion; academic researchers can conduct studies on climate change by subscribing to data on carbon dioxide emissions; and healthcare professionals can subscribe to aggregated data from historical clinical trials to accelerate their research activities. For data providers, AWS Data Exchange makes it easy to reach the millions of AWS customers migrating to the cloud by removing the need to build and maintain infrastructure for data storage, delivery, billing, and entitling.
AWS Data Pipeline
AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. With AWS Data Pipeline, you can regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3 (p 74), Amazon RDS (p 28), Amazon DynamoDB (p 26), and Amazon EMR (p 11). AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. You don't have to worry about ensuring resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premises data silos.
AWS Glue
AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. You can create and run an ETL job with a few clicks in the AWS Management Console. You simply point AWS Glue to your data stored on AWS, and AWS Glue discovers your data and stores the associated metadata (e.g., table definition and schema) in the AWS Glue Data Catalog. Once cataloged, your data is immediately searchable, queryable, and available for ETL.
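The same cataloging step can be scripted; the sketch below creates and starts a Glue crawler with Boto3 and then lists the tables it discovers. The crawler name, IAM role ARN, database, and S3 path are placeholders and would need to exist in your account.

import boto3

glue = boto3.client("glue")

# Create a crawler that catalogs objects under a hypothetical S3 prefix.
glue.create_crawler(
    Name="example-clickstream-crawler",
    Role="arn:aws:iam::123456789012:role/ExampleGlueCrawlerRole",  # hypothetical role
    DatabaseName="example_analytics_db",
    Targets={"S3Targets": [{"Path": "s3://example-data-lake/clickstream/"}]},
)
glue.start_crawler(Name="example-clickstream-crawler")

# Once the crawler finishes, the discovered tables are visible in the Data Catalog.
tables = glue.get_tables(DatabaseName="example_analytics_db")
print([t["Name"] for t in tables["TableList"]])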
AWS Lake Formation
AWS Lake Formation is a service that makes it easy to set up a secure data lake in days. A data lake is a centralized, curated, and secured repository that stores all your data, both in its original form and prepared for analysis. A data lake enables you to break down data silos and combine different types of analytics to gain insights and guide better business decisions. However, setting up and managing data lakes today involves a lot of manual, complicated, and time-consuming tasks. This work includes loading data from diverse sources, monitoring those data flows, setting up partitions, turning on encryption and managing keys, defining transformation jobs and monitoring their operation, reorganizing data into a columnar format, configuring access control settings, deduplicating redundant data, matching linked records, granting access to data sets, and auditing access over time. Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Your users can then access a centralized catalog of data which describes available data sets and their appropriate usage. Your users then leverage these data sets with their choice of analytics and machine learning services, like Amazon EMR for Apache Spark, Amazon Redshift, Amazon Athena, SageMaker, and Amazon QuickSight.
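For the access-policy part of that workflow, one possible shape of the call with Boto3's Lake Formation client is sketched below: granting a single role SELECT on one catalog table, then listing the grants on that table. The principal ARN, database, and table names are assumptions for illustration.

import boto3

lakeformation = boto3.client("lakeformation")

# Grant SELECT on a hypothetical catalog table to a hypothetical analyst role.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/ExampleAnalystRole"},
    Resource={"Table": {"DatabaseName": "example_analytics_db", "Name": "clickstream"}},
    Permissions=["SELECT"],
)

# List what has been granted on that table to verify the policy.
grants = lakeformation.list_permissions(
    Resource={"Table": {"DatabaseName": "example_analytics_db", "Name": "clickstream"}}
)
for entry in grants["PrincipalResourcePermissions"]:
    print(entry["Principal"]["DataLakePrincipalIdentifier"], entry["Permissions"])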
Amazon Managed Streaming for Apache Kafka (Amazon MSK)
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data. Apache Kafka is an open-source platform for building real-time streaming data pipelines and applications. With Amazon MSK, you can use Apache Kafka APIs to populate data lakes, stream changes to and from databases, and power machine learning and analytics applications. Apache Kafka clusters are challenging to set up, scale, and manage in production. When you run Apache Kafka on your own, you need to provision servers, configure Apache Kafka manually, replace servers when they fail, orchestrate server patches and upgrades, architect the cluster for high availability, ensure data is durably stored and secured, set up monitoring and alarms, and carefully plan scaling events to support load changes. Amazon MSK makes it easy for you to build and run production applications on Apache Kafka without needing Apache Kafka infrastructure management expertise. That means you spend less time managing infrastructure and more time building applications. With a few clicks in the Amazon MSK console you can create highly available Apache Kafka clusters with settings and configuration based on Apache Kafka's deployment best practices. Amazon MSK automatically provisions and runs your Apache Kafka clusters. Amazon MSK continuously monitors cluster health and automatically replaces unhealthy nodes with no downtime to your application. In addition, Amazon MSK secures your Apache Kafka cluster by encrypting data at rest.
Application Integration
Topics
•AWS Step Functions (p 15)
•Amazon AppFlow (p 15)
•Amazon EventBridge (p 15)
•Amazon Managed Workflows for Apache Airflow (MWAA) (p 15)
•Amazon MQ (p 16)
•Amazon Simple Notification Service (p 16)
•Amazon Simple Queue Service (p 16)
•Amazon Simple Workflow Service (p 16)
AWS Step Functions
AWS Step Functions is a fully managed service that makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale easily and change applications quickly. Step Functions is a reliable way to coordinate components and step through the functions of your application. Step Functions provides a graphical console to arrange and visualize the components of your application as a series of steps. This makes it simple to build and run multi-step applications. Step Functions automatically triggers and tracks each step and retries when there are errors, so your application runs in order and as expected. Step Functions logs the state of each step, so when things do go wrong, you can diagnose and debug problems quickly. You can change and add steps without even writing code, so you can easily evolve your application and innovate faster.
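As a minimal sketch of kicking off such a workflow from code, the example below starts an execution of a hypothetical state machine with Boto3 and then checks its status. The state machine ARN, execution name, and input payload are illustrative assumptions.

import json
import boto3

sfn = boto3.client("stepfunctions")

# Start an execution of a hypothetical order-processing state machine and pass it input.
execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:ExampleOrderWorkflow",
    name="order-42-attempt-1",  # execution names must be unique per state machine
    input=json.dumps({"orderId": "42", "priority": "standard"}),
)

# Executions run asynchronously; describe_execution reports the current status.
status = sfn.describe_execution(executionArn=execution["executionArn"])["status"]
print("Execution is", status)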
Amazon AppFlow
Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between Software-as-a-Service (SaaS) applications like Salesforce, Zendesk, Slack, and ServiceNow and AWS services like Amazon S3 and Amazon Redshift in just a few clicks. With Amazon AppFlow, you can run data flows at enterprise scale at the frequency you choose: on a schedule, in response to a business event, or on demand. You can configure data transformation capabilities like filtering and validation to generate rich, ready-to-use data as part of the flow itself, without additional steps. Amazon AppFlow automatically encrypts data in motion, and allows users to restrict data from flowing over the public Internet for SaaS applications that are integrated with AWS PrivateLink, reducing exposure to security threats.
Amazon EventBridge
Amazon EventBridge is a serverless event bus that makes it easier to build event-driven applications at scale using events generated from your applications, integrated Software-as-a-Service (SaaS) applications, and AWS services. EventBridge delivers a stream of real-time data from event sources such as Zendesk or Shopify to targets like AWS Lambda and other SaaS applications. You can set up routing rules to determine where to send your data to build application architectures that react in real time to your data sources, with event publisher and consumer completely decoupled.
Amazon Managed Workflows for Apache Airflow (MWAA)
Amazon Managed Workflows for Apache Airflow (MWAA) is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud at scale. Apache Airflow is an open-source tool used to programmatically author, schedule, and monitor sequences of processes and tasks referred to as "workflows". With Managed Workflows, you can use Airflow and Python to create workflows without having to manage the underlying infrastructure for scalability, availability, and security. Managed Workflows automatically scales its workflow execution capacity to meet your needs and is integrated with AWS security services to help provide you with fast and secure access to data.
Amazon MQ
Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers in the cloud. Message brokers allow different software systems, often using different programming languages and on different platforms, to communicate and exchange information. Amazon MQ reduces your operational load by managing the provisioning, setup, and maintenance of ActiveMQ and RabbitMQ, popular open-source message brokers. Connecting your current applications to Amazon MQ is easy because it uses industry-standard APIs and protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. Using standards means that in most cases, there's no need to rewrite any messaging code when you migrate to AWS.
Amazon Simple Notification Service
Amazon Simple Notification Service (Amazon SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber endpoints for parallel processing, including Amazon SQS queues, AWS Lambda functions, and HTTP/S webhooks. Additionally, SNS can be used to fan out notifications to end users using mobile push, SMS, and email.
Amazon Simple Queue Service
Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. Get started with SQS in minutes using the AWS console, Command Line Interface, or SDK of your choice, and three simple commands. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
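The basic send/receive/delete cycle those "three simple commands" refer to could look roughly like the Boto3 sketch below; the queue name and message body are placeholders for illustration.

import boto3

sqs = boto3.client("sqs")

# Create (or look up) a hypothetical queue, then exercise the basic send/receive/delete cycle.
queue_url = sqs.create_queue(QueueName="example-orders")["QueueUrl"]

sqs.send_message(QueueUrl=queue_url, MessageBody='{"orderId": "42", "status": "placed"}')

messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for message in messages.get("Messages", []):
    print("Received:", message["Body"])
    # Deleting the message tells SQS it was processed; otherwise it becomes visible again.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])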
Amazon Simple Workflow Service
Amazon Simple Workflow Service (Amazon SWF) helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully managed state tracker and task coordinator in the cloud. If your application's steps take more than 500 milliseconds to complete, you need to track the state of processing. If you need to recover or retry if a task fails, Amazon SWF can help you.
AR and VR
Topics
•Amazon Sumerian (p 17)
Amazon Sumerian
Amazon Sumerian lets you create and run virtual reality (VR), augmented reality (AR), and 3D applications quickly and easily without requiring any specialized programming or 3D graphics expertise. With Sumerian, you can build highly immersive and interactive scenes that run on popular hardware such as Oculus Go, Oculus Rift, HTC Vive, HTC Vive Pro, Google Daydream, and Lenovo Mirage, as well as Android and iOS mobile devices. For example, you can build a virtual classroom that lets you train new employees around the world, or you can build a virtual environment that enables people to tour a building remotely. Sumerian makes it easy to create all the building blocks needed to build highly immersive and interactive 3D experiences, including adding objects (e.g., characters, furniture, and landscape) and designing, animating, and scripting environments. Sumerian does not require specialized expertise, and you can design scenes directly from your browser.
Blockchain
Topics
•Amazon Managed Blockchain (p 17)
Amazon Managed Blockchain
Amazon Managed Blockchain is a fully managed service that makes it easy to create and manage scalable blockchain networks using the popular open-source frameworks Hyperledger Fabric and Ethereum. Blockchain makes it possible to build applications where multiple parties can execute transactions without the need for a trusted central authority. Today, building a scalable blockchain network with existing technologies is complex to set up and hard to manage. To create a blockchain network, each network member needs to manually provision hardware, install software, create and manage certificates for access control, and configure networking components. Once the blockchain network is running, you need to continuously monitor the infrastructure and adapt to changes, such as an increase in transaction requests or new members joining or leaving the network. Amazon Managed Blockchain is a fully managed service that allows you to set up and manage a scalable blockchain network with just a few clicks. Amazon Managed Blockchain eliminates the overhead required to create the network, and automatically scales to meet the demands of thousands of applications running millions of transactions. Once your network is up and running, Managed Blockchain makes it easy to manage and maintain your blockchain network. It manages your certificates, lets you easily invite new members to join the network, and tracks operational metrics such as usage of compute, memory, and storage resources. In addition, Managed Blockchain can replicate an immutable copy of your blockchain network activity into Amazon Quantum Ledger Database (QLDB), a fully managed ledger database. This allows you to easily analyze the network activity outside the network and gain insights into trends.
Business Applications
Topics
•Alexa for Business (p 18)
•Amazon Chime (p 18)
•Amazon SES (p 18)
•Amazon WorkDocs (p 18)
•Amazon WorkMail (p 18)
Alexa for Business
Alexa for Business is a service that enables organizations and employees to use Alexa to get more work done. With Alexa for Business, employees can use Alexa as their intelligent assistant to be more productive in meeting rooms, at their desks, and even with the Alexa devices they already have at home.
Amazon Chime
Amazon Chime is a communications service that transforms online meetings with a secure, easy-to-use application that
you can trust Amazon Chime works seamlessly across your devices so that you can stay connected You can use Amazon Chime for online meetings video conferencing calls chat and to share content both inside and outside your organization Amazon Chime works with Alexa for Business which means you can use Alexa to start your meetings with your voice Alexa can start your video meetings in large conference rooms and automatically dial into online meetings in smaller huddle rooms and from your desk Amazon SES Amazon Simple Email Service (Amazon SES) is a costeffective flexible and scalable email service that enables developers to send mail from within any application You can configure Amazon SES quickly to support several email use cases including transactional marketing or mass email communications Amazon SES's flexible IP deployment and email authentication options help drive higher deliverability and protect sender reputation while sending analytics measure the impact of each email With Amazon SES you can send email securely globally and at scale Amazon WorkDocs Amazon WorkDocs is a fully managed secure enterprise storage and sharing service with strong administrative controls and feedback capabilities that improve user productivity Users can comment on files send them to others for feedback and upload new versions without having to resort to emailing multiple versions of their files as attachments Users can take advantage of these capabilities wherever they are using the device of their choice including PCs Macs tablets and phones Amazon WorkDocs offers IT administrators the option of integrating with existing corporate directories flexible sharing policies and control of the location where data is stored You can get started using Amazon WorkDocs with a 30day free trial providing 1 TB of storage per user for up to 50 users Amazon WorkMail Amazon WorkMail is a secure managed business email and calendar service with support for existing desktop and mobile email client applications Amazon WorkMail gives users the ability to seamlessly access their email contacts and calendars using the client application of their choice including Microsoft Outlook native iOS and Android email applications any client application supporting the IMAP protocol or directly through a web browser You can integrate Amazon WorkMail with your existing corporate directory use email journaling to meet compliance requirements and control both the keys that encrypt your data and the location in which your data is stored You can also set up interoperability with Microsoft Exchange Server and programmatically manage users groups and resources using the Amazon WorkMail SDK 18Overview of Amazon Web Services AWS Whitepaper Cloud Financial Management Cloud Financial Management Topics •AWS Application Cost Profiler (p 19) •AWS Cost Explorer (p 19) •AWS Budgets (p 19) •AWS Cost & Usage Report (p 19) •Reserved Instance (RI) Reporting (p 20) •Savings Plans (p 20) AWS Application Cost Profiler AWS Application Cost Profiler provides you the ability to track the consumption of shared AWS resources used by software applications and report granular cost breakdown across tenant base You can achieve economies of scale with the shared infrastructure model while still maintaining a clear line of sight to detailed resource consumption information across multiple dimensions With the proportionate cost insights of shared AWS resources organizations running applications can establish the data foundation for accurate cost allocation model and ISV 
selling applications can better understand your profitability and customize pricing strategies for your end customers AWS Cost Explorer AWS Cost Explorer has an easytouse interface that lets you visualize understand and manage your AWS costs and usage over time Get started quickly by creating custom reports (including charts and tabular data) that analyze cost and usage data both at a high level (eg total costs and usage across all accounts) and for highlyspecific requests (eg m22xlarge costs within account Y that are tagged “project: secretProject”) AWS Budgets AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount You can also use AWS Budgets to set RI utilization or coverage targets and receive alerts when your utilization drops below the threshold you define RI alerts support Amazon EC2 Amazon RDS Amazon Redshift and Amazon ElastiCache reservations Budgets can be tracked at the monthly quarterly or yearly level and you can customize the start and end dates You can further refine your budget to track costs associated with multiple dimensions such as AWS service linked account tag and others Budget alerts can be sent via email and/or Amazon Simple Notification Service (SNS) topic Budgets can be created and tracked from the AWS Budgets dashboard or via the Budgets API AWS Cost & Usage Report The AWS Cost & Usage Report is a single location for accessing comprehensive information about your AWS costs and usage The AWS Cost & Usage Report lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items as well as any tags that you have activated for cost allocation purposes You can also customize the AWS Cost & Usage Report to aggregate your usage data to the daily or monthly level 19Overview of Amazon Web Services AWS Whitepaper Reserved Instance (RI) Reporting Reserved Instance (RI) Reporting AWS provides a number of RIspecific cost management solutions outofthebox to help you better understand and manage your RIs Using the RI Utilization and Coverage reports available in AWS Cost Explorer you can visualize your RI data at an aggregate level or inspect a particular RI subscription To access the most detailed RI information available you can leverage the AWS Cost & Usage Report You can also set a custom RI utilization target via AWS Budgets and receive alerts when your utilization drops below the threshold you define Savings Plans Savings Plans is a flexible pricing model offering lower prices compared to OnDemand pricing in exchange for a specific usage commitment (measured in $/hour) for a one or threeyear period AWS offers three types of Savings Plans – Compute Savings Plans EC2 Instance Savings Plans and Amazon SageMaker Savings Plans Compute Savings Plans apply to usage across Amazon EC2 AWS Lambda and AWS Fargate The EC2 Instance Savings Plans apply to EC2 usage and Amazon SageMaker Savings Plans apply to Amazon SageMaker usage You can easily sign up a 1 or 3year term Savings Plans in AWS Cost Explorer and manage your plans by taking advantage of recommendations performance reporting and budget alerts Compute Services Topics •Amazon EC2 (p 20) •Amazon EC2 Auto Scaling (p 21) •Amazon EC2 Image Builder (p 21) •Amazon Lightsail (p 22) •AWS App Runner (p 22) •AWS Batch (p 22) •AWS Elastic Beanstalk (p 22) •AWS Fargate (p 22) •AWS Lambda (p 23) •AWS Serverless Application Repository (p 23) •AWS Outposts (p 23) •AWS Wavelength (p 23) •VMware Cloud 
on AWS (p 24) Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure resizable compute capacity in the cloud It is designed to make webscale computing easier for developers The simple web interface of Amazon EC2 allows you to obtain and configure capacity with minimal friction It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment Amazon EC2 reduces the time required to obtain and boot new server instances (called Amazon EC2 instances) to minutes allowing you to quickly scale capacity both up and down as your computing requirements change Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use Amazon EC2 provides developers and system administrators the tools to build failure resilient applications and isolate themselves from common failure scenarios 20Overview of Amazon Web Services AWS Whitepaper Amazon EC2 Auto Scaling Instance Types Amazon EC2 passes on to you the financial benefits of Amazon’s scale You pay a very low rate for the compute capacity you actually consume See Amazon EC2 Instance Purchasing Options for a more detailed description •OnDemand Instances— With OnDemand instances you pay for compute capacity by the hour or the second depending on which instances you run No longerterm commitments or upfront payments are needed You can increase or decrease your compute capacity depending on the demands of your application and only pay the specified per hourly rates for the instance you use OnDemand instances are recommended for: •Users that prefer the low cost and flexibility of Amazon EC2 without any upfront payment or long term commitment •Applications with shortterm spiky or unpredictable workloads that cannot be interrupted •Applications being developed or tested on Amazon EC2 for the first time •Spot Instances—Spot Instances are available at up to a 90% discount compared to OnDemand prices and let you take advantage of unused Amazon EC2 capacity in the AWS Cloud You can significantly reduce the cost of running your applications grow your application’s compute capacity and throughput for the same budget and enable new types of cloud computing applications Spot instances are recommended for: •Applications that have flexible start and end times •Applications that are only feasible at very low compute prices •Users with urgent computing needs for large amounts of additional capacity •Reserved Instances—Reserved Instances provide you with a significant discount (up to 72%) compared to OnDemand instance pricing You have the flexibility to change families operating system types and tenancies while benefitting from Reserved Instance pricing when you use Convertible Reserved Instances •Savings Plans—Savings Plans are a flexible pricing model that offer low prices on EC2 and Fargate usage in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term •Dedicated Hosts —A Dedicated Host is a physical EC2 server dedicated for your use Dedicated Hosts can help you reduce costs by allowing you to use your existing serverbound software licenses including Windows Server SQL Server and SUSE Linux Enterprise Server (subject to your license terms) and can also help you meet compliance requirements Amazon EC2 Auto Scaling Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to conditions you define You can use the 
fleet management features of Amazon EC2 Auto Scaling to maintain the health and availability of your fleet You can also use the dynamic and predictive scaling features of Amazon EC2 Auto Scaling to add or remove EC2 instances Dynamic scaling responds to changing demand and predictive scaling automatically schedules the right number of EC2 instances based on predicted demand Dynamic scaling and predictive scaling can be used together to scale faster Amazon EC2 Image Builder EC2 Image Builder simplifies the building testing and deployment of Virtual Machine and container images for use on AWS or onpremises Keeping Virtual Machine and container images uptodate can be time consuming resource intensive and errorprone Currently customers either manually update and snapshot VMs or have teams that build automation scripts to maintain images 21Overview of Amazon Web Services AWS Whitepaper Amazon Lightsail Image Builder significantly reduces the effort of keeping images uptodate and secure by providing a simple graphical interface builtin automation and AWSprovided security settings With Image Builder there are no manual steps for updating an image nor do you have to build your own automation pipeline Image Builder is offered at no cost other than the cost of the underlying AWS resources used to create store and share the images Amazon Lightsail Amazon Lightsail is designed to be the easiest way to launch and manage a virtual private server with AWS Lightsail plans include everything you need to jumpstart your project – a virtual machine SSD based storage data transfer DNS management and a static IP address – for a low predictable price AWS App Runner AWS App Runner is a fully managed service that makes it easy for developers to quickly deploy containerized web applications and APIs at scale and with no prior infrastructure experience required Start with your source code or a container image App Runner automatically builds and deploys the web application and load balances traffic with encryption App Runner also scales up or down automatically to meet your traffic needs With App Runner rather than thinking about servers or scaling you have more time to focus on your applications AWS Batch AWS Batch enables developers scientists and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS AWS Batch dynamically provisions the optimal quantity and type of compute resources (eg CPU or memoryoptimized instances) based on the volume and specific resource requirements of the batch jobs submitted With AWS Batch there is no need to install and manage batch computing software or server clusters that you use to run your jobs allowing you to focus on analyzing results and solving problems AWS Batch plans schedules and runs your batch computing workloads across the full range of AWS compute services and features such as Amazon EC2 and Spot Instances AWS Elastic Beanstalk AWS Elastic Beanstalk is an easytouse service for deploying and scaling web applications and services developed with Java NET PHP Nodejs Python Ruby Go and Docker on familiar servers such as Apache Nginx Passenger and Internet Information Services (IIS) You can simply upload your code and AWS Elastic Beanstalk automatically handles the deployment from capacity provisioning load balancing and auto scaling to application health monitoring At the same time you retain full control over the AWS resources powering your application and can access the underlying resources at any time AWS Fargate AWS Fargate is a 
compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters With AWS Fargate you no longer have to provision configure and scale clusters of virtual machines to run containers This removes the need to choose server types decide when to scale your clusters or optimize cluster packing AWS Fargate removes the need for you to interact with or think about servers or clusters Fargate lets you focus on designing and building your applications instead of managing the infrastructure that runs them Amazon ECS has two modes: Fargate launch type and EC2 launch type With Fargate launch type all you have to do is package your application in containers specify the CPU and memory requirements 22Overview of Amazon Web Services AWS Whitepaper AWS Lambda define networking and IAM policies and launch the application EC2 launch type allows you to have serverlevel more granular control over the infrastructure that runs your container applications With EC2 launch type you can use Amazon ECS to manage a cluster of servers and schedule placement of containers on the servers Amazon ECS keeps track of all the CPU memory and other resources in your cluster and also finds the best server for a container to run on based on your specified resource requirements You are responsible for provisioning patching and scaling clusters of servers You can decide which type of server to use which applications and how many containers to run in a cluster to optimize utilization and when you should add or remove servers from a cluster EC2 launch type gives you more control of your server clusters and provides a broader range of customization options which might be required to support some specific applications or possible compliance and government requirements AWS Lambda AWS Lambda lets you run code without provisioning or managing servers You pay only for the compute time you consume—there is no charge when your code is not running With Lambda you can run code for virtually any type of application or backend service—all with zero administration Just upload your code and Lambda takes care of everything required to run and scale your code with high availability You can set up your code to automatically trigger from other AWS services or you can call it directly from any web or mobile app AWS Serverless Application Repository The AWS Serverless Application Repository enables you to quickly deploy code samples components and complete applications for common use cases such as web and mobile backends event and data processing logging monitoring IoT and more Each application is packaged with an AWS Serverless Application Model (SAM) template that defines the AWS resources used Publicly shared applications also include a link to the application’s source code There is no additional charge to use the Serverless Application Repository you only pay for the AWS resources used in the applications you deploy You can also use the Serverless Application Repository to publish your own applications and share them within your team across your organization or with the community at large To share an application you've built publish it to the AWS Serverless Application Repository AWS Outposts AWS Outposts bring native AWS services infrastructure and operating models to virtually any data center colocation space or onpremises facility You can use the same APIs the same tools the same hardware and the same functionality across onpremises and the cloud to deliver a truly consistent hybrid experience Outposts 
can be used to support workloads that need to remain onpremises due to low latency or local data processing needs AWS Outposts come in two variants: 1) VMware Cloud on AWS Outposts allows you to use the same VMware control plane and APIs you use to run your infrastructure 2) AWS native variant of AWS Outposts allows you to use the same exact APIs and control plane you use to run in the AWS cloud but onpremises AWS Outposts infrastructure is fully managed maintained and supported by AWS to deliver access to the latest AWS services Getting started is easy you simply log into the AWS Management Console to order your Outposts servers choosing from a wide range of compute and storage options You can order one or more servers or quarter half and full rack units AWS Wavelength AWS Wavelength is an AWS Infrastructure offering optimized for mobile edge computing applications Wavelength Zones are AWS infrastructure deployments that embed AWS compute and storage 23Overview of Amazon Web Services AWS Whitepaper VMware Cloud on AWS services within communications service providers’ (CSP) datacenters at the edge of the 5G network so application traffic from 5G devices can reach application servers running in Wavelength Zones without leaving the telecommunications network This avoids the latency that would result from application traffic having to traverse multiple hops across the Internet to reach their destination enabling customers to take full advantage of the latency and bandwidth benefits offered by modern 5G networks VMware Cloud on AWS VMware Cloud on AWS is an integrated cloud offering jointly developed by AWS and VMware delivering a highly scalable secure and innovative service that allows organizations to seamlessly migrate and extend their onpremises VMware vSpherebased environments to the AWS Cloud running on nextgeneration Amazon Elastic Compute Cloud (Amazon EC2) bare metal infrastructure VMware Cloud on AWS is ideal for enterprise IT infrastructure and operations organizations looking to migrate their onpremises vSpherebased workloads to the public cloud consolidate and extend their data center capacities and optimize simplify and modernize their disaster recovery solutions VMware Cloud on AWS is delivered sold and supported globally by VMware and its partners with availability in the following AWS Regions: AWS Europe (Stockholm) AWS US East (Northern Virginia) AWS US East (Ohio) AWS US West (Northern California) AWS US West (Oregon) AWS Canada (Central) AWS Europe (Frankfurt) AWS Europe (Ireland) AWS Europe (London) AWS Europe (Paris) AWS Europe (Milan) AWS Asia Pacific (Singapore) AWS Asia Pacific (Sydney) AWS Asia Pacific (Tokyo) AWS Asia Pacific (Mumbai) Region AWS South America (Sao Paulo) AWS Asia Pacific (Seoul) and AWS GovCloud (US West) With each release VMware Cloud on AWS availability will expand into additional global regions VMware Cloud on AWS brings the broad diverse and rich innovations of AWS services natively to the enterprise applications running on VMware's compute storage and network virtualization platforms This allows organizations to easily and rapidly add new innovations to their enterprise applications by natively integrating AWS infrastructure and platform capabilities such as AWS Lambda Amazon Simple Queue Service (SQS) Amazon S3 Elastic Load Balancing Amazon RDS Amazon DynamoDB Amazon Kinesis and Amazon Redshift among many others With VMware Cloud on AWS organizations can simplify their Hybrid IT operations by using the same VMware Cloud Foundation technologies 
including vSphere, vSAN, NSX, and vCenter Server, across their on-premises data centers and on the AWS Cloud without having to purchase any new or custom hardware, rewrite applications, or modify their operating models. The service automatically provisions infrastructure and provides full VM compatibility and workload portability between your on-premises environments and the AWS Cloud. With VMware Cloud on AWS, you can leverage AWS's breadth of services, including compute, databases, analytics, Internet of Things (IoT), security, mobile, deployment, application services, and more.

Contact Center

Topics
•Amazon Connect (p 24)

Amazon Connect

Amazon Connect is a self-service, omnichannel cloud contact center service that makes it easy for any business to deliver better customer service at lower cost. Amazon Connect is based on the same contact center technology used by Amazon customer service associates around the world to power millions of customer conversations. The self-service graphical interface in Amazon Connect makes it easy for non-technical users to design contact flows, manage agents, and track performance metrics – no specialized skills required. There are no upfront payments or long-term commitments and no infrastructure to manage with Amazon Connect; customers pay by the minute for Amazon Connect usage plus any associated telephony services.

Containers

Topics
•Amazon Elastic Container Registry (p 25)
•Amazon Elastic Container Service (p 25)
•Amazon Elastic Kubernetes Service (p 25)
•AWS App2Container (p 25)
•Red Hat OpenShift Service on AWS (p 26)

Amazon Elastic Container Registry

Amazon Elastic Container Registry (ECR) is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon Elastic Container Service (Amazon ECS), simplifying your development to production workflow. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications. Integration with AWS Identity and Access Management (IAM) (p 69) provides resource-level control of each repository. With Amazon ECR, there are no upfront fees or commitments. You pay only for the amount of data you store in your repositories and data transferred to the Internet.

Amazon Elastic Container Service

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines. With simple API calls you can launch and stop Docker-enabled applications, query the complete state of your application, and access many familiar features such as IAM roles, security groups, load balancers, Amazon CloudWatch Events, AWS CloudFormation templates, and AWS CloudTrail logs.
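As a rough illustration of those "simple API calls" from PowerShell, the sketch below creates a private ECR repository, logs the local Docker client into the registry, and creates an ECS cluster. It assumes the AWS Tools for PowerShell are installed and credentials are already configured; the repository and cluster names are placeholders, not values taken from this paper.

# Minimal sketch, assuming AWS credentials and a default Region are already set
# (for example via Initialize-AWSDefaultConfiguration). Names are placeholders.

# Create a private ECR repository to hold container images
New-ECRRepository -RepositoryName "demo-app"

# Generate a docker login command for the registry and run it
# (requires the Docker CLI to be installed locally)
(Get-ECRLoginCommand).Command | Invoke-Expression

# Create an ECS cluster that will run the containerized application
New-ECSCluster -ClusterName "demo-cluster"

# List cluster ARNs to confirm the cluster exists
Get-ECSClusterList

A real deployment would follow this with a task definition and a service, typically defined through CloudFormation or another template rather than ad hoc commands.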
Amazon Elastic Kubernetes Service

Amazon Elastic Kubernetes Service (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS Availability Zones to eliminate a single point of failure. Amazon EKS is certified Kubernetes conformant, so you can use existing tooling and plugins from partners and the Kubernetes community. Applications running on any standard Kubernetes environment are fully compatible and can be easily migrated to Amazon EKS.

AWS App2Container

AWS App2Container (A2C) is a command-line tool for modernizing .NET and Java applications into containerized applications. A2C analyzes and builds an inventory of all applications running in virtual machines, on-premises or in the cloud. You simply select the application you want to containerize, and A2C packages the application artifact and identified dependencies into container images, configures the network ports, and generates the ECS task and Kubernetes pod definitions. A2C provisions, through CloudFormation, the cloud infrastructure and CI/CD pipelines required to deploy the containerized .NET or Java application into production. With A2C, you can easily modernize your existing applications and standardize deployment and operations through containers.

Red Hat OpenShift Service on AWS

Red Hat OpenShift Service on AWS (ROSA) provides an integrated experience to use OpenShift. If you are already familiar with OpenShift, you can accelerate your application development process by leveraging familiar OpenShift APIs and tools for deployments on AWS. With ROSA, you can use the wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to build secure and scalable applications faster. ROSA comes with pay-as-you-go hourly and annual billing, a 99.95% SLA, and joint support from AWS and Red Hat. ROSA makes it easier for you to focus on deploying applications and accelerating innovation by moving cluster lifecycle management to Red Hat and AWS. With ROSA, you can run containerized applications with your existing OpenShift workflows and reduce the complexity of management.

Database

Topics
•Amazon Aurora (p 26)
•Amazon DynamoDB (p 26)
•Amazon ElastiCache (p 27)
•Amazon Keyspaces (for Apache Cassandra) (p 27)
•Amazon Neptune (p 27)
•Amazon Relational Database Service (p 28)
•Amazon RDS on VMware (p 28)
•Amazon Quantum Ledger Database (QLDB) (p 28)
•Amazon Timestream (p 29)
•Amazon DocumentDB (with MongoDB compatibility) (p 29)

Amazon Aurora

Amazon Aurora is a MySQL and PostgreSQL compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Amazon Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases. It provides the security, availability, and reliability of commercial databases at 1/10th the cost. Amazon Aurora is fully managed by Amazon Relational Database Service (Amazon RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups. Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 128 TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three Availability Zones (AZs).

Amazon DynamoDB

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-Region, multi-master database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and support peaks of more than 20 million requests per second. Many of the world's fastest growing businesses, such as Lyft, Airbnb, and Redfin, as well as enterprises such as Samsung, Toyota, and Capital One, depend on the scale and performance of DynamoDB to support their mission-critical workloads. Hundreds of thousands of AWS customers have chosen DynamoDB as their key-value and document database for mobile, web, gaming, ad tech, IoT, and other applications that need low-latency data access at any scale. Create a new table for your application and let DynamoDB handle the rest.
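To make the "create a table and let DynamoDB handle the rest" point concrete, the following minimal sketch uses the DynamoDB schema-helper cmdlets in the AWS Tools for PowerShell. The table name and key names are placeholders chosen for illustration only.

# Minimal sketch using the DynamoDB helper cmdlets; "GameScores", "PlayerId",
# and "GameTitle" are placeholder names.
New-DDBTableSchema |
    Add-DDBKeySchema -KeyName "PlayerId" -KeyDataType "S" |
    Add-DDBKeySchema -KeyName "GameTitle" -KeyDataType "S" -KeyType RANGE |
    New-DDBTable -TableName "GameScores" -ReadCapacity 5 -WriteCapacity 5

# Check the table status; partitioning, replication, and scaling are handled by the service
(Get-DDBTable -TableName "GameScores").TableStatus

Everything beyond this single call – storage, durability, and throughput management – is the service's responsibility, which is the point the paragraph above is making.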
Amazon ElastiCache

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches instead of relying entirely on slower disk-based databases. Amazon ElastiCache supports two open-source in-memory caching engines:

•Redis – a fast, open-source, in-memory key-value data store for use as a database, cache, message broker, and queue. Amazon ElastiCache for Redis is a Redis-compatible in-memory service that delivers the ease of use and power of Redis along with the availability, reliability, and performance suitable for the most demanding applications. Both single-node and up to 15-shard clusters are available, enabling scalability to up to 3.55 TiB of in-memory data. ElastiCache for Redis is fully managed, scalable, and secure. This makes it an ideal candidate to power high-performance use cases such as web, mobile apps, gaming, ad tech, and IoT.
•Memcached – a widely adopted memory object caching system. ElastiCache for Memcached is protocol compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service.

Amazon Keyspaces (for Apache Cassandra)

Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra–compatible database service. With Amazon Keyspaces, you can run your Cassandra workloads on AWS using the same Cassandra application code and developer tools that you use today. You don't have to provision, patch, or manage servers, and you don't have to install, maintain, or operate software. Amazon Keyspaces is serverless, so you pay for only the resources you use, and the service can automatically scale tables up and down in response to application traffic. You can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. Data is encrypted by default, and Amazon Keyspaces enables you to back up your table data continuously using point-in-time recovery. Amazon Keyspaces gives you the performance, elasticity, and enterprise features you need to operate business-critical Cassandra workloads at scale.
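For a sense of how little infrastructure work deploying a cache involves, the sketch below provisions a small single-node Redis cluster from PowerShell. The cluster identifier and node type are placeholders, and the ElastiCache cmdlet and parameter names shown should be verified against the installed AWS.Tools.ElastiCache module before use.

# Hypothetical, minimal provisioning sketch; identifiers are placeholders and
# parameter names should be confirmed with Get-Help New-ECCacheCluster.
New-ECCacheCluster -CacheClusterId "demo-cache" `
                   -Engine "redis" `
                   -CacheNodeType "cache.t3.micro" `
                   -NumCacheNode 1

# Describe the cluster and wait until its status reports "available"
Get-ECCacheCluster -CacheClusterId "demo-cache"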
Amazon Neptune

Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with milliseconds latency. Amazon Neptune supports the popular graph models Property Graph and W3C's RDF, and their respective query languages Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security. Amazon Neptune is highly available, with read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across Availability Zones. Neptune is secure, with support for encryption at rest. Neptune is fully managed, so you no longer need to worry about database management tasks such as hardware provisioning, software patching, setup, configuration, or backups.

Amazon Relational Database Service

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It frees you to focus on your applications, so you can give them the fast performance, high availability, security, and compatibility they need. Amazon RDS is available on several database instance types, optimized for memory, performance, or I/O, and provides you with six familiar database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. You can use the AWS Database Migration Service to easily migrate or replicate your existing databases to Amazon RDS.

Amazon RDS on VMware

Amazon Relational Database Service (Amazon RDS) on VMware lets you deploy managed databases in on-premises VMware environments using the Amazon RDS technology enjoyed by hundreds of thousands of AWS customers. Amazon RDS provides cost-efficient and resizable capacity while automating time-consuming administration tasks including hardware provisioning, database setup, patching, and backups, freeing you to focus on your applications. RDS on VMware brings these same benefits to your on-premises deployments, making it easy to set up, operate, and scale databases in VMware vSphere private data centers, or to migrate them to AWS. Amazon RDS on VMware allows you to utilize the same simple interface for managing databases in on-premises VMware environments as you would use in AWS. You can easily replicate RDS on VMware databases to RDS instances in AWS, enabling low-cost hybrid deployments for disaster recovery, read replica bursting, and optional long-term backup retention in Amazon Simple Storage Service (Amazon S3).
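To ground the "set up, operate, and scale" claim, here is a minimal sketch that provisions a small MySQL instance with the AWS Tools for PowerShell. The identifier, credentials, and sizing are placeholders; a production instance would also specify networking, backup retention, and Multi-AZ options.

# Minimal sketch; identifier, credentials, and sizing are placeholders.
New-RDSDBInstance -DBInstanceIdentifier "demo-mysql" `
                  -Engine "mysql" `
                  -DBInstanceClass "db.t3.micro" `
                  -AllocatedStorage 20 `
                  -MasterUsername "admin" `
                  -MasterUserPassword "ChangeMe12345!"

# Check provisioning status and, once available, read the endpoint to connect to
(Get-RDSDBInstance -DBInstanceIdentifier "demo-mysql").Endpoint

The same pattern applies to the other engines listed above; only the -Engine value and instance sizing change, while patching, backups, and failover remain managed by the service.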
Amazon Quantum Ledger Database (QLDB)

Amazon QLDB is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. Amazon QLDB tracks each and every application data change and maintains a complete and verifiable history of changes over time. Ledgers are typically used to record a history of economic and financial activity in an organization. Many organizations build applications with ledger-like functionality because they want to maintain an accurate history of their applications' data, for example tracking the history of credits and debits in banking transactions, verifying the data lineage of an insurance claim, or tracing the movement of an item in a supply chain network. Ledger applications are often implemented using custom audit tables or audit trails created in relational databases. However, building audit functionality with relational databases is time consuming and prone to human error. It requires custom development, and since relational databases are not inherently immutable, any unintended changes to the data are hard to track and verify. Alternatively, blockchain frameworks such as Hyperledger Fabric and Ethereum can also be used as a ledger. However, this adds complexity, as you need to set up an entire blockchain network with multiple nodes, manage its infrastructure, and require the nodes to validate each transaction before it can be added to the ledger. Amazon QLDB is a new class of database that eliminates the need to engage in the complex development effort of building your own ledger-like applications. With QLDB, your data's change history is immutable – it cannot be altered or deleted – and using cryptography you can easily verify that there have been no unintended modifications to your application's data. QLDB uses an immutable transactional log, known as a journal, that tracks each application data change and maintains a complete and verifiable history of changes over time. QLDB is easy to use because it provides developers with a familiar SQL-like API, a flexible document data model, and full support for transactions. QLDB is also serverless, so it automatically scales to support the demands of your application. There are no servers to manage and no read or write limits to configure. With QLDB, you only pay for what you use.

Amazon Timestream

Amazon Timestream is a fast, scalable, fully managed time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day at 1/10th the cost of relational databases. Driven by the rise of IoT devices, IT systems, and smart industrial machines, time-series data — data that measures how things change over time — is one of the fastest growing data types. Time-series data has specific characteristics: it typically arrives in time order form, the data is append-only, and queries are always over a time interval. While relational databases can store this data, they are inefficient at processing it because they lack optimizations such as storing and retrieving data by time intervals. Timestream is a purpose-built time series database that efficiently stores and processes this data by time intervals. With Timestream, you can easily store and analyze log data for DevOps, sensor data for IoT applications, and industrial telemetry data for equipment maintenance. As your data grows over time, Timestream's adaptive query processing engine understands its location and format, making your data simpler and faster to analyze. Timestream also automates rollups, retention, tiering, and compression of data, so you can manage your data at the lowest possible cost. Timestream is serverless, so there are no servers to manage. It handles time-consuming tasks such as server provisioning, software patching, setup, configuration, and data retention and tiering, freeing you to focus on building your applications.

Amazon DocumentDB (with MongoDB compatibility)

Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. Amazon DocumentDB (with MongoDB compatibility) is designed from the ground up to give you the performance, scalability, and availability you need when operating mission-critical MongoDB workloads at scale. Amazon DocumentDB (with MongoDB compatibility) implements the Apache 2.0 open source MongoDB 3.6 and 4.0 APIs by emulating the responses that a MongoDB client expects from a MongoDB server, allowing you to use your existing MongoDB drivers and tools with Amazon DocumentDB (with MongoDB compatibility).

Developer Tools
Amazon Corretto

Amazon Corretto is a no-cost, multiplatform, production-ready distribution of the Open Java Development Kit (OpenJDK). Corretto comes with long-term support that will include performance enhancements and security fixes. Amazon runs Corretto internally on thousands of production services, and Corretto is certified as compatible with the Java SE standard. With Corretto, you can develop and run Java applications on popular operating systems, including Amazon Linux 2, Windows, and macOS.

AWS Cloud9

AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal. Cloud9 comes prepackaged with essential tools for popular programming languages, including JavaScript, Python, PHP, and more, so you don't need to install files or configure your development machine to start new projects. Since your Cloud9 IDE is cloud-based, you can work on your projects from your office, home, or anywhere using an internet-connected machine. Cloud9 also provides a seamless experience for developing serverless applications, enabling you to easily define resources, debug, and switch between local and remote execution of serverless applications. With Cloud9, you can quickly share your development environment with your team, enabling you to pair program and track each other's inputs in real time.

AWS CloudShell

AWS CloudShell is a browser-based shell that makes it easy to securely manage, explore, and interact with your AWS resources. CloudShell is pre-authenticated with your console credentials. Common development and operations tools are pre-installed, so no local installation or configuration is required. With CloudShell you can quickly run scripts with the AWS Command Line Interface (AWS CLI), experiment with AWS service APIs using the AWS SDKs, or use a range of other tools to be productive. You can use CloudShell right from your browser and at no additional cost.

AWS CodeArtifact

AWS CodeArtifact is a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, and share software packages used in their software development process. CodeArtifact can be configured to automatically fetch software packages and dependencies from public artifact repositories, so developers have access to the latest versions. CodeArtifact works with commonly used package managers and build tools like Maven, Gradle, npm, yarn, twine, pip, and NuGet, making it easy to integrate into existing development workflows.

AWS CodeBuild

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don't need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools.

AWS CodeCommit

AWS CodeCommit is a fully managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories. AWS CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. You can use AWS CodeCommit to securely store anything from source code to binaries, and it works seamlessly with your existing Git tools.
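As a rough sketch of how these pieces fit together from PowerShell, the commands below create a Git repository and start a build of an existing CodeBuild project. The repository name, description, and project name are placeholders, and the CodeCommit and CodeBuild cmdlet names should be checked against the installed AWS.Tools modules.

# Hypothetical names; verify availability with Get-Command New-CCRepository, Start-CBBuild
New-CCRepository -RepositoryName "demo-service" -RepositoryDescription "Sample repository"

# Trigger a build of a CodeBuild project that has already been configured,
# for example one whose source is the repository created above
Start-CBBuild -ProjectName "demo-service-build"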
AWS CodeDeploy

AWS CodeDeploy is a service that automates code deployments to any instance, including EC2 instances and instances running on premises. CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. You can use CodeDeploy to automate software deployments, eliminating the need for error-prone manual operations. The service scales with your infrastructure, so you can easily deploy to one instance or thousands.

AWS CodePipeline

AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. This enables you to rapidly and reliably deliver features and updates. You can easily integrate CodePipeline with third-party services such as GitHub or with your own custom plugin. With AWS CodePipeline, you only pay for what you use. There are no upfront fees or long-term commitments.
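To illustrate the release-model idea, the sketch below starts a run of an existing pipeline and inspects the state of its stages. The pipeline name is a placeholder, and the CodePipeline cmdlet names should be confirmed against the installed module before relying on them.

# Hypothetical pipeline name; verify cmdlets with Get-Command -Module AWS.Tools.CodePipeline
Start-CPPipelineExecution -Name "demo-release-pipeline"

# Inspect the state of each stage (Source, Build, Deploy, and so on)
(Get-CPPipelineState -Name "demo-release-pipeline").StageStates |
    Select-Object StageName, @{ Name = "Status"; Expression = { $_.LatestExecution.Status } }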
AWS CodeStar

AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS. AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster. AWS CodeStar makes it easy for your whole team to work together securely, allowing you to easily manage access and add owners, contributors, and viewers to your projects. Each AWS CodeStar project comes with a project management dashboard, including an integrated issue tracking capability powered by Atlassian JIRA Software. With the AWS CodeStar project dashboard, you can easily track progress across your entire software development process, from your backlog of work items to teams' recent code deployments. For more information, see AWS CodeStar features.

AWS Fault Injection Simulator

AWS Fault Injection Simulator is a fully managed service for running fault injection experiments on AWS that makes it easier to improve an application's performance, observability, and resiliency. Fault injection experiments are used in chaos engineering, which is the practice of stressing an application in testing or production environments by creating disruptive events, such as a sudden increase in CPU or memory consumption, observing how the system responds, and implementing improvements. Fault injection experiments help teams create the real-world conditions needed to uncover the hidden bugs, monitoring blind spots, and performance bottlenecks that are difficult to find in distributed systems. Fault Injection Simulator simplifies the process of setting up and running controlled fault injection experiments across a range of AWS services, so teams can build confidence in their application behavior. With Fault Injection Simulator, teams can quickly set up experiments using prebuilt templates that generate the desired disruptions. Fault Injection Simulator provides the controls and guardrails that teams need to run experiments in production, such as automatically rolling back or stopping the experiment if specific conditions are met. With a few clicks in the console, teams can run complex scenarios with common distributed system failures happening in parallel or building sequentially over time, enabling them to create the real-world conditions necessary to find hidden weaknesses.

AWS X-Ray

AWS X-Ray helps developers analyze and debug distributed applications in production or under development, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing, so you can identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application and shows a map of your application's underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.

End User Computing

Topics
•Amazon AppStream 2.0 (p 32)
•Amazon WorkSpaces (p 32)
•Amazon WorkLink (p 32)

Amazon AppStream 2.0

Amazon AppStream 2.0 is a fully managed application streaming service. You centrally manage your desktop applications on AppStream 2.0 and securely deliver them to any computer. You can easily scale to any number of users across the globe without acquiring, provisioning, and operating hardware or infrastructure. AppStream 2.0 is built on AWS, so you benefit from a data center and network architecture designed for the most security-sensitive organizations. Each user has a fluid and responsive experience with your applications, including GPU-intensive 3D design and engineering ones, because your applications run on virtual machines (VMs) optimized for specific use cases and each streaming session automatically adjusts to network conditions. Enterprises can use AppStream 2.0 to simplify application delivery and complete their migration to the cloud. Educational institutions can provide every student access to the applications they need for class on any computer. Software vendors can use AppStream 2.0 to deliver trials, demos, and training for their applications with no downloads or installations. They can also develop a full software-as-a-service (SaaS) solution without rewriting their application.

Amazon WorkSpaces

Amazon WorkSpaces is a fully managed, secure cloud desktop service. You can use WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe. You can pay either monthly or hourly, just for the WorkSpaces you launch, which helps you save money when compared to traditional desktops and on-premises VDI solutions. WorkSpaces helps you eliminate the complexity in managing hardware inventory, OS versions and patches, and Virtual Desktop Infrastructure (VDI), which helps simplify your desktop delivery strategy. With WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.

Amazon WorkLink

Amazon WorkLink is a fully managed service that lets you provide your employees with secure, easy access to your internal corporate websites and web apps using their mobile phones. Traditional solutions, such as Virtual Private Networks (VPNs) and device management software, are inconvenient to use on the go and often require the use of custom browsers that have a poor user experience. As a result, employees often forgo using them altogether. With Amazon WorkLink, employees can access internal web content as easily as they access any public website, without the hassle of connecting to their corporate network. When a user accesses an internal website, the page is first rendered in a browser
running in a secure container in AWS Amazon WorkLink then sends the contents of that page to employee phones as vector graphics while preserving the functionality and interactivity of the page This approach is more secure than traditional solutions because internal content is never stored or cached by the browser on employee phones and employee devices never connect directly to your corporate network With Amazon WorkLink there are no minimum fees or longterm commitments You pay only for users that connect to the service each month and there is no additional charge for bandwidth consumption FrontEnd Web & Mobile Services Topics •Amazon Location Service (p 33) 32Overview of Amazon Web Services AWS Whitepaper Amazon Location Service •Amazon Pinpoint (p 33) •AWS Amplify (p 33) •AWS Device Farm (p 34) •AWS AppSync (p 34) Amazon Location Service Amazon Location Service makes it easy for developers to add location functionality to applications without compromising data security and user privacy Location data is a vital ingredient in today’s applications enabling capabilities ranging from asset tracking to locationbased marketing However developers face significant barriers when integrating location functionality into their applications This includes cost privacy and security compromises and tedious and slow integration work Amazon Location Service provides affordable data tracking and geofencing capabilities and native integrations with AWS services so you can create sophisticated locationenabled applications quickly without the high cost of custom development You retain control of your location data with Amazon Location and you can combine proprietary data with data from the service Amazon Location provides costeffective locationbased services (LBS) using highquality data from global trusted providers Esri and HERE Amazon Pinpoint Amazon Pinpoint makes it easy to send targeted messages to your customers through multiple engagement channels Examples of targeted campaigns are promotional alerts and customer retention campaigns and transactional messages are messages such as order confirmations and password reset messages You can integrate Amazon Pinpoint into your mobile and web apps to capture usage data to provide you with insight into how customers interact with your apps Amazon Pinpoint also tracks the ways that your customers respond to the messages you send—for example by showing you the number of messages that were delivered opened or clicked You can develop custom audience segments and send them prescheduled targeted campaigns via email SMS and push notifications Targeted campaigns are useful for sending promotional or educational content to reengage and retain your users You can send transactional messages using the console or the Amazon Pinpoint REST API Transactional campaigns can be sent via email SMS push notifications and voice messages You can also use the API to build custom applications that deliver campaign and transactional messages AWS Amplify AWS Amplify makes it easy to create configure and implement scalable mobile applications powered by AWS Amplify seamlessly provisions and manages your mobile backend and provides a simple framework to easily integrate your backend with your iOS Android Web and React Native frontends Amplify also automates the application release process of both your frontend and backend allowing you to deliver features faster Mobile applications require cloud services for actions that can’t be done directly on the device such as offline data 
synchronization storage or data sharing across multiple users You often have to configure set up and manage multiple services to power the backend You also have to integrate each of those services into your application by writing multiple lines of code However as the number of application 33Overview of Amazon Web Services AWS Whitepaper AWS Device Farm features grow your code and release process becomes more complex and managing the backend requires more time Amplify provisions and manages backends for your mobile applications You just select the capabilities you need such as authentication analytics or offline data sync and Amplify will automatically provision and manage the AWS service that powers each of the capabilities You can then integrate those capabilities into your application through the Amplify libraries and UI components AWS Device Farm AWS Device Farm is an app testing service that lets you test and interact with your Android iOS and web apps on many devices at once or reproduce issues on a device in real time View video screenshots logs and performance data to pinpoint and fix issues before shipping your app AWS AppSync AWS AppSync is a serverless backend for mobile web and enterprise applications AWS AppSync makes it easy to build data driven mobile and web applications by handling securely all the application data management tasks like online and offline data access data synchronization and data manipulation across multiple data sources AWS AppSync uses GraphQL an API query language designed to build client applications by providing an intuitive and flexible syntax for describing their data requirement Game Tech Topics •Amazon GameLift (p 34) •Amazon Lumberyard (p 34) Amazon GameLift Amazon GameLift is a managed service for deploying operating and scaling dedicated game servers for sessionbased multiplayer games Amazon GameLift makes it easy to manage server infrastructure scale capacity to lower latency and cost match players into available game sessions and defend from distributed denialofservice (DDoS) attacks You pay for the compute resources and bandwidth your games actually use without monthly or annual contracts Amazon Lumberyard Amazon Lumberyard is a free crossplatform 3D game engine for you to create the highestquality games connect your games to the vast compute and storage of the AWS Cloud and engage fans on Twitch By starting game projects with Lumberyard you can spend more of your time creating great gameplay and building communities of fans and less time on the undifferentiated heavy lifting of building a game engine and managing server infrastructure Internet of Things (IoT) Topics 34Overview of Amazon Web Services AWS Whitepaper AWS IoT 1Click •AWS IoT 1Click (p 35) •AWS IoT Analytics (p 35) •AWS IoT Button (p 36) •AWS IoT Core (p 36) •AWS IoT Device Defender (p 36) •AWS IoT Device Management (p 37) •AWS IoT Events (p 37) •AWS IoT Greengrass (p 37) •AWS IoT SiteWise (p 37) •AWS IoT Things Graph (p 38) •AWS Partner Device Catalog (p 38) •FreeRTOS (p 38) AWS IoT 1Click AWS IoT 1Click is a service that enables simple devices to trigger AWS Lambda functions that can execute an action AWS IoT 1Click supported devices enable you to easily perform actions such as notifying technical support tracking assets and replenishing goods or services AWS IoT 1Click supported devices are ready for use right out of the box and eliminate the need for writing your own firmware or configuring them for secure connectivity AWS IoT 1Click supported devices can be easily managed 
You can easily create device groups and associate them with a Lambda function that runs your desired action when triggered You can also track device health and activity with the prebuilt reports AWS IoT Analytics AWS IoT Analytics is a fullymanaged service that makes it easy to run and operationalize sophisticated analytics on massive volumes of IoT data without having to worry about the cost and complexity typically required to build an IoT analytics platform It is the easiest way to run analytics on IoT data and get insights to make better and more accurate decisions for IoT applications and machine learning use cases IoT data is highly unstructured which makes it difficult to analyze with traditional analytics and business intelligence tools that are designed to process structured data IoT data comes from devices that often record fairly noisy processes (such as temperature motion or sound) The data from these devices can frequently have significant gaps corrupted messages and false readings that must be cleaned up before analysis can occur Also IoT data is often only meaningful in the context of additional third party data inputs For example to help farmers determine when to water their crops vineyard irrigation systems often enrich moisture sensor data with rainfall data from the vineyard allowing for more efficient water usage while maximizing harvest yield AWS IoT Analytics automates each of the difficult steps that are required to analyze data from IoT devices AWS IoT Analytics filters transforms and enriches IoT data before storing it in a timeseries data store for analysis You can setup the service to collect only the data you need from your devices apply mathematical transforms to process the data and enrich the data with devicespecific metadata such as device type and location before storing the processed data Then you can analyze your data by running ad hoc or scheduled queries using the builtin SQL query engine or perform more complex analytics and machine learning inference AWS IoT Analytics makes it easy to get started with machine learning by including prebuilt models for common IoT use cases You can also use your own custom analysis packaged in a container to execute on AWS IoT Analytics AWS IoT Analytics automates the execution of your custom analyses created in Jupyter Notebook or your own tools (such as Matlab Octave etc) to be executed on your schedule 35Overview of Amazon Web Services AWS Whitepaper AWS IoT Button AWS IoT Analytics is a fully managed service that operationalizes analyses and scales automatically to support up to petabytes of IoT data With AWS IoT Analytics you can analyze data from millions of devices and build fast responsive IoT applications without managing hardware or infrastructure AWS IoT Button The AWS IoT Button is a programmable button based on the Amazon Dash Button hardware This simple WiFi device is easy to configure and it’s designed for developers to get started with AWS IoT Core AWS Lambda Amazon DynamoDB Amazon SNS and many other Amazon Web Services without writing devicespecific code You can code the button's logic in the cloud to configure button clicks to count or track items call or alert someone start or stop something order services or even provide feedback For example you can click the button to unlock or start a car open your garage door call a cab call your spouse or a customer service representative track the use of common household chores medications or products or remotely control your home appliances The button can be 
used as a remote control for Netflix a switch for your Philips Hue light bulb a checkin/checkout device for Airbnb guests or a way to order your favorite pizza for delivery You can integrate it with thirdparty APIs like Twitter Facebook Twilio Slack or even your own company's applications Connect it to things we haven’t even thought of yet AWS IoT Core AWS IoT Core is a managed cloud service that lets connected devices easily and securely interact with cloud applications and other devices AWS IoT Core can support billions of devices and trillions of messages and can process and route those messages to AWS endpoints and to other devices reliably and securely With AWS IoT Core your applications can keep track of and communicate with all your devices all the time even when they aren’t connected AWS IoT Core makes it easy to use AWS services like AWS Lambda Amazon Kinesis Amazon S3 Amazon SageMaker Amazon DynamoDB Amazon CloudWatch AWS CloudTrail and Amazon QuickSight to build Internet of Things (IoT) applications that gather process analyze and act on data generated by connected devices without having to manage any infrastructure AWS IoT Device Defender AWS IoT Device Defender is a fully managed service that helps you secure your fleet of IoT devices AWS IoT Device Defender continuously audits your IoT configurations to make sure that they aren’t deviating from security best practices A configuration is a set of technical controls you set to help keep information secure when devices are communicating with each other and the cloud AWS IoT Device Defender makes it easy to maintain and enforce IoT configurations such as ensuring device identity authenticating and authorizing devices and encrypting device data AWS IoT Device Defender continuously audits the IoT configurations on your devices against a set of predefined security best practices AWS IoT Device Defender sends an alert if there are any gaps in your IoT configuration that might create a security risk such as identity certificates being shared across multiple devices or a device with a revoked identity certificate trying to connect to AWS IoT Core AWS IoT Device Defender also lets you continuously monitor security metrics from devices and AWS IoT Core for deviations from what you have defined as appropriate behavior for each device If something doesn’t look right AWS IoT Device Defender sends out an alert so you can take action to remediate the issue For example traffic spikes in outbound traffic might indicate that a device is participating in a DDoS attack AWS IoT Greengrass and FreeRTOS automatically integrate with AWS IoT Device Defender to provide security metrics from the devices for evaluation AWS IoT Device Defender can send alerts to the AWS IoT Console Amazon CloudWatch and Amazon SNS If you determine that you need to take an action based on an alert you can use AWS IoT Device Management to take mitigating actions such as pushing security fixes 36Overview of Amazon Web Services AWS Whitepaper AWS IoT Device Management AWS IoT Device Management As many IoT deployments consist of hundreds of thousands to millions of devices it is essential to track monitor and manage connected device fleets You need to ensure your IoT devices work properly and securely after they have been deployed You also need to secure access to your devices monitor health detect and remotely troubleshoot problems and manage software and firmware updates AWS IoT Device Management makes it easy to securely onboard organize monitor and remotely manage IoT 
devices at scale With AWS IoT Device Management you can register your connected devices individually or in bulk and easily manage permissions so that devices remain secure You can also organize your devices monitor and troubleshoot device functionality query the state of any IoT device in your fleet and send firmware updates overtheair (OTA) AWS IoT Device Management is agnostic to device type and OS so you can manage devices from constrained microcontrollers to connected cars all with the same service AWS IoT Device Management allows you to scale your fleets and reduce the cost and effort of managing large and diverse IoT device deployments AWS IoT Events AWS IoT Events is a fully managed IoT service that makes it easy to detect and respond to events from IoT sensors and applications Events are patterns of data identifying more complicated circumstances than expected such as changes in equipment when a belt is stuck or connected motion detectors using movement signals to activate lights and security cameras To detect events before AWS IoT Events you had to build costly custom applications to collect data apply decision logic to detect an event and then trigger another application to react to the event Using AWS IoT Events it’s simple to detect events across thousands of IoT sensors sending different telemetry data such as temperature from a freezer humidity from respiratory equipment and belt speed on a motor and hundreds of equipment management applications You simply select the relevant data sources to ingest define the logic for each event using simple ‘ifthenelse’ statements and select the alert or custom action to trigger when an event occurs AWS IoT Events continuously monitors data from multiple IoT sensors and applications and it integrates with other services such as AWS IoT Core and AWS IoT Analytics to enable early detection and unique insights into events AWS IoT Events automatically triggers alerts and actions in response to events based on the logic you define This helps resolve issues quickly reduce maintenance costs and increase operational efficiency AWS IoT Greengrass AWS IoT Greengrass seamlessly extends AWS to devices so they can act locally on the data they generate while still using the cloud for management analytics and durable storage With AWS IoT Greengrass connected devices can run AWS Lambda functions execute predictions based on machine learning models keep device data in sync and communicate with other devices securely – even when not connected to the Internet With AWS IoT Greengrass you can use familiar languages and programming models to create and test your device software in the cloud and then deploy it to your devices AWS IoT Greengrass can be programmed to filter device data and only transmit necessary information back to the cloud You can also connect to thirdparty applications onpremises software and AWS services outofthebox with AWS IoT Greengrass Connectors Connectors also jumpstart device onboarding with prebuilt protocol adapter integrations and allow you to streamline authentication via integration with AWS Secrets Manager AWS IoT SiteWise AWS IoT SiteWise is a managed service that makes it easy to collect store organize and monitor data from industrial equipment at scale to help you make better datadriven decisions You can use AWS IoT SiteWise to monitor operations across facilities quickly compute common industrial performance 37Overview of Amazon Web Services AWS Whitepaper AWS IoT Things Graph metrics and create applications that analyze 
industrial equipment data to prevent costly equipment issues and reduce gaps in production This allows you to collect data consistently across devices identify issues with remote monitoring more quickly and improve multisite processes with centralized data Today getting performance metrics from industrial equipment is challenging because data is often locked into proprietary onpremises data stores and typically requires specialized expertise to retrieve and place in a format that is useful for analysis AWS IoT SiteWise simplifies this process by providing software running on a gateway that resides in your facilities and automates the process of collecting and organizing industrial equipment data This gateway securely connects to your onpremises data servers collects data and sends the data to the AWS Cloud AWS IoT SiteWise also provides interfaces for collecting data from modern industrial applications through MQTT messages or APIs You can use AWS IoT SiteWise to model your physical assets processes and facilities quickly compute common industrial performance metrics and create fully managed web applications to help analyze industrial equipment data reduce costs and make faster decisions With AWS IoT SiteWise you can focus on understanding and optimizing your operations rather than building costly inhouse data collection and management applications AWS IoT Things Graph AWS IoT Things Graph is a service that makes it easy to visually connect different devices and web services to build IoT applications IoT applications are being built today using a variety of devices and web services to automate tasks for a wide range of use cases such as smart homes industrial automation and energy management Because there aren't any widely adopted standards it's difficult today for developers to get devices from multiple manufacturers to connect to each other as well as with web services This forces developers to write lots of code to wire together all of the devices and web services they need for their IoT application AWS IoT Things Graph provides a visual draganddrop interface for connecting and coordinating devices and web services so you can build IoT applications quickly For example in a commercial agriculture application you can define interactions between humidity temperature and sprinkler sensors with weather data services in the cloud to automate watering You represent devices and services using prebuilt reusable components called models that hide lowlevel details such as protocols and interfaces and are easy to integrate to create sophisticated workflows You can get started with AWS IoT Things Graph using these prebuilt models for popular device types such as switches and programmable logic controllers (PLCs) or create your own custom model using a GraphQLbased schema modeling language and deploy your IoT application to AWS IoT Greengrass enabled devices such as cameras cable settop boxes or robotic arms in just a few clicks IoT Greengrass is software that provides local compute and secure cloud connectivity so devices can respond quickly to local events even without internet connectivity and runs on a huge range of devices from a Raspberry Pi to a serverlevel appliance IoT Things Graph applications run on IoT Greengrassenabled devices AWS Partner Device Catalog The AWS Partner Device Catalog helps you find devices and hardware to help you explore build and go to market with your IoT solutions Search for and find hardware that works with AWS including development kits and embedded systems to 
build new devices as well as offtheshelfdevices such as gateways edge servers sensors and cameras for immediate IoT project integration The choice of AWS enabled hardware from our curated catalog of devices from APN partners can help make the rollout of your IoT projects easier All devices listed in the AWS Partner Device Catalog are also available for purchase from our partners to get you started quickly FreeRTOS FreeRTOS is an operating system for microcontrollers that makes small lowpower edge devices easy to program deploy secure connect and manage FreeRTOS extends the FreeRTOS kernel a popular 38Overview of Amazon Web Services AWS Whitepaper Machine Learning open source operating system for microcontrollers with software libraries that make it easy to securely connect your small lowpower devices to AWS cloud services like AWS IoT Core or to more powerful edge devices running AWS IoT Greengrass A microcontroller (MCU) is a single chip containing a simple processor that can be found in many devices including appliances sensors fitness trackers industrial automation and automobiles Many of these small devices could benefit from connecting to the cloud or locally to other devices For example smart electricity meters need to connect to the cloud to report on usage and building security systems need to communicate locally so that a door will unlock when you badge in Microcontrollers have limited compute power and memory capacity and typically perform simple functional tasks Microcontrollers frequently run operating systems that do not have builtin functionality to connect to local networks or the cloud making IoT applications a challenge FreeRTOS helps solve this problem by providing both the core operating system (to run the edge device) as well as software libraries that make it easy to securely connect to the cloud (or other edge devices) so you can collect data from them for IoT applications and take action Machine Learning Topics •Amazon Augmented AI (p 40) •Amazon CodeGuru (p 40) •Amazon Comprehend (p 40) •Amazon DevOps Guru (p 40) •Amazon Elastic Inference (p 41) •Amazon Forecast (p 41) •Amazon Fraud Detector (p 42) •Amazon HealthLake (p 42) •Amazon Kendra (p 42) •Amazon Lex (p 42) •Amazon Lookout for Equipment (p 43) •Amazon Lookout for Metrics (p 43) •Amazon Lookout for Vision (p 43) •Amazon Monitron (p 43) •Amazon Personalize (p 44) •Amazon Polly (p 44) •Amazon Rekognition (p 44) •Amazon SageMaker (p 45) •Amazon SageMaker Ground Truth (p 45) •Amazon Textract (p 46) •Amazon Transcribe (p 46) •Amazon Translate (p 46) •Apache MXNet on AWS (p 46) •AWS Deep Learning AMIs (p 47) •AWS DeepComposer (p 47) •AWS DeepLens (p 47) •AWS DeepRacer (p 47) •AWS Inferentia (p 47) •TensorFlow on AWS (p 48) 39Overview of Amazon Web Services AWS Whitepaper Amazon Augmented AI Amazon Augmented AI Amazon Augmented AI (Amazon A2I) is a machine learning service which makes it easy to build the workflows required for human review Amazon A2I brings human review to all developers removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers whether it runs on AWS or not Amazon CodeGuru Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve code quality and identify an application’s most expensive lines of code Integrate CodeGuru into your existing software development workflow to automate code reviews during application development and continuously monitor application's performance in production and 
provide recommendations and visual clues on how to improve code quality application performance and reduce overall cost CodeGuru Reviewer uses machine learning and automated reasoning to identify critical issues security vulnerabilities and hardtofind bugs during application development and provides recommendations to improve code quality CodeGuru Profiler helps developers find an application’s most expensive lines of code by helping them understand the runtime behavior of their applications identify and remove code inefficiencies improve performance and significantly decrease compute costs Amazon Comprehend Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text No machine learning experience required There is a treasure trove of potential sitting in your unstructured data Customer emails support tickets product reviews social media even advertising copy represents insights into customer sentiment that can be put to work for your business The question is how to get at it? As it turns out Machine learning is particularly good at accurately identifying specific items of interest inside vast swathes of text (such as finding company names in analyst reports) and can learn the sentiment hidden inside language (identifying negative reviews or positive customer interactions with customer service agents) at almost limitless scale Amazon Comprehend uses machine learning to help you uncover the insights and relationships in your unstructured data The service identifies the language of the text; extracts key phrases places people brands or events; understands how positive or negative the text is; analyzes text using tokenization and parts of speech; and automatically organizes a collection of text files by topic You can also use AutoML capabilities in Amazon Comprehend to build a custom set of entities or text classification models that are tailored uniquely to your organization’s needs For extracting complex medical information from unstructured text you can use Amazon Comprehend Medical The service can identify medical information such as medical conditions medications dosages strengths and frequencies from a variety of sources like doctor’s notes clinical trial reports and patient health records Amazon Comprehend Medical also identifies the relationship among the extracted medication and test treatment and procedure information for easier analysis For example the service identifies a particular dosage strength and frequency related to a specific medication from unstructured clinical notes Amazon DevOps Guru Amazon DevOps Guru is a Machine Learning (ML) powered service that makes it easy to improve an application’s operational performance and availability DevOps Guru detects behaviors that deviate from normal operating patterns so you can identify operational issues long before they impact your customers 40Overview of Amazon Web Services AWS Whitepaper Amazon Elastic Inference DevOps Guru uses machine learning models informed by years of Amazoncom and AWS operational excellence to identify anomalous application behavior (eg increased latency error rates resource constraints etc) and surface critical issues that could cause potential outages or service disruptions When DevOps Guru identifies a critical issue it automatically sends an alert and provides a summary of related anomalies the likely root cause and context about when and where the issue occurred When possible DevOps Guru also provides recommendations on how to 
remediate the issue DevOps Guru automatically ingests operational data from your AWS applications and provides a single dashboard to visualize issues in your operational data You can get started with DevOps Guru by selecting coverage from your CloudFormation stacks or your AWS account to improve application availability and reliability with no manual setup or machine learning expertise Amazon Elastic Inference Amazon Elastic Inference allows you to attach lowcost GPUpowered acceleration to Amazon EC2 and Amazon SageMaker instances to reduce the cost of running deep learning inference by up to 75% Amazon Elastic Inference supports TensorFlow Apache MXNet PyTorch and ONNX models In most deep learning applications making predictions using a trained model—a process called inference —can drive as much as 90% of the compute costs of the application due to two factors First standalone GPU instances are designed for model training and are typically oversized for inference While training jobs batch process hundreds of data samples in parallel most inference happens on a single input in real time that consumes only a small amount of GPU compute Even at peak load a GPU's compute capacity may not be fully utilized which is wasteful and costly Second different models need different amounts of GPU CPU and memory resources Selecting a GPU instance type that is big enough to satisfy the requirements of the least used resource often results in underutilization of the other resources and high costs Amazon Elastic Inference solves these problems by allowing you to attach just the right amount of GPUpowered inference acceleration to any EC2 or SageMaker instance type with no code changes With Amazon Elastic Inference you can now choose the instance type that is best suited to the overall CPU and memory needs of your application and then separately configure the amount of inference acceleration that you need to use resources efficiently and to reduce the cost of running inference Amazon Forecast Amazon Forecast is a fully managed service that uses machine learning to deliver highly accurate forecasts Companies today use everything from simple spreadsheets to complex financial planning software to attempt to accurately forecast future business outcomes such as product demand resource needs or financial performance These tools build forecasts by looking at a historical series of data which is called time series data For example such tools may try to predict the future sales of a raincoat by looking only at its previous sales data with the underlying assumption that the future is determined by the past This approach can struggle to produce accurate forecasts for large sets of data that have irregular trends Also it fails to easily combine data series that change over time (such as price discounts web traffic and number of employees) with relevant independent variables like product features and store locations Based on the same technology used at Amazoncom Amazon Forecast uses machine learning to combine time series data with additional variables to build forecasts Amazon Forecast requires no machine learning experience to get started You only need to provide historical data plus any additional data that you believe may impact your forecasts For example the demand for a particular color of a shirt may change with the seasons and store location This complex relationship is hard to determine on its own but machine learning is ideally suited to recognize it Once you provide your data Amazon Forecast will 
automatically examine it, identify what is meaningful, and produce a forecasting model capable of making predictions that are up to 50% more accurate than looking at time series data alone. Amazon Forecast is a fully managed service, so there are no servers to provision and no machine learning models to build, train, or deploy. You pay only for what you use, and there are no minimum fees and no upfront commitments.

Amazon Fraud Detector

Amazon Fraud Detector is a fully managed service that uses machine learning (ML) and more than 20 years of fraud detection expertise from Amazon to identify potentially fraudulent activity so customers can catch more online fraud faster. Amazon Fraud Detector automates the time-consuming and expensive steps to build, train, and deploy an ML model for fraud detection, making it easier for customers to leverage the technology. Amazon Fraud Detector customizes each model it creates to a customer's own dataset, making the accuracy of models higher than current one-size-fits-all ML solutions. And because you pay only for what you use, you avoid large upfront expenses.

Amazon HealthLake

Amazon HealthLake is a HIPAA-eligible service that healthcare providers, health insurance companies, and pharmaceutical companies can use to store, transform, query, and analyze large-scale health data. Health data is frequently incomplete and inconsistent. It's also often unstructured, with information contained in clinical notes, lab reports, insurance claims, medical images, recorded conversations, and time-series data (for example, heart ECG or brain EEG traces). Healthcare providers can use HealthLake to store, transform, query, and analyze data in the AWS Cloud. Using the HealthLake integrated medical natural language processing (NLP) capabilities, you can analyze unstructured clinical text from diverse sources. HealthLake transforms unstructured data using natural language processing models and provides powerful query and search capabilities. You can use HealthLake to organize, index, and structure patient information in a secure, compliant, and auditable manner.

Amazon Kendra

Amazon Kendra is an intelligent search service powered by machine learning. Kendra reimagines enterprise search for your websites and applications so your employees and customers can easily find the content they are looking for, even when it's scattered across multiple locations and content repositories within your organization. Using Amazon Kendra, you can stop searching through troves of unstructured data and discover the right answers to your questions when you need them. Amazon Kendra is a fully managed service, so there are no servers to provision and no machine learning models to build, train, or deploy.
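To give a feel for how an application would consume Kendra, here is a minimal Python (boto3) sketch that submits a natural-language question to an existing index. The index ID and question are placeholders, not values taken from this paper.

```python
import boto3

kendra = boto3.client("kendra")

# Query an existing Kendra index (the index ID below is a placeholder)
response = kendra.query(
    IndexId="11111111-2222-3333-4444-555555555555",
    QueryText="How do I rotate my access keys?",
)

# Print the top results: document titles and the excerpts Kendra matched
for item in response["ResultItems"][:3]:
    title = item.get("DocumentTitle", {}).get("Text", "(untitled)")
    excerpt = item.get("DocumentExcerpt", {}).get("Text", "")
    print(f"{item['Type']}: {title}\n  {excerpt}\n")
```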
Amazon Lex

Amazon Lex is a service for building conversational interfaces into any application using voice and text. Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions. With Amazon Lex, the same deep learning technologies that power Amazon Alexa are now available to any developer, enabling you to quickly and easily build sophisticated, natural language conversational bots ("chatbots"). Speech recognition and natural language understanding are some of the most challenging problems to solve in computer science, requiring sophisticated deep learning algorithms to be trained on massive amounts of data and infrastructure. Amazon Lex democratizes these deep learning technologies by putting the power of Alexa within reach of all developers. Harnessing these technologies, Amazon Lex enables you to define entirely new categories of products made possible through conversational interfaces.

Amazon Lookout for Equipment

Amazon Lookout for Equipment analyzes the data from the sensors on your equipment (for example, pressure in a generator, flow rate of a compressor, revolutions per minute of fans) to automatically train a machine learning model based on just your data, for your equipment, with no ML expertise required. Lookout for Equipment uses your unique ML model to analyze incoming sensor data in real time and accurately identify early warning signs that could lead to machine failures. This means you can detect equipment abnormalities with speed and precision, quickly diagnose issues, take action to reduce expensive downtime, and reduce false alerts.

Amazon Lookout for Metrics

Amazon Lookout for Metrics uses machine learning (ML) to automatically detect and diagnose anomalies (that is, outliers from the norm) in business and operational data, such as a sudden dip in sales revenue or customer acquisition rates. In a couple of clicks, you can connect Amazon Lookout for Metrics to popular data stores like Amazon S3, Amazon Redshift, and Amazon Relational Database Service (RDS), as well as third-party SaaS applications such as Salesforce, ServiceNow, Zendesk, and Marketo, and start monitoring metrics that are important to your business. Amazon Lookout for Metrics automatically inspects and prepares the data from these sources to detect anomalies with greater speed and accuracy than traditional methods used for anomaly detection. You can also provide feedback on detected anomalies to tune the results and improve accuracy over time. Amazon Lookout for Metrics makes it easy to diagnose detected anomalies by grouping together anomalies that are related to the same event and sending an alert that includes a summary of the potential root cause. It also ranks anomalies in order of severity so that you can prioritize your attention on what matters the most to your business.

Amazon Lookout for Vision

Amazon Lookout for Vision is a machine learning (ML) service that spots defects and anomalies in visual representations using computer vision (CV). With Amazon Lookout for Vision, manufacturing companies can increase quality and reduce operational costs by quickly identifying differences in images of objects at scale. For example, Amazon Lookout for Vision can be used to identify missing components in products, damage to vehicles or structures, irregularities in production lines, minuscule defects in silicon wafers, and other similar problems. Amazon Lookout for Vision uses ML to see and understand images from any camera as a person would, but with an even higher degree of accuracy and at a much larger scale. Amazon Lookout for Vision allows customers to eliminate the need for costly and inconsistent manual inspection, while improving quality control, defect and damage assessment, and compliance. In minutes, you can begin using Amazon Lookout for Vision to automate inspection of images and objects, with no machine learning expertise required.

Amazon Monitron

Amazon Monitron is an end-to-end system that uses machine learning (ML) to detect abnormal behavior in industrial machinery, enabling you to implement predictive
maintenance and reduce unplanned downtime. Installing sensors and the necessary infrastructure for data connectivity, storage, analytics, and alerting are foundational elements for enabling predictive maintenance. However, in order to make it work, companies have historically needed skilled technicians and data scientists to piece together a complex solution from scratch. This included identifying and procuring the right type of sensors for their use cases and connecting them together with an IoT gateway (a device that aggregates and transmits data). As a result, few companies have been able to successfully implement predictive maintenance. Amazon Monitron includes sensors to capture vibration and temperature data from equipment, a gateway device to securely transfer data to AWS, the Amazon Monitron service that analyzes the data for abnormal machine patterns using machine learning, and a companion mobile app to set up the devices and receive reports on operating behavior and alerts to potential failures in your machinery. You can start monitoring equipment health in minutes, without any development work or ML experience required, and enable predictive maintenance with the same technology used to monitor equipment in Amazon Fulfillment Centers.

Amazon Personalize

Amazon Personalize is a machine learning service that makes it easy for developers to create individualized recommendations for customers using their applications. Machine learning is being increasingly used to improve customer engagement by powering personalized product and content recommendations, tailored search results, and targeted marketing promotions. However, developing the machine learning capabilities necessary to produce these sophisticated recommendation systems has been beyond the reach of most organizations today, due to the complexity of developing machine learning functionality. Amazon Personalize allows developers with no prior machine learning experience to easily build sophisticated personalization capabilities into their applications, using machine learning technology perfected from years of use on Amazon.com. With Amazon Personalize, you provide an activity stream from your application (page views, signups, purchases, and so forth) as well as an inventory of the items you want to recommend, such as articles, products, videos, or music. You can also choose to provide Amazon Personalize with additional demographic information from your users, such as age or geographic location. Amazon Personalize will process and examine the data, identify what is meaningful, select the right algorithms, and train and optimize a personalization model that is customized for your data. All data analyzed by Amazon Personalize is kept private and secure, and only used for your customized recommendations. You can start serving your personalized predictions via a simple API call from inside the virtual private cloud that the service maintains. You pay only for what you use, and there are no minimum fees and no upfront commitments. Amazon Personalize is like having your own Amazon.com machine learning personalization team at your disposal 24 hours a day.

Amazon Polly

Amazon Polly is a service that turns text into lifelike speech. Polly lets you create applications that talk, enabling you to build entirely new categories of speech-enabled products. Polly is an Amazon artificial intelligence (AI) service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice. Polly includes a wide selection of lifelike voices spread across dozens of languages, so you can select the ideal voice and build speech-enabled applications that work in many different countries. Amazon Polly delivers the consistently fast response times required to support real-time, interactive dialog. You can cache and save Polly's speech audio to replay offline or redistribute. And Polly is easy to use. You simply send the text you want converted into speech to the Polly API, and Polly immediately returns the audio stream to your application so your application can play it directly or store it in a standard audio file format, such as MP3. With Polly, you only pay for the number of characters you convert to speech, and you can save and replay Polly's generated speech. Polly's low cost per character converted, and lack of restrictions on storage and reuse of voice output, make it a cost-effective way to enable text-to-speech everywhere.
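The "send text, get an audio stream back" flow described above maps to a single API call. The following Python (boto3) sketch synthesizes a sentence to an MP3 file; the voice ID and file name are arbitrary choices for illustration.

```python
import boto3

polly = boto3.client("polly")

# Ask Polly to synthesize speech and return an MP3 audio stream
response = polly.synthesize_speech(
    Text="Hello from Amazon Polly. This audio was generated from plain text.",
    OutputFormat="mp3",
    VoiceId="Joanna",  # one of Polly's built-in voices
)

# The audio arrives as a streaming body; save it to a standard MP3 file
with open("hello.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())
```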
Amazon Rekognition

Amazon Rekognition makes it easy to add image and video analysis to your applications, using proven, highly scalable deep learning technology that requires no machine learning expertise to use. With Amazon Rekognition you can identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases. With Amazon Rekognition Custom Labels, you can identify the objects and scenes in images that are specific to your business needs. For example, you can build a model to classify specific machine parts on your assembly line or to detect unhealthy plants. Amazon Rekognition Custom Labels takes care of the heavy lifting of model development for you, so no machine learning experience is required. You simply need to supply images of objects or scenes you want to identify, and the service handles the rest.
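As an illustration of the object and scene detection described above, this Python (boto3) sketch runs label detection against an image stored in Amazon S3. The bucket name and object key are placeholders.

```python
import boto3

rekognition = boto3.client("rekognition")

# Detect labels (objects, scenes, activities) in an image stored in S3.
# Bucket and key are placeholders for your own image location.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "photos/factory-floor.jpg"}},
    MaxLabels=10,
    MinConfidence=80.0,
)

for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```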
Amazon SageMaker

Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. SageMaker removes all the barriers that typically slow down developers who want to use machine learning. Machine learning often feels a lot harder than it should be to most developers, because the process to build and train models, and then deploy them into production, is too complicated and too slow. First, you need to collect and prepare your training data to discover which elements of your data set are important. Then, you need to select which algorithm and framework you'll use. After deciding on your approach, you need to teach the model how to make predictions by training, which requires a lot of compute. Then, you need to tune the model so it delivers the best possible predictions, which is often a tedious and manual effort. After you've developed a fully trained model, you need to integrate the model with your application and deploy this application on infrastructure that will scale. All of this takes a lot of specialized expertise, access to large amounts of compute and storage, and a lot of time to experiment and optimize every part of the process. In the end, it's not a surprise that the whole thing feels out of reach for most developers. SageMaker removes the complexity that holds back developer success with each of these steps. SageMaker includes modules that can be used together or independently to build, train, and deploy your machine learning models.
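To make the build, train, and deploy workflow concrete, here is a hedged sketch using the SageMaker Python SDK with the built-in XGBoost container. The S3 bucket, IAM role ARN, framework version, and hyperparameters are placeholders and would need to match your own account and data; this is not the only way to use SageMaker, just one common path.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

# Look up the built-in XGBoost container image for this Region
# (the version string is an assumption; use one available in your Region)
image_uri = sagemaker.image_uris.retrieve(
    "xgboost", region=session.boto_region_name, version="1.5-1"
)

# Configure a training job: instance type, count, and output location
estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-example-bucket/models/",
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="reg:squarederror", num_round=100)

# Train against data already staged in S3, then deploy to a real-time endpoint
estimator.fit({"train": "s3://my-example-bucket/train/"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```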
Amazon SageMaker Ground Truth

Amazon SageMaker Ground Truth helps you build highly accurate training datasets for machine learning quickly. SageMaker Ground Truth offers easy access to public and private human labelers and provides them with built-in workflows and interfaces for common labeling tasks. Additionally, SageMaker Ground Truth can lower your labeling costs by up to 70% using automatic labeling, which works by training Ground Truth from data labeled by humans so that the service learns to label data independently. Successful machine learning models are built on the shoulders of large volumes of high-quality training data. But the process to create the training data necessary to build these models is often expensive, complicated, and time-consuming. The majority of models created today require a human to manually label data in a way that allows the model to learn how to make correct decisions. For example, building a computer vision system that is reliable enough to identify objects such as traffic lights, stop signs, and pedestrians requires thousands of hours of video recordings that consist of hundreds of millions of video frames. Each one of these frames needs all of the important elements, like the road, other cars, and signage, to be labeled by a human before any work can begin on the model you want to develop. Amazon SageMaker Ground Truth significantly reduces the time and effort required to create datasets for training, to reduce costs. These savings are achieved by using machine learning to automatically label data. The model is able to get progressively better over time by continuously learning from labels created by human labelers. Where the labeling model has high confidence in its results based on what it has learned so far, it will automatically apply labels to the raw data. Where the labeling model has lower confidence in its results, it will pass the data to humans to do the labeling. The human-generated labels are provided back to the labeling model for it to learn from and improve. Over time, SageMaker Ground Truth can label more and more data automatically and substantially speed up the creation of training datasets.

Amazon Textract

Amazon Textract is a service that automatically extracts text and data from scanned documents. Amazon Textract goes beyond simple optical character recognition (OCR) to also identify the contents of fields in forms and information stored in tables. Many companies today extract data from documents and forms through manual data entry that's slow and expensive, or through simple optical character recognition (OCR) software that is difficult to customize. Rules and workflows for each document and form often need to be hardcoded and updated with each change to the form, or when dealing with multiple forms. If the form deviates from the rules, the output is often scrambled and unusable. Amazon Textract overcomes these challenges by using machine learning to instantly "read" virtually any type of document to accurately extract text and data without the need for any manual effort or custom code. With Textract you can quickly automate document workflows, enabling you to process millions of document pages in hours. Once the information is captured, you can take action on it within your business applications to initiate next steps for a loan application or medical claims processing. Additionally, you can create smart search indexes, build automated approval workflows, and better maintain compliance with document archival rules by flagging data that may require redaction.

Amazon Transcribe

Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech. You can also send a live audio stream to Amazon Transcribe and receive a stream of transcripts in real time. Amazon Transcribe can be used for lots of common applications, including the transcription of customer service calls and generating subtitles on audio and video content. The service can transcribe audio files stored in common formats, like WAV and MP3, with time stamps for every word, so that you can easily locate the audio in the original source by searching for the text. Amazon Transcribe is continually learning and improving to keep pace with the evolution of language.

Amazon Translate

Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Neural machine translation is a form of language translation automation that uses deep learning models to deliver more accurate and more natural sounding translation than traditional statistical and rule-based translation algorithms. Amazon Translate allows you to localize content, such as websites and applications, for international users, and to easily translate large volumes of text efficiently.
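Both of the services above are driven by simple API calls. The sketch below (Python, boto3) starts a transcription job for an audio file in S3 and then translates a piece of text into Spanish; the job name, bucket URI, and strings are illustrative placeholders.

```python
import boto3

transcribe = boto3.client("transcribe")
translate = boto3.client("translate")

# Kick off an asynchronous transcription job for an MP3 stored in S3
transcribe.start_transcription_job(
    TranscriptionJobName="support-call-0001",          # placeholder job name
    Media={"MediaFileUri": "s3://my-example-bucket/calls/call-0001.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
)
# The job runs asynchronously; poll get_transcription_job() for the transcript URI.

# Translate a sentence, letting the service detect the source language
result = translate.translate_text(
    Text="Thank you for contacting support. Your case has been resolved.",
    SourceLanguageCode="auto",
    TargetLanguageCode="es",
)
print(result["TranslatedText"])
```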
Apache MXNet on AWS

Apache MXNet on AWS is a fast and scalable training and inference framework with an easy-to-use, concise API for machine learning. MXNet includes the Gluon interface, which allows developers of all skill levels to get started with deep learning on the cloud, on edge devices, and on mobile apps. In just a few lines of Gluon code, you can build linear regression, convolutional networks, and recurrent LSTMs for object detection, speech recognition, recommendation, and personalization. You can get started with MXNet on AWS with a fully managed experience using SageMaker, a platform to build, train, and deploy machine learning models at scale. Or you can use the AWS Deep Learning AMIs to build custom environments and workflows with MXNet as well as other frameworks, including TensorFlow, PyTorch, Chainer, Keras, Caffe, Caffe2, and Microsoft Cognitive Toolkit.

AWS Deep Learning AMIs

The AWS Deep Learning AMIs provide machine learning practitioners and researchers with the infrastructure and tools to accelerate deep learning in the cloud, at any scale. You can quickly launch Amazon EC2 instances pre-installed with popular deep learning frameworks, such as Apache MXNet and Gluon, TensorFlow, Microsoft Cognitive Toolkit, Caffe, Caffe2, Theano, Torch, PyTorch, Chainer, and Keras, to train sophisticated, custom AI models, experiment with new algorithms, or learn new skills and techniques.

AWS DeepComposer

AWS DeepComposer is the world's first musical keyboard powered by machine learning to enable developers of all skill levels to learn generative AI while creating original music outputs. DeepComposer consists of a USB keyboard that connects to the developer's computer, and the DeepComposer service, accessed through the AWS Management Console. DeepComposer includes tutorials, sample code, and training data that can be used to start building generative models.

AWS DeepLens

AWS DeepLens helps put deep learning in the hands of developers, literally, with a fully programmable video camera, tutorials, code, and pre-trained models designed to expand deep learning skills.

AWS DeepRacer

AWS DeepRacer is a 1/18th scale race car which gives you an interesting and fun way to get started with reinforcement learning (RL). RL is an advanced machine learning (ML) technique which takes a very different approach to training models than other machine learning methods. Its superpower is that it learns very complex behaviors without requiring any labeled training data, and can make short-term decisions while optimizing for a longer-term goal. With AWS DeepRacer, you now have a way to get hands-on with RL, experiment, and learn through autonomous driving. You can get started with the virtual car and tracks in the cloud-based 3D racing simulator, and for a real-world experience, you can deploy your trained models onto AWS DeepRacer and race your friends, or take part in the global AWS DeepRacer League. Developers, the race is on.

AWS Inferentia

AWS Inferentia is a machine learning inference chip designed to deliver high performance at low cost. AWS Inferentia will support the TensorFlow, Apache MXNet, and PyTorch deep learning frameworks, as well as models that use the ONNX format. Making predictions using a trained machine learning model (a process called inference) can drive as much as 90% of the compute costs of the application. Using Amazon Elastic Inference, developers can reduce inference costs by up to 75% by attaching GPU-powered inference acceleration to Amazon EC2 and SageMaker instances. However, some inference workloads require an entire GPU or have extremely low latency requirements. Solving this challenge at low cost requires a dedicated inference chip. AWS Inferentia provides high throughput, low latency inference performance at an extremely low cost. Each chip provides hundreds of TOPS (tera operations per second) of inference throughput to allow complex models to make fast predictions. For even more performance, multiple AWS Inferentia chips can be used together to drive thousands of TOPS of throughput. AWS Inferentia will be available for use with SageMaker, Amazon EC2, and Amazon Elastic Inference.

TensorFlow on AWS

TensorFlow enables developers to quickly and easily get started with deep learning in the cloud. The framework has broad support in the industry and has become a popular choice for deep learning research and application development, particularly in areas such as computer vision, natural language understanding, and speech translation. You can get started on AWS with a fully managed TensorFlow experience with SageMaker, a platform to build, train, and deploy machine learning models at scale. Or you can use the AWS Deep Learning AMIs to build custom environments and workflows with TensorFlow and other popular frameworks, including Apache MXNet, PyTorch, Caffe, Caffe2, Chainer, Gluon, Keras, and Microsoft Cognitive Toolkit.

Management and Governance

Topics
•Amazon CloudWatch
•AWS Auto Scaling
•AWS Chatbot
•AWS Compute Optimizer
•AWS Control Tower
•AWS CloudFormation
•AWS CloudTrail
•AWS Config
•AWS Launch Wizard
•AWS Organizations
•AWS OpsWorks
•AWS Proton
•AWS Service Catalog
•AWS Systems Manager
•AWS Trusted Advisor
•AWS Personal Health Dashboard
•AWS Managed Services
•AWS Console Mobile Application
•AWS License Manager
•AWS Well-Architected Tool
Amazon CloudWatch

Amazon CloudWatch is a monitoring and management service built for developers, system operators, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. You can use CloudWatch to set high-resolution alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to optimize your applications and ensure they are running smoothly.
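As a small example of the alarms and automated actions mentioned above, the following Python (boto3) sketch creates an alarm on an EC2 instance's CPU utilization and points it at an SNS topic. The instance ID and topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU utilization of one instance exceeds 80% for 10 minutes.
# The instance ID and SNS topic ARN below are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-server",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                  # evaluate in 5-minute periods
    EvaluationPeriods=2,         # two consecutive periods above the threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```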
AWS Auto Scaling

AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it's easy to set up application scaling for multiple resources across multiple services in minutes. The service provides a simple, powerful user interface that lets you build scaling plans for resources, including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. AWS Auto Scaling makes scaling simple, with recommendations that allow you to optimize performance, costs, or balance between them. If you're already using Amazon EC2 Auto Scaling to dynamically scale your Amazon EC2 instances, you can now combine it with AWS Auto Scaling to scale additional resources for other AWS services. With AWS Auto Scaling, your applications always have the right resources at the right time.

AWS Chatbot

AWS Chatbot is an interactive agent that makes it easy to monitor and interact with your AWS resources in your Slack channels and Amazon Chime chat rooms. With AWS Chatbot, you can receive alerts, run commands to return diagnostic information, invoke AWS Lambda functions, and create AWS support cases. AWS Chatbot manages the integration between AWS services and your Slack channels or Amazon Chime chat rooms, helping you to get started with ChatOps fast. With just a few clicks, you can start receiving notifications and issuing commands in your chosen channels or chat rooms, so your team doesn't have to switch contexts to collaborate. AWS Chatbot makes it easier for your team to stay updated, collaborate, and respond faster to operational events, security findings, CI/CD workflows, budget, and other alerts for applications running in your AWS accounts.

AWS Compute Optimizer

AWS Compute Optimizer recommends optimal AWS resources for your workloads to reduce costs and improve performance, by using machine learning to analyze historical utilization metrics. Over-provisioning resources can lead to unnecessary infrastructure cost, and under-provisioning resources can lead to poor application performance. Compute Optimizer helps you choose optimal configurations for three types of AWS resources: Amazon EC2 instances, Amazon EBS volumes, and AWS Lambda functions, based on your utilization data. By applying the knowledge drawn from Amazon's own experience running diverse workloads in the cloud, Compute Optimizer identifies workload patterns and recommends optimal AWS resources. Compute Optimizer analyzes the configuration and resource utilization of your workload to identify dozens of defining characteristics, for example, if a workload is CPU-intensive, if it exhibits a daily pattern, or if a workload accesses local storage frequently. The service processes these characteristics and identifies the hardware resource required by the workload. Compute Optimizer infers how the workload would have performed on various hardware platforms (for example, Amazon EC2 instance types) or using different configurations (for example, Amazon EBS volume IOPS settings and AWS Lambda function memory sizes) to offer recommendations. Compute Optimizer is available to you at no additional charge. To get started, you can opt in to the service in the AWS Compute Optimizer console.

AWS Control Tower

AWS Control Tower automates the setup of a baseline environment, or landing zone, that is a secure, well-architected, multi-account AWS environment. The configuration of the landing zone is based on best practices that have been established by working with thousands of enterprise customers to create a secure environment that makes it easier to govern AWS workloads with rules for security, operations, and compliance. As enterprises migrate to AWS, they typically have a large number of applications and distributed teams. They often want to create multiple accounts to allow their teams to work independently, while still maintaining a consistent level of security and compliance. In addition, they use AWS's management and security services, like AWS Organizations, AWS Service Catalog, and AWS Config, that provide very granular controls over their workloads. They want to maintain this control, but they also want a way to centrally govern and enforce the best use of AWS services across all the accounts in their environment. Control Tower automates the setup of their landing zone and configures AWS management and security services based on established best practices in a secure, compliant, multi-account environment. Distributed teams are able to provision new AWS accounts quickly, while central teams have the peace of mind knowing that new accounts are aligned with centrally established, company-wide compliance policies. This gives you control over your environment, without sacrificing the speed and agility AWS provides your development teams.

AWS CloudFormation

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use the AWS CloudFormation sample templates or create your own templates to describe your AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work; CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer.
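To show what describing resources in a template looks like in practice, here is a hedged Python (boto3) sketch that creates a one-resource stack from an inline template. The stack and bucket names are placeholders, and the bucket name would need to be globally unique.

```python
import json
import boto3

cfn = boto3.client("cloudformation")

# A deliberately tiny template: a single S3 bucket managed as a stack
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LogsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-example-logs-bucket-123456"},
        }
    },
}

# Create the stack; CloudFormation works out ordering and dependencies
cfn.create_stack(
    StackName="example-logging-stack",
    TemplateBody=json.dumps(template),
)

# Block until the stack reaches CREATE_COMPLETE (or the call raises on failure)
cfn.get_waiter("stack_create_complete").wait(StackName="example-logging-stack")
```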
AWS CloudTrail

AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. With CloudTrail, you can get a history of AWS API calls for your account, including API calls made using the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.

AWS Config

AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. The Config Rules feature enables you to create rules that automatically check the configuration of AWS resources recorded by AWS Config. With AWS Config, you can discover existing and deleted AWS resources, determine your overall compliance against rules, and dive into configuration details of a resource at any point in time. These capabilities enable compliance auditing, security analysis, resource change tracking, and troubleshooting.
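The audit and compliance queries described in these two sections can be made directly from the SDK. A small Python (boto3) sketch, assuming CloudTrail and AWS Config are already enabled in the account:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")
config = boto3.client("config")

# CloudTrail: look up recent RunInstances calls (who launched EC2 instances, and when)
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
    MaxResults=10,
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"))

# AWS Config: list Config rules that currently have noncompliant resources
noncompliant = config.describe_compliance_by_config_rule(
    ComplianceTypes=["NON_COMPLIANT"],
)
for item in noncompliant["ComplianceByConfigRules"]:
    print("Noncompliant rule:", item["ConfigRuleName"])
```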
AWS Launch Wizard

AWS Launch Wizard offers a guided way of sizing, configuring, and deploying AWS resources for third-party applications, such as Microsoft SQL Server Always On and HANA-based SAP systems, without the need to manually identify and provision individual AWS resources. To start, you input your application requirements, including performance, number of nodes, and connectivity, on the service console. Launch Wizard then identifies the right AWS resources, such as EC2 instances and EBS volumes, to deploy and run your application. Launch Wizard provides an estimated cost of deployment, and lets you modify your resources to instantly view an updated cost assessment. Once you approve the AWS resources, Launch Wizard automatically provisions and configures the selected resources to create a fully functioning, production-ready application. AWS Launch Wizard also creates CloudFormation templates that can serve as a baseline to accelerate subsequent deployments. Launch Wizard is available to you at no additional charge. You only pay for the AWS resources that are provisioned for running your solution.

AWS Organizations

AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. Using AWS Organizations, you can programmatically create new AWS accounts and allocate resources, group accounts to organize your workflows, apply policies to accounts or groups for governance, and simplify billing by using a single payment method for all of your accounts. In addition, AWS Organizations is integrated with other AWS services so you can define central configurations, security mechanisms, audit requirements, and resource sharing across accounts in your organization. AWS Organizations is available to all AWS customers at no additional charge.

AWS OpsWorks

AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. OpsWorks has three offerings: AWS OpsWorks for Chef Automate, AWS OpsWorks for Puppet Enterprise, and AWS OpsWorks Stacks.

AWS Proton

AWS Proton is the first fully managed delivery service for container and serverless applications. Platform engineering teams can use AWS Proton to connect and coordinate all the different tools needed for infrastructure provisioning, code deployments, monitoring, and updates. Maintaining hundreds, or sometimes thousands, of microservices with constantly changing infrastructure resources and continuous integration/continuous delivery (CI/CD) configurations is a nearly impossible task for even the most capable platform teams. AWS Proton solves this by giving platform teams the tools they need to manage this complexity and enforce consistent standards, while making it easy for developers to deploy their code using containers and serverless technologies.

AWS Service Catalog

AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

AWS Systems Manager

AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services, and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources. Systems Manager simplifies resource and application management, shortens the time to detect and resolve operational problems, and makes it easy to operate and manage your infrastructure securely at scale. AWS Systems Manager contains the following tools:

•Resource groups: Lets you create a logical group of resources associated with a particular workload, such as different layers of an application stack, or production versus development environments. For example, you can group different layers of an application, such as the front-end web layer and the back-end data layer. Resource groups can be created, updated, or removed programmatically through the API.

•Insights Dashboard: Displays operational data that AWS Systems Manager automatically aggregates for each resource group. Systems Manager eliminates the need for you to navigate across multiple AWS consoles to view your operational data. With Systems Manager you can view API call logs from AWS CloudTrail, resource configuration changes from AWS Config, software inventory, and patch compliance status by resource group. You can also easily integrate your Amazon CloudWatch Dashboards, AWS Trusted Advisor notifications, and AWS Personal Health Dashboard performance and availability alerts into your Systems Manager dashboard. Systems Manager centralizes all relevant operational data, so you can have a clear view of your infrastructure compliance and performance.

•Run Command: Provides a simple way of automating common administrative tasks, like remotely executing shell scripts or PowerShell commands, installing software updates, or making changes to the configuration of OS, software, EC2 instances, and servers in your on-premises data center (see the sketch after this list).

•State Manager: Helps you define and maintain consistent OS configurations, such as firewall settings and anti-malware definitions, to comply with your policies. You can monitor the configuration of a large set of instances, specify a configuration policy for the instances, and automatically apply updates or configuration changes.
•Inventory: Helps you collect and query configuration and inventory information about your instances and the software installed on them. You can gather details about your instances, such as installed applications, DHCP settings, agent detail, and custom items. You can run queries to track and audit your system configurations.

•Maintenance Window: Lets you define a recurring window of time to run administrative and maintenance tasks across your instances. This ensures that installing patches and updates, or making other configuration changes, does not disrupt business-critical operations. This helps improve your application availability.

•Patch Manager: Helps you select and deploy operating system and software patches automatically across large groups of instances. You can define a maintenance window so that patches are applied only during set times that fit your needs. These capabilities help ensure that your software is always up to date and meets your compliance policies.

•Automation: Simplifies common maintenance and deployment tasks, such as updating Amazon Machine Images (AMIs). Use the Automation feature to apply patches, update drivers and agents, or bake applications into your AMI using a streamlined, repeatable, and auditable process.

•Parameter Store: Provides an encrypted location to store important administrative information, such as passwords and database strings. The Parameter Store integrates with AWS KMS to make it easy to encrypt the information you keep in the Parameter Store.

•Distributor: Helps you securely distribute and install software packages, such as software agents. Systems Manager Distributor allows you to centrally store and systematically distribute software packages while you maintain control over versioning. You can use Distributor to create and distribute software packages, and then install them using Systems Manager Run Command and State Manager. Distributor can also use AWS Identity and Access Management (IAM) policies to control who can create or update packages in your account. You can use the existing IAM policy support for Systems Manager Run Command and State Manager to define who can install packages on your hosts.

•Session Manager: Provides a browser-based interactive shell and CLI for managing Windows and Linux EC2 instances, without the need to open inbound ports, manage SSH keys, or use bastion hosts. Administrators can grant and revoke access to instances through a central location by using AWS Identity and Access Management (IAM) policies. This allows you to control which users can access each instance, including the option to provide non-root access to specified users. Once access is provided, you can audit which user accessed an instance and log each command to Amazon S3 or Amazon CloudWatch Logs using AWS CloudTrail.
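As referenced in the Run Command item above, here is a hedged Python (boto3) sketch that runs a PowerShell command on a managed Windows instance and stores a configuration value in Parameter Store. The instance ID, parameter name, and connection string are placeholders, and the target instance must already be registered with the SSM agent.

```python
import boto3

ssm = boto3.client("ssm")

# Run Command: execute a PowerShell command on a managed Windows instance
# (the instance must be registered with Systems Manager)
result = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],            # placeholder instance ID
    DocumentName="AWS-RunPowerShellScript",
    Parameters={"commands": ["Get-Service -Name W32Time"]},
)
print("Command ID:", result["Command"]["CommandId"])

# Parameter Store: keep a database connection string encrypted with AWS KMS
ssm.put_parameter(
    Name="/prod/app/db-connection-string",
    Value="Server=db.example.internal;Database=app;",
    Type="SecureString",
    Overwrite=True,
)
value = ssm.get_parameter(Name="/prod/app/db-connection-string", WithDecryption=True)
print(value["Parameter"]["Value"])
```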
AWS Trusted Advisor

AWS Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment. Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices.

AWS Personal Health Dashboard

AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that might affect you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources. The dashboard displays relevant and timely information to help you manage events in progress, and provides proactive notification to help you plan for scheduled activities. With Personal Health Dashboard, alerts are automatically triggered by changes in the health of AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues.

AWS Managed Services

AWS Managed Services provides ongoing management of your AWS infrastructure so you can focus on your applications. By implementing best practices to maintain your infrastructure, AWS Managed Services helps to reduce your operational overhead and risk. AWS Managed Services automates common activities, such as change requests, monitoring, patch management, security, and backup services, and provides full-lifecycle services to provision, run, and support your infrastructure. Our rigor and controls help to enforce your corporate and security infrastructure policies, and enable you to develop solutions and applications using your preferred development approach. AWS Managed Services improves agility, reduces cost, and unburdens you from infrastructure operations so you can direct resources toward differentiating your business.

AWS Console Mobile Application

The AWS Console Mobile Application lets customers view and manage a select set of resources to support incident response while on the go. The Console Mobile Application allows AWS customers to monitor resources through a dedicated dashboard and view configuration details, metrics, and alarms for select AWS services. The dashboard provides permitted users with a single view of a resource's status, with real-time data on Amazon CloudWatch, Personal Health Dashboard, and AWS Billing and Cost Management. Customers can view ongoing issues and follow through to the relevant CloudWatch alarm screen for a detailed view with graphs and configuration options. In addition, customers can check on the status of specific AWS services, view detailed resource screens, and perform select actions.

AWS License Manager

AWS License Manager makes it easier to manage licenses in AWS and on-premises servers from software vendors such as Microsoft, SAP, Oracle, and IBM. AWS License Manager lets administrators create customized licensing rules that emulate the terms of their licensing agreements, and then enforces these rules when an instance of Amazon EC2 gets launched. Administrators can use these rules to limit licensing violations, such as using more licenses than an agreement stipulates, or reassigning licenses to different servers on a short-term basis. The rules in AWS License Manager enable you to limit a licensing breach by physically stopping the instance from launching or by notifying administrators about the infringement. Administrators gain control and visibility of all their licenses with the AWS License Manager dashboard, and reduce the risk of non-compliance, misreporting, and additional costs due to licensing overages. AWS License Manager integrates with AWS services to simplify the management of licenses across multiple AWS accounts, IT catalogs, and on-premises environments through a single AWS account. License administrators can add rules in AWS Service Catalog, which allows them to create and manage catalogs of IT services that are approved for use on all their AWS accounts. Through seamless integration with AWS Systems Manager and AWS Organizations, administrators can manage licenses across all the AWS accounts in an organization and on-premises environments. AWS Marketplace buyers can also use AWS License Manager to track
bring-your-own-license (BYOL) software obtained from the Marketplace and keep a consolidated view of all their licenses.

AWS Well-Architected Tool

The AWS Well-Architected Tool helps you review the state of your workloads and compares them to the latest AWS architectural best practices. The tool is based on the AWS Well-Architected Framework, developed to help cloud architects build secure, high-performing, resilient, and efficient application infrastructure. This Framework provides a consistent approach for customers and partners to evaluate architectures, has been used in tens of thousands of workload reviews conducted by the AWS solutions architecture team, and provides guidance to help implement designs that scale with application needs over time. To use this free tool, available in the AWS Management Console, just define your workload and answer a set of questions regarding operational excellence, security, reliability, performance efficiency, and cost optimization. The AWS Well-Architected Tool then provides a plan on how to architect for the cloud using established best practices.

Media Services

Topics
•Amazon Elastic Transcoder
•Amazon Interactive Video Service
•Amazon Nimble Studio
•AWS Elemental Appliances & Software
•AWS Elemental MediaConnect
•AWS Elemental MediaConvert
•AWS Elemental MediaLive
•AWS Elemental MediaPackage
•AWS Elemental MediaStore
•AWS Elemental MediaTailor

Amazon Elastic Transcoder

Amazon Elastic Transcoder is media transcoding in the cloud. It is designed to be a highly scalable, easy-to-use, and cost-effective way for developers and businesses to convert (or transcode) media files from their source format into versions that will play back on devices like smartphones, tablets, and PCs.

Amazon Interactive Video Service

Amazon Interactive Video Service (Amazon IVS) is a managed live streaming solution that is quick and easy to set up, and ideal for creating interactive video experiences. Send your live streams to Amazon IVS using streaming software, and the service does everything you need to make low-latency live video available to any viewer around the world, letting you focus on building interactive experiences alongside the live video. You can easily customize and enhance the audience experience through the Amazon IVS player SDK and timed metadata APIs, allowing you to build a more valuable relationship with your viewers on your own websites and applications.

Amazon Nimble Studio

Amazon Nimble Studio empowers creative studios to produce visual effects, animation, and interactive content entirely in the cloud, from storyboard sketch to final deliverable. Rapidly onboard and collaborate with artists globally, and create content faster with access to virtual workstations, high-speed storage, and scalable rendering across AWS's global infrastructure.

AWS Elemental Appliances & Software

AWS Elemental Appliances and Software solutions bring advanced video processing and delivery technologies into your data center, colocation space, or on-premises facility. You can deploy AWS Elemental Appliances and Software to encode, package, and deliver video assets on-premises and seamlessly connect with cloud-based video infrastructure. Designed for easy integration with AWS Cloud media solutions, AWS Elemental Appliances and Software support video workloads that need to remain on-premises to accommodate physical camera and router interfaces, managed network delivery, or network bandwidth constraints.
AWS Elemental Live, Server, and Conductor come in two variants: ready-to-deploy appliances, or AWS-licensed software that you install on your own hardware. AWS Elemental Link is a compact hardware device that sends live video to the cloud for encoding and delivery to viewers.

AWS Elemental MediaConnect

AWS Elemental MediaConnect is a high-quality transport service for live video. Today, broadcasters and content owners rely on satellite networks or fiber connections to send their high-value content into the cloud or to transmit it to partners for distribution. Both satellite and fiber approaches are expensive, require long lead times to set up, and lack the flexibility to adapt to changing requirements. To be more nimble, some customers have tried to use solutions that transmit live video on top of IP infrastructure, but have struggled with reliability and security. Now you can get the reliability and security of satellite and fiber, combined with the flexibility, agility, and economics of IP-based networks, using AWS Elemental MediaConnect. MediaConnect enables you to build mission-critical live video workflows in a fraction of the time and cost of satellite or fiber services. You can use MediaConnect to ingest live video from a remote event site (like a stadium), share video with a partner (like a cable TV distributor), or replicate a video stream for processing (like an over-the-top service). MediaConnect combines reliable video transport, highly secure stream sharing, and real-time network traffic and video monitoring that allow you to focus on your content, not your transport infrastructure.

AWS Elemental MediaConvert

AWS Elemental MediaConvert is a file-based video transcoding service with broadcast-grade features. It allows you to easily create video-on-demand (VOD) content for broadcast and multi-screen delivery at scale. The service combines advanced video and audio capabilities with a simple web services interface and pay-as-you-go pricing. With AWS Elemental MediaConvert, you can focus on delivering compelling media experiences without having to worry about the complexity of building and operating your own video processing infrastructure.

AWS Elemental MediaLive

AWS Elemental MediaLive is a broadcast-grade live video processing service. It lets you create high-quality video streams for delivery to broadcast televisions and internet-connected multi-screen devices, like connected TVs, tablets, smart phones, and set-top boxes. The service works by encoding your live video streams in real time, taking a larger-sized live video source and compressing it into smaller versions for distribution to your viewers. With AWS Elemental MediaLive, you can easily set up streams for both live events and 24x7 channels with advanced broadcasting features, high availability, and pay-as-you-go pricing. AWS Elemental MediaLive lets you focus on creating compelling live video experiences for your viewers without the complexity of building and operating broadcast-grade video processing infrastructure.

AWS Elemental MediaPackage

AWS Elemental MediaPackage reliably prepares and protects your video for delivery over the Internet. From a single video input, AWS Elemental MediaPackage creates video streams formatted to play on connected TVs, mobile phones, computers, tablets, and game consoles. It makes it easy to implement popular video features for viewers (start-over, pause, rewind, and so on), like those commonly found on DVRs. AWS Elemental MediaPackage can also protect your content using Digital Rights Management (DRM).
AWS Elemental MediaPackage scales automatically in response to load, so your viewers will always get a great experience without you having to accurately predict in advance the capacity you'll need.

AWS Elemental MediaStore

AWS Elemental MediaStore is an AWS storage service optimized for media. It gives you the performance, consistency, and low latency required to deliver live streaming video content. AWS Elemental MediaStore acts as the origin store in your video workflow. Its high performance capabilities meet the needs of the most demanding media delivery workloads, combined with long-term, cost-effective storage.

AWS Elemental MediaTailor

AWS Elemental MediaTailor lets video providers insert individually targeted advertising into their video streams without sacrificing broadcast-level quality of service. With AWS Elemental MediaTailor, viewers of your live or on-demand video each receive a stream that combines your content with ads personalized to them. But unlike other personalized ad solutions, with AWS Elemental MediaTailor your entire stream (video and ads) is delivered with broadcast-grade video quality to improve the experience for your viewers. AWS Elemental MediaTailor delivers automated reporting based on both client- and server-side ad delivery metrics, making it easy to accurately measure ad impressions and viewer behavior. You can easily monetize unexpected high-demand viewing events with no upfront costs using AWS Elemental MediaTailor. It also improves ad delivery rates, helping you make more money from every video, and it works with a wider variety of content delivery networks, ad decision servers, and client devices.

See also Amazon Kinesis Video Streams.

Migration and Transfer

Topics
•AWS Application Migration Service
•AWS Migration Hub
•AWS Application Discovery Service
•AWS Database Migration Service
•AWS Server Migration Service
•AWS Snow Family
•AWS DataSync
•AWS Transfer Family

AWS Application Migration Service

AWS Application Migration Service (AWS MGN) allows you to quickly realize the benefits of migrating applications to the cloud without changes and with minimal downtime. AWS Application Migration Service minimizes time-intensive, error-prone manual processes by automatically converting your source servers from physical, virtual, or cloud infrastructure to run natively on AWS. It further simplifies your migration by enabling you to use the same automated process for a wide range of applications. And by launching non-disruptive tests before migrating, you can be confident that your most critical applications, such as SAP, Oracle, and SQL Server, will work seamlessly on AWS.

AWS Migration Hub

AWS Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions. Using Migration Hub allows you to choose the AWS and partner migration tools that best fit your needs, while providing visibility into the status of migrations across your portfolio of applications. Migration Hub also provides key metrics and progress for individual applications, regardless of which tools are being used to migrate them. For example, you might use AWS Database Migration Service, AWS Server Migration Service, and partner migration tools such as ATADATA ATAmotion, CloudEndure Live Migration, or RiverMeadow Server Migration SaaS to migrate an application comprised of a database, virtualized web servers, and a bare metal server. Using Migration Hub,
you can view the migration progress of all the resources in the application. This allows you to quickly get progress updates across all of your migrations, easily identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects.

AWS Application Discovery Service

AWS Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers. Planning data center migrations can involve thousands of workloads that are often deeply interdependent. Server utilization data and dependency mapping are important early first steps in the migration process. AWS Application Discovery Service collects and presents configuration, usage, and behavior data from your servers to help you better understand your workloads. The collected data is retained in encrypted format in an AWS Application Discovery Service data store. You can export this data as a CSV file and use it to estimate the total cost of ownership (TCO) of running on AWS, and to plan your migration to AWS. In addition, this data is also available in AWS Migration Hub, where you can migrate the discovered servers and track their progress as they get migrated to AWS.

AWS Database Migration Service

AWS Database Migration Service helps you migrate databases to AWS easily and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases. The service supports homogeneous migrations, such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL. It also allows you to stream data to Amazon Redshift from any of the supported sources, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, SAP ASE, and SQL Server, enabling consolidation and easy analysis of data in the petabyte-scale data warehouse. AWS Database Migration Service can also be used for continuous data replication with high availability.

AWS Server Migration Service

AWS Server Migration Service (SMS) is an agentless service which makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations.

AWS Snow Family

The AWS Snow Family helps customers that need to run operations in austere, non-data center environments, and in locations where there's a lack of consistent network connectivity. The Snow Family comprises AWS Snowcone, AWS Snowball, and AWS Snowmobile, and offers a number of physical devices and capacity points, most with built-in computing capabilities. These services help physically transport up to exabytes of data into and out of AWS. Snow Family devices are owned and managed by AWS and integrate with AWS security, monitoring, storage management, and computing capabilities.

AWS Snowcone

AWS Snowcone is the smallest member of the AWS Snow Family of edge computing, edge storage, and data transfer devices, weighing in at 4.5 pounds (2.1 kg) with 8 terabytes of usable storage. Snowcone is ruggedized, secure, and purpose-built for use outside of a traditional data center. Its small form factor makes it a perfect fit for tight spaces or where portability is a necessity and
AWS Server Migration Service
AWS Server Migration Service (SMS) is an agentless service which makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations.

AWS Snow Family
The AWS Snow Family helps customers that need to run operations in austere, non-data center environments and in locations where there's a lack of consistent network connectivity. The Snow Family comprises AWS Snowcone, AWS Snowball, and AWS Snowmobile, and offers a number of physical devices and capacity points, most with built-in computing capabilities. These services help physically transport up to exabytes of data into and out of AWS. Snow Family devices are owned and managed by AWS and integrate with AWS security, monitoring, storage management, and computing capabilities.

AWS Snowcone
AWS Snowcone is the smallest member of the AWS Snow Family of edge computing, edge storage, and data transfer devices, weighing in at 4.5 pounds (2.1 kg) with 8 terabytes of usable storage. Snowcone is ruggedized, secure, and purpose-built for use outside of a traditional data center. Its small form factor makes it a perfect fit for tight spaces or where portability is a necessity and network connectivity is unreliable. You can use Snowcone in backpacks on first responders, or for IoT, vehicular, and drone use cases. You can execute compute applications at the edge, and you can ship the device with data to AWS for offline data transfer, or you can transfer data online with AWS DataSync from edge locations. Like AWS Snowball, Snowcone has multiple layers of security and encryption. You can use either of these services to run edge computing workloads, or to collect, process, and transfer data to AWS. Snowcone is designed for data migration needs up to 8 terabytes per device and from space-constrained environments where AWS Snowball devices will not fit.

AWS Snowball
AWS Snowball is an edge computing, data migration, and edge storage device that comes in two options. Snowball Edge Storage Optimized devices provide both block storage and Amazon S3-compatible object storage, and 40 vCPUs. They are well suited for local storage and large-scale data transfer. Snowball Edge Compute Optimized devices provide 52 vCPUs, block and object storage, and an optional GPU for use cases like advanced machine learning and full motion video analysis in disconnected environments. You can use these devices for data collection, machine learning and processing, and storage in environments with intermittent connectivity (like manufacturing, industrial, and transportation) or in extremely remote locations (like military or maritime operations) before shipping them back to AWS. These devices may also be rack mounted and clustered together to build larger temporary installations. Snowball supports specific Amazon EC2 instance types and AWS Lambda functions, so you can develop and test in the AWS Cloud, then deploy applications on devices in remote locations to collect, pre-process, and ship the data to AWS. Common use cases include data migration.

AWS Snowmobile
AWS Snowmobile is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100 PB per Snowmobile, a 45-foot long ruggedized shipping container pulled by a semi-trailer truck. Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration. Transferring data with Snowmobile is secure, fast, and cost effective. After an initial assessment, a Snowmobile will be transported to your data center and AWS personnel will configure it for you so it can be accessed as a network storage target. When your Snowmobile is on site, AWS personnel will work with your team to connect a removable high-speed network switch from the Snowmobile to your local network. Then you can begin your high-speed data transfer from any number of sources within your data center to the Snowmobile. After your data is loaded, the Snowmobile is driven back to AWS, where your data is imported into Amazon S3 or S3 Glacier. AWS Snowmobile uses multiple layers of security designed to protect your data, including dedicated security personnel, GPS tracking, alarm monitoring, 24/7 video surveillance, and an optional escort security vehicle while in transit. All data is encrypted with 256-bit encryption keys managed through AWS KMS (p 70) and designed to ensure both security and full chain of custody of your data.
AWS DataSync
AWS DataSync is a data transfer service that makes it easy for you to automate moving data between on-premises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS). DataSync automatically handles many of the tasks related to data transfers that can slow down migrations or burden your IT operations, including running your own instances, handling encryption, managing scripts, network optimization, and data integrity validation. You can use DataSync to transfer data at speeds up to 10 times faster than open-source tools. DataSync uses an on-premises software agent to connect to your existing storage or file systems using the Network File System (NFS) protocol, so you don't have to write scripts or modify your applications to work with AWS APIs. You can use DataSync to copy data over AWS Direct Connect or internet links to AWS. The service enables one-time data migrations, recurring data processing workflows, and automated replication for data protection and recovery. Getting started with DataSync is easy: deploy the DataSync agent on premises, connect it to a file system or storage array, select Amazon EFS or S3 as your AWS storage, and start moving data. You pay only for the data you copy.

AWS Transfer Family
AWS Transfer Family provides fully managed support for file transfers directly into and out of Amazon S3 or Amazon EFS. With support for Secure File Transfer Protocol (SFTP), File Transfer Protocol over SSL (FTPS), and File Transfer Protocol (FTP), the AWS Transfer Family helps you seamlessly migrate your file transfer workflows to AWS by integrating with existing authentication systems and providing DNS routing with Amazon Route 53, so nothing changes for your customers and partners, or their applications. With your data in Amazon S3 or Amazon EFS, you can use it with AWS services for processing, analytics, machine learning, and archiving, as well as home directories and developer tools. Getting started with the AWS Transfer Family is easy; there is no infrastructure to buy and set up.

Networking and Content Delivery
Topics
•Amazon API Gateway (p 60)
•Amazon CloudFront (p 60)
•Amazon Route 53 (p 60)
•Amazon VPC (p 61)
•AWS App Mesh (p 61)
•AWS Cloud Map (p 62)
•AWS Direct Connect (p 62)
•AWS Global Accelerator (p 62)
•AWS PrivateLink (p 63)
•AWS Transit Gateway (p 63)
•AWS VPN (p 63)
•Elastic Load Balancing (p 63)

Amazon API Gateway
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a "front door" for applications to access data, business logic, or functionality from your backend services, such as workloads running on Amazon EC2, code running on AWS Lambda, or any web application. Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

Amazon CloudFront
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers' users and to customize the user experience. You can get started with the Content Delivery Network in minutes, using the same AWS tools that you're already familiar with: APIs, AWS Management Console, AWS CloudFormation, CLIs, and SDKs. Amazon's CDN offers a simple, pay-as-you-go pricing model with no upfront fees or required long-term contracts, and support for the CDN is included in your existing AWS Support subscription.
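As a small illustration of driving CloudFront from an SDK, the sketch below uses Boto3 to invalidate cached objects on an existing distribution after publishing new content. The distribution ID and paths are placeholders, not values from this paper.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Placeholder ID of an existing CloudFront distribution.
DISTRIBUTION_ID = "E1EXAMPLE12345"

# Invalidate a couple of cached paths; CallerReference must be unique
# per invalidation request, so a timestamp is a simple choice.
response = cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 2, "Items": ["/index.html", "/assets/*"]},
        "CallerReference": f"deploy-{int(time.time())}",
    },
)

print(response["Invalidation"]["Id"], response["Invalidation"]["Status"])
```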
Amazon Route 53
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating human-readable names, such as www.example.com, into the numeric IP addresses, such as 192.0.2.1, that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well. Amazon Route 53 effectively connects user requests to infrastructure running in AWS—such as EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets—and can also be used to route users to infrastructure outside of AWS. You can use Amazon Route 53 to configure DNS health checks to route traffic to healthy endpoints or to independently monitor the health of your application and its endpoints. Amazon Route 53 traffic flow makes it easy for you to manage traffic globally through a variety of routing types, including latency-based routing, Geo DNS, and weighted round robin—all of which can be combined with DNS Failover in order to enable a variety of low-latency, fault-tolerant architectures. Using Amazon Route 53 traffic flow's simple visual editor, you can easily manage how your end users are routed to your application's endpoints—whether in a single AWS Region or distributed around the globe. Amazon Route 53 also offers Domain Name Registration—you can purchase and manage domain names such as example.com, and Amazon Route 53 will automatically configure DNS settings for your domains.

Amazon VPC
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications. You can easily customize the network configuration for your VPC. For example, you can create a public-facing subnet for your web servers that has access to the Internet, and place your backend systems, such as databases or application servers, in a private-facing subnet with no Internet access. You can leverage multiple layers of security (including security groups and network access control lists) to help control access to EC2 instances in each subnet. Additionally, you can create a hardware virtual private network (VPN) connection between your corporate data center and your VPC and leverage the AWS Cloud as an extension of your corporate data center.

AWS App Mesh
AWS App Mesh makes it easy to monitor and control microservices running on AWS. App Mesh standardizes how your microservices communicate, giving you end-to-end visibility and helping to ensure high availability for your applications. Modern applications are often composed of multiple microservices that each perform a specific function. This architecture helps to increase the availability and scalability of the application by allowing each component to scale independently
based on demand and automatically degrading functionality when a component fails instead of going offline Each microservice interacts with all the other microservices through an API As the number of microservices grows within an application it becomes increasingly difficult to pinpoint the exact location of errors reroute traffic after failures and safely deploy code changes Previously this has required you to build monitoring and control logic directly into your code and redeploy your microservices every time there are changes AWS App Mesh makes it easy to run microservices by providing consistent visibility and network traffic controls for every microservice in an application App Mesh removes the need to update application code to change how monitoring data is collected or traffic is routed between microservices App Mesh configures each microservice to export monitoring data and implements consistent communications control logic across your application This makes it easy to quickly pinpoint the exact location of errors and automatically reroute network traffic when there are failures or when code changes need to be deployed 61Overview of Amazon Web Services AWS Whitepaper AWS Cloud Map You can use App Mesh with Amazon ECS and Amazon EKS to better run containerized microservices at scale App Mesh uses the open source Envoy proxy making it compatible with a wide range of AWS partner and open source tools for monitoring microservices AWS Cloud Map AWS Cloud Map is a cloud resource discovery service With Cloud Map you can define custom names for your application resources and it maintains the updated location of these dynamically changing resources This increases your application availability because your web service always discovers the most uptodate locations of its resources Modern applications are typically composed of multiple services that are accessible over an API and perform a specific function Each service interacts with a variety of other resources such as databases queues object stores and customerdefined microservices and they also need to be able to find the location of all the infrastructure resources on which it depends in order to function You typically manually manage all these resource names and their locations within the application code However manual resource management becomes time consuming and errorprone as the number of dependent infrastructure resources increases or the number of microservices dynamically scale up and down based on traffic You can also use thirdparty service discovery products but this requires installing and managing additional software and infrastructure Cloud Map allows you to register any application resources such as databases queues microservices and other cloud resources with custom names Cloud Map then constantly checks the health of resources to make sure the location is uptodate The application can then query the registry for the location of the resources needed based on the application version and deployment environment AWS Direct Connect AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS Using AWS Direct Connect you can establish private connectivity between AWS and your data center office or colocation environment which in many cases can reduce your network costs increase bandwidth throughput and provide a more consistent network experience than Internetbased connections AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct 
Connect locations. Using industry-standard 802.1Q virtual LANs (VLANs), this dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources, such as objects stored in Amazon S3 using public IP address space, and private resources, such as EC2 instances running within a VPC using private IP address space, while maintaining network separation between the public and private environments. Virtual interfaces can be reconfigured at any time to meet your changing needs.

AWS Global Accelerator
AWS Global Accelerator is a networking service that improves the availability and performance of the applications that you offer to your global users. Today, if you deliver applications to your global users over the public internet, your users might face inconsistent availability and performance as they traverse through multiple public networks to reach your application. These public networks are often congested, and each hop can introduce availability and performance risk. AWS Global Accelerator uses the highly available and congestion-free AWS global network to direct internet traffic from your users to your applications on AWS, making your users' experience more consistent. To improve the availability of your application, you must monitor the health of your application endpoints and route traffic only to healthy endpoints. AWS Global Accelerator improves application availability by continuously monitoring the health of your application endpoints and routing traffic to the closest healthy endpoints. AWS Global Accelerator also makes it easier to manage your global applications by providing static IP addresses that act as a fixed entry point to your application hosted on AWS, which eliminates the complexity of managing specific IP addresses for different AWS Regions and Availability Zones. AWS Global Accelerator is easy to set up, configure, and manage.

AWS PrivateLink
AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely on the Amazon network. AWS PrivateLink makes it easy to connect services across different accounts and VPCs to significantly simplify the network architecture.

AWS Transit Gateway
AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway. As you grow the number of workloads running on AWS, you need to be able to scale your networks across multiple accounts and Amazon VPCs to keep up with the growth. Today, you can connect pairs of Amazon VPCs using peering. However, managing point-to-point connectivity across many Amazon VPCs, without the ability to centrally manage the connectivity policies, can be operationally costly and cumbersome. For on-premises connectivity, you need to attach your AWS VPN to each individual Amazon VPC. This solution can be time consuming to build and hard to manage when the number of VPCs grows into the hundreds. With AWS Transit Gateway, you only have to create and manage a single connection from the central gateway in to each Amazon VPC, on-premises data center, or remote office across your network. Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks, which act like spokes. This hub and spoke model significantly simplifies management and reduces operational costs because each network only has to connect to the Transit Gateway and not to every other network. Any new VPC is simply connected to the Transit Gateway and is then automatically available to every other network that is connected to the Transit Gateway. This ease of connectivity makes it easy to scale your network as you grow.
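The hub-and-spoke pattern described above can be sketched with Boto3 as follows. This is an illustrative outline only: the VPC and subnet IDs are placeholders, and in practice you would wait for the transit gateway to become available before attaching VPCs to it.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the central hub. The gateway takes a few minutes to become
# "available"; production code should poll describe_transit_gateways first.
tgw = ec2.create_transit_gateway(
    Description="Central hub for shared connectivity (example)"
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Placeholder spoke VPCs, each with one subnet used for the attachment.
spokes = [
    {"VpcId": "vpc-0aaa1111bbb22222c", "SubnetIds": ["subnet-0123456789abcdef0"]},
    {"VpcId": "vpc-0ddd3333eee44444f", "SubnetIds": ["subnet-0fedcba9876543210"]},
]

# Each VPC needs only a single attachment to the hub, rather than a
# peering connection to every other VPC.
for spoke in spokes:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=spoke["VpcId"],
        SubnetIds=spoke["SubnetIds"],
    )
```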
AWS VPN
AWS Virtual Private Network solutions establish secure connections between your on-premises networks, remote offices, client devices, and the AWS global network. AWS VPN is comprised of two services: AWS Site-to-Site VPN and AWS Client VPN. Each service provides a highly available, managed, and elastic cloud VPN solution to protect your network traffic. AWS Site-to-Site VPN creates encrypted tunnels between your network and your Amazon Virtual Private Clouds or AWS Transit Gateways. For managing remote access, AWS Client VPN connects your users to AWS or on-premises resources using a VPN software client.

Elastic Load Balancing
Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Elastic Load Balancing offers four types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault tolerant.
•Application Load Balancer is best suited for load balancing of HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers. Operating at the individual request level (Layer 7), Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.
•Network Load Balancer is best suited for load balancing of TCP traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is also optimized to handle sudden and volatile traffic patterns.
•Gateway Load Balancer makes it easy to deploy, scale, and run third-party virtual networking appliances. Providing load balancing and auto scaling for fleets of third-party appliances, Gateway Load Balancer is transparent to the source and destination of traffic. This capability makes it well suited for working with third-party appliances for security, network analytics, and other use cases.
•Classic Load Balancer provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and connection level. Classic Load Balancer is intended for applications that were built within the EC2-Classic network.

Quantum Technologies
Amazon Braket
Amazon Braket is a fully managed quantum computing service that helps researchers and developers get started with the technology to accelerate research and discovery. Amazon Braket provides a development environment for you to explore and build quantum algorithms, test them on quantum circuit simulators, and run them on different quantum hardware technologies. Quantum computing has the potential to solve computational problems that are beyond the reach of classical computers by harnessing the laws
of quantum mechanics to process information in new ways This approach to computing could transform areas such as chemical engineering material science drug discovery financial portfolio optimization and machine learning But defining those problems and programming quantum computers to solve them requires new skills which are difficult to acquire without easy access to quantum computing hardware Amazon Braket overcomes these challenges so you can explore quantum computing With Amazon Braket you can design and build your own quantum algorithms from scratch or choose from a set of pre built algorithms Once you have built your algorithm Amazon Braket provides a choice of simulators to test troubleshoot and run your algorithms When you are ready you can run your algorithm on your choice of different quantum computers including quantum annealers from DWave and gatebased computers from Rigetti and IonQ With Amazon Braket you can now evaluate the potential of quantum computing for your organization and build expertise Robotics AWS RoboMaker AWS RoboMaker is a service that makes it easy to develop test and deploy intelligent robotics applications at scale RoboMaker extends the most widely used opensource robotics software framework Robot Operating System (ROS) with connectivity to cloud services This includes AWS machine learning services monitoring services and analytics services that enable a robot to stream data navigate communicate comprehend and learn RoboMaker provides a robotics development environment for application development a robotics simulation service to accelerate application testing and a robotics fleet management service for remote application deployment update and management 64Overview of Amazon Web Services AWS Whitepaper Satellite Robots are machines that sense compute and take action Robots need instructions to accomplish tasks and these instructions come in the form of applications that developers code to determine how the robot will behave Receiving and processing sensor data controlling actuators for movement and performing a specific task are all functions that are typically automated by these intelligent robotics applications Intelligent robots are being increasingly used in warehouses to distribute inventory in homes to carry out tedious housework and in retail stores to provide customer service Robotics applications use machine learning in order to perform more complex tasks like recognizing an object or face having a conversation with a person following a spoken command or navigating autonomously Until now developing testing and deploying intelligent robotics applications was difficult and time consuming Building intelligent robotics functionality using machine learning is complex and requires specialized skills Setting up a development environment can take each developer days and building a realistic simulation system to test an application can take months due to the underlying infrastructure needed Once an application has been developed and tested a developer needs to build a deployment system to deploy the application into the robot and later update the application while the robot is in use AWS RoboMaker provides you with the tools to make building intelligent robotics applications more accessible a fully managed simulation service for quick and easy testing and a deployment service for lifecycle management AWS RoboMaker removes the heavy lifting from each step of robotics development so you can focus on creating innovative robotics applications Satellite AWS Ground 
Station AWS Ground Station is a fully managed service that lets you control satellite communications downlink and process satellite data and scale your satellite operations quickly easily and costeffectively without having to worry about building or managing your own ground station infrastructure Satellites are used for a wide variety of use cases including weather forecasting surface imaging communications and video broadcasts Ground stations are at the core of global satellite networks which are facilities that provide communications between the ground and the satellites by using antennas to receive data and control systems to send radio signals to command and control the satellite Today you must either build your own ground stations and antennas or obtain longterm leases with ground station providers often in multiple countries to provide enough opportunities to contact the satellites as they orbit the globe Once all this data is downloaded you need servers storage and networking in close proximity to the antennas to process store and transport the data from the satellites AWS Ground Station eliminates these problems by delivering a global Ground Station as a Service We provide direct access to AWS services and the AWS Global Infrastructure including our lowlatency global fiber network right where your data is downloaded into our AWS Ground Station This enables you to easily control satellite communications quickly ingest and process your satellite data and rapidly integrate that data with your applications and other services running in the AWS Cloud For example you can use Amazon S3 to store the downloaded data Amazon Kinesis Data Streams for managing data ingestion from satellites SageMaker for building custom machine learning applications that apply to your data sets and Amazon EC2 to command and download data from satellites AWS Ground Station can help you save up to 80% on the cost of your ground station operations by allowing you to pay only for the actual antenna time used and relying on our global footprint of ground stations to download data when and where you need it instead of building and operating your own global ground station infrastructure There are no longterm commitments and you gain the ability to rapidly scale your satellite communications ondemand when your business needs it Security Identity and Compliance Topics 65Overview of Amazon Web Services AWS Whitepaper Amazon Cognito •Amazon Cognito (p 66) •Amazon Cloud Directory (p 66) •Amazon Detective (p 67) •Amazon GuardDuty (p 67) •Amazon Inspector (p 67) •Amazon Macie (p 68) •AWS Artifact (p 68) •AWS Audit Manager (p 68) •AWS Certificate Manager (p 68) •AWS CloudHSM (p 69) •AWS Directory Service (p 69) •AWS Firewall Manager (p 69) •AWS Identity and Access Management (p 69) •AWS Key Management Service (p 70) •AWS Network Firewall (p 70) •AWS Resource Access Manager (p 70) •AWS Secrets Manager (p 71) •AWS Security Hub (p 71) •AWS Shield (p 71) •AWS Single SignOn (p 72) •AWS WAF (p 72) Amazon Cognito Amazon Cognito lets you add user signup signin and access control to your web and mobile apps quickly and easily With Amazon Cognito you also have the option to authenticate users through social identity providers such as Facebook Twitter or Amazon with SAML identity solutions or by using your own identity system In addition Amazon Cognito enables you to save data locally on users’ devices allowing your applications to work even when the devices are offline You can then synchronize data across users’ devices so that their 
app experience remains consistent regardless of the device they use With Amazon Cognito you can focus on creating great app experiences instead of worrying about building securing and scaling a solution to handle user management authentication and sync across devices Amazon Cloud Directory Amazon Cloud Directory enables you to build flexible cloudnative directories for organizing hierarchies of data along multiple dimensions With Cloud Directory you can create directories for a variety of use cases such as organizational charts course catalogs and device registries While traditional directory solutions such as Active Directory Lightweight Directory Services (AD LDS) and other LDAPbased directories limit you to a single hierarchy Cloud Directory offers you the flexibility to create directories with hierarchies that span multiple dimensions For example you can create an organizational chart that can be navigated through separate hierarchies for reporting structure location and cost center Amazon Cloud Directory automatically scales to hundreds of millions of objects and provides an extensible schema that can be shared with multiple applications As a fullymanaged service Cloud Directory eliminates timeconsuming and expensive administrative tasks such as scaling infrastructure 66Overview of Amazon Web Services AWS Whitepaper Amazon Detective and managing servers You simply define the schema create a directory and then populate your directory by making calls to the Cloud Directory API Amazon Detective Amazon Detective makes it easy to analyze investigate and quickly identify the root cause of potential security issues or suspicious activities Amazon Detective automatically collects log data from your AWS resources and uses machine learning statistical analysis and graph theory to build a linked set of data that enables you to easily conduct faster and more efficient security investigations AWS security services like Amazon GuardDuty Amazon Macie and AWS Security Hub as well as partner security products can be used to identify potential security issues or findings These services are really helpful in alerting you when something is wrong and pointing out where to go to fix it But sometimes there might be a security finding where you need to dig a lot deeper and analyze more information to isolate the root cause and take action Determining the root cause of security findings can be a complex process that often involves collecting and combining logs from many separate data sources using extract transform and load (ETL) tools or custom scripting to organize the data and then security analysts having to analyze the data and conduct lengthy investigations Amazon Detective simplifies this process by enabling your security teams to easily investigate and quickly get to the root cause of a finding Amazon Detective can analyze trillions of events from multiple data sources such as Virtual Private Cloud (VPC) Flow Logs AWS CloudTrail and Amazon GuardDuty and automatically creates a unified interactive view of your resources users and the interactions between them over time With this unified view you can visualize all the details and context in one place to identify the underlying reasons for the findings drill down into relevant historical activities and quickly determine the root cause You can get started with Amazon Detective in just a few clicks in the AWS Console There is no software to deploy or data sources to enable and maintain Amazon GuardDuty Amazon GuardDuty is a threat detection service that 
continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads It monitors for activity such as unusual API calls or potentially unauthorized deployments that indicate a possible account compromise GuardDuty also detects potentially compromised instances or reconnaissance by attackers Enabled with a few clicks in the AWS Management Console Amazon GuardDuty can immediately begin analyzing billions of events across your AWS accounts for signs of risk GuardDuty identifies suspected attackers through integrated threat intelligence feeds and uses machine learning to detect anomalies in account and workload activity When a potential threat is detected the service delivers a detailed security alert to the GuardDuty console and Amazon CloudWatch Events This makes alerts actionable and easy to integrate into existing event management and workflow systems Amazon GuardDuty is cost effective and easy It does not require you to deploy and maintain software or security infrastructure meaning it can be enabled quickly with no risk of negatively impacting existing application workloads There are no upfront costs with GuardDuty no software to deploy and no threat intelligence feeds required Customers pay for the events analyzed by GuardDuty and there is a 30day free trial available for every new account to the service Amazon Inspector Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS Amazon Inspector automatically assesses applications for exposure vulnerabilities and deviations from best practices After performing an assessment Amazon Inspector produces a detailed list of security findings prioritized by level of severity These findings 67Overview of Amazon Web Services AWS Whitepaper Amazon Macie can be reviewed directly or as part of detailed assessment reports which are available via the Amazon Inspector console or API Amazon Inspector security assessments help you check for unintended network accessibility of your Amazon EC2 instances and for vulnerabilities on those EC2 instances Amazon Inspector assessments are offered to you as predefined rules packages mapped to common security best practices and vulnerability definitions Examples of builtin rules include checking for access to your EC2 instances from the internet remote root login being enabled or vulnerable software versions installed These rules are regularly updated by AWS security researchers Amazon Macie Amazon Macie is a security service that uses machine learning to automatically discover classify and protect sensitive data in AWS Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved The fully managed service continuously monitors data access activity for anomalies and generates detailed alerts when it detects risk of unauthorized access or inadvertent data leaks AWS Artifact AWS Artifact is your goto central resource for compliancerelated information that matters to you It provides ondemand access to AWS’ security and compliance reports and select online agreements Reports available in AWS Artifact include our Service Organization Control (SOC) reports Payment Card Industry (PCI) reports and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating 
effectiveness of AWS security controls Agreements available in AWS Artifact include the Business Associate Addendum (BAA) and the Nondisclosure Agreement (NDA) AWS Audit Manager AWS Audit Manager helps you continuously audit your AWS usage to simplify how you assess risk and compliance with regulations and industry standards Audit Manager automates evidence collection to reduce the “all hands on deck” manual effort that often happens for audits and enable you to scale your audit capability in the cloud as your business grows With Audit Manager it is easy to assess if your policies procedures and activities – also known as controls – are operating effectively When it is time for an audit AWS Audit Manager helps you manage stakeholder reviews of your controls and enables you to build auditready reports with much less manual effort AWS Audit Manager’s prebuilt frameworks help translate evidence from cloud services into auditor friendly reports by mapping your AWS resources to the requirements in industry standards or regulations such as CIS AWS Foundations Benchmark the General Data Protection Regulation (GDPR) and the Payment Card Industry Data Security Standard (PCI DSS) You can also fully customize a framework and its controls for your unique business requirements Based on the framework you select Audit Manager launches an assessment that continuously collects and organizes relevant evidence from your AWS accounts and resources such as resource configuration snapshots user activity and compliance check results You can get started quickly in the AWS Management Console Just select a prebuilt framework to launch an assessment and begin automatically collecting and organizing evidence AWS Certificate Manager AWS Certificate Manager is a service that lets you easily provision manage and deploy Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal 68Overview of Amazon Web Services AWS Whitepaper AWS CloudHSM connected resources SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet as well as resources on private networks AWS Certificate Manager removes the timeconsuming manual process of purchasing uploading and renewing SSL/TLS certificates With AWS Certificate Manager you can quickly request a certificate deploy it on ACMintegrated AWS resources such as Elastic Load Balancing Amazon CloudFront distributions and APIs on API Gateway and let AWS Certificate Manager handle certificate renewals It also enables you to create private certificates for your internal resources and manage the certificate lifecycle centrally Public and private certificates provisioned through AWS Certificate Manager for use with ACMintegrated services are free You pay only for the AWS resources you create to run your application With AWS Certificate Manager Private Certificate Authority you pay monthly for the operation of the private CA and for the private certificates you issue AWS CloudHSM The AWS CloudHSM is a cloudbased hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud With CloudHSM you can manage your own encryption keys using FIPS 1402 Level 3 validated HSMs CloudHSM offers you the flexibility to integrate with your applications using industrystandard APIs such as PKCS#11 Java Cryptography Extensions (JCE) and Microsoft CryptoNG (CNG) libraries CloudHSM is standardscompliant and enables you to export all of your keys to most 
other commercially available HSMs, subject to your configurations. It is a fully managed service that automates time-consuming administrative tasks for you, such as hardware provisioning, software patching, high availability, and backups. CloudHSM also enables you to scale quickly by adding and removing HSM capacity on demand, with no upfront costs.

AWS Directory Service
AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, enables your directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud. AWS Managed Microsoft AD is built on actual Microsoft Active Directory and does not require you to synchronize or replicate data from your existing Active Directory to the cloud. You can use standard Active Directory administration tools and take advantage of built-in Active Directory features, such as Group Policy and single sign-on (SSO). With AWS Managed Microsoft AD, you can easily join Amazon EC2 and Amazon RDS for SQL Server instances to a domain, and use AWS Enterprise IT applications such as Amazon WorkSpaces with Active Directory users and groups.

AWS Firewall Manager
AWS Firewall Manager is a security management service that makes it easier to centrally configure and manage AWS WAF rules across your accounts and applications. Using Firewall Manager, you can easily roll out AWS WAF rules for your Application Load Balancers and Amazon CloudFront distributions across accounts in AWS Organizations. As new applications are created, Firewall Manager also makes it easy to bring new applications and resources into compliance with a common set of security rules from day one. Now you have a single service to build firewall rules, create security policies, and enforce them in a consistent, hierarchical manner across your entire Application Load Balancers and Amazon CloudFront infrastructure.

AWS Identity and Access Management
AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM allows you to do the following:
•Manage IAM users and their access: You can create users in IAM, assign them individual security credentials (access keys, passwords, and multi-factor authentication devices), or request temporary security credentials to provide users access to AWS services and resources. You can manage permissions in order to control which operations a user can perform.
•Manage IAM roles and their permissions: You can create roles in IAM and manage permissions to control which operations can be performed by the entity or AWS service that assumes the role. You can also define which entity is allowed to assume the role.
•Manage federated users and their permissions: You can enable identity federation to allow existing identities (users, groups, and roles) in your enterprise to access the AWS Management Console, call AWS APIs, and access resources, without the need to create an IAM user for each identity.

AWS Key Management Service
AWS Key Management Service (KMS) makes it easy for you to create and manage keys and control the use of encryption across a wide range of AWS services and in your applications. AWS KMS is a secure and resilient service that uses FIPS 140-2 validated hardware security modules to protect your keys. AWS KMS is integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs.
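For instance, encrypting and decrypting a small payload under a KMS key looks roughly like the sketch below. The key alias is a placeholder for a key you would have created, and for larger data you would normally use a generated data key (envelope encryption) rather than encrypting the payload directly.

```python
import boto3

kms = boto3.client("kms")

# Placeholder alias for a customer managed key created beforehand.
KEY_ID = "alias/example-app-key"

# Encrypt a small secret (direct KMS encryption is limited to 4 KB of
# plaintext; generate_data_key is used for anything larger).
encrypted = kms.encrypt(KeyId=KEY_ID, Plaintext=b"database-password")
ciphertext = encrypted["CiphertextBlob"]

# Decrypt later; the ciphertext carries a reference to the key that
# produced it, and the call is recorded in AWS CloudTrail.
decrypted = kms.decrypt(CiphertextBlob=ciphertext)
assert decrypted["Plaintext"] == b"database-password"
```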
AWS Network Firewall
AWS Network Firewall is a managed service that makes it easy to deploy essential network protections for all of your Amazon Virtual Private Clouds (VPCs). The service can be set up with just a few clicks and scales automatically with your network traffic, so you don't have to worry about deploying and managing any infrastructure. AWS Network Firewall's flexible rules engine lets you define firewall rules that give you fine-grained control over network traffic, such as blocking outbound Server Message Block (SMB) requests to prevent the spread of malicious activity. You can also import rules you've already written in common open source rule formats, as well as enable integrations with managed intelligence feeds sourced by AWS partners. AWS Network Firewall works together with AWS Firewall Manager, so you can build policies based on AWS Network Firewall rules and then centrally apply those policies across your VPCs and accounts. AWS Network Firewall includes features that provide protections from common network threats. AWS Network Firewall's stateful firewall can incorporate context from traffic flows, like tracking connections and protocol identification, to enforce policies such as preventing your VPCs from accessing domains using an unauthorized protocol. AWS Network Firewall's intrusion prevention system (IPS) provides active traffic flow inspection so you can identify and block vulnerability exploits using signature-based detection. AWS Network Firewall also offers web filtering that can stop traffic to known bad URLs and monitor fully qualified domain names. It's easy to get started with AWS Network Firewall by visiting the Amazon VPC Console to create or import your firewall rules, group them into policies, and apply them to the VPCs you want to protect. AWS Network Firewall pricing is based on the number of firewalls deployed and the amount of traffic inspected. There are no upfront commitments, and you pay only for what you use.

AWS Resource Access Manager
AWS Resource Access Manager (RAM) helps you securely share your resources across AWS accounts, within your organization or organizational units (OUs) in AWS Organizations, and with IAM roles and IAM users for supported resource types. You can use AWS RAM to share transit gateways, subnets, AWS License Manager license configurations, Amazon Route 53 Resolver rules, and more resource types. Many organizations use multiple accounts to create administrative or billing isolation and to limit the impact of errors. With AWS RAM, you don't need to create duplicate resources in multiple AWS accounts. This reduces the operational overhead of managing resources in every account that you own. Instead, in your multi-account environment, you can create a resource once and use AWS RAM to share that resource across accounts by creating a resource share. When you create a resource share, you select the resources to share, choose an AWS RAM managed permission per resource type, and specify whom you want to have access to the resources. AWS RAM is available to you at no additional charge.

AWS Secrets Manager
AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. Secrets Manager offers secret rotation with built-in integration for Amazon RDS for MySQL, PostgreSQL, and Amazon Aurora. Also, the service is extensible to other types of secrets, including API keys and OAuth tokens. In addition, Secrets Manager enables you to control access to secrets using fine-grained permissions, and audit secret rotation centrally for resources in the AWS Cloud, third-party services, and on-premises.
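A typical retrieval call, sketched with Boto3 below, replaces a hardcoded credential at runtime. The secret name and its JSON layout are assumptions for illustration rather than values from this paper.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Placeholder secret name; the secret is assumed to store a JSON document
# such as {"username": "...", "password": "..."} created beforehand.
response = secrets.get_secret_value(SecretId="prod/app/database")
credentials = json.loads(response["SecretString"])

# Use the values at runtime instead of embedding them in code or config files.
db_user = credentials["username"]
db_password = credentials["password"]
```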
AWS Security Hub
AWS Security Hub gives you a comprehensive view of your high-priority security alerts and compliance status across AWS accounts. There are a range of powerful security tools at your disposal, from firewalls and endpoint protection to vulnerability and compliance scanners. But oftentimes this leaves your team switching back and forth between these tools to deal with hundreds, and sometimes thousands, of security alerts every day. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Your findings are visually summarized on integrated dashboards with actionable graphs and tables. You can also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows. Get started with AWS Security Hub in just a few clicks in the Management Console and, once enabled, Security Hub will begin aggregating and prioritizing findings.

AWS Shield
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS. AWS Shield provides you with always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield: Standard and Advanced. All AWS customers benefit from the automatic protections of AWS Shield Standard at no additional charge. AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your website or applications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks. For higher levels of protection against attacks targeting your applications running on Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing (ELB), Amazon CloudFront, and Amazon Route 53 resources, you can subscribe to AWS Shield Advanced. In addition to the network and transport layer protections that come with Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall. AWS Shield Advanced also gives you 24x7 access to the AWS DDoS Response Team (DRT) and protection against DDoS-related spikes in your Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing (ELB), Amazon CloudFront, and Amazon Route 53 charges. AWS Shield Advanced is available globally on all Amazon CloudFront and Amazon Route 53 edge locations. You can protect your web applications hosted anywhere in the world by deploying Amazon CloudFront in front of your application. Your origin servers can be Amazon S3, Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load
Balancing (ELB) or a custom server outside of AWS You can also enable AWS Shield Advanced directly on an Elastic IP or Elastic Load Balancing (ELB) in the following AWS Regions: Northern Virginia Ohio Oregon Northern California Montreal São Paulo Ireland Frankfurt London Paris Stockholm Singapore Tokyo Sydney Seoul and Mumbai AWS Single SignOn AWS Single SignOn (SSO) is a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and business applications With just a few clicks you can enable a highly available SSO service without the upfront investment and ongoing maintenance costs of operating your own SSO infrastructure With AWS SSO you can easily manage SSO access and user permissions to all of your accounts in AWS Organizations centrally AWS SSO also includes builtin SAML integrations to many business applications such as Salesforce Box and Microsoft Office 365 Further by using the AWS SSO application configuration wizard you can create Security Assertion Markup Language (SAML) 20 integrations and extend SSO access to any of your SAMLenabled applications Your users simply sign in to a user portal with credentials they configure in AWS SSO or using their existing corporate credentials to access all their assigned accounts and applications from one place AWS WAF AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability compromise security or consume excessive resources AWS WAF gives you control over which traffic to allow or block to your web application by defining customizable web security rules You can use AWS WAF to create custom rules that block common attack patterns such as SQL injection or crosssite scripting and rules that are designed for your specific application New rules can be deployed within minutes letting you respond quickly to changing traffic patterns Also AWS WAF includes a fullfeatured API that you can use to automate the creation deployment and maintenance of web security rules Storage Topics •Amazon Elastic Block Store (p 72) •Amazon Elastic File System (p 73) •Amazon FSx for Lustre (p 73) •Amazon FSx for Windows File Server (p 73) •Amazon Simple Storage Service (p 74) •Amazon S3 Glacier (p 74) •AWS Backup (p 74) •AWS Storage Gateway (p 74) Amazon Elastic Block Store Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure offering high availability and durability Amazon EBS volumes offer the consistent and lowlatency performance needed to run your workloads With Amazon EBS you can scale your usage up or down within minutes—all while paying a low price for only what you provision 72Overview of Amazon Web Services AWS Whitepaper Amazon Elastic File System Amazon Elastic File System Amazon Elastic File System (Amazon EFS) provides a simple scalable elastic file system for Linuxbased workloads for use with AWS Cloud services and onpremises resources It is built to scale on demand to petabytes without disrupting applications growing and shrinking automatically as you add and remove files so your applications have the storage they need – when they need it It is designed to provide massively parallel shared access to thousands of Amazon EC2 instances enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low 
latencies Amazon EFS is a fully managed service that requires no changes to your existing applications and tools providing access through a standard file system interface for seamless integration Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability You can access your file systems across AZs and AWS Regions and share files between thousands of Amazon EC2 instances and onpremises servers via AWS Direct Connect or AWS VPN Amazon EFS is well suited to support a broad spectrum of use cases from highly parallelized scaleout workloads that require the highest possible throughput to singlethreaded latencysensitive workloads Use cases such as liftandshift enterprise applications big data analytics web serving and content management application development and testing media and entertainment workflows database backups and container storage Amazon FSx for Lustre Amazon FSx for Lustre is a fully managed file system that is optimized for computeintensive workloads such as high performance computing machine learning and media data processing workflows Many of these applications require the highperformance and low latencies of scaleout parallel file systems Operating these file systems typically requires specialized expertise and administrative overhead requiring you to provision storage servers and tune complex performance parameters With Amazon FSx you can launch and run a Lustre file system that can process massive data sets at up to hundreds of gigabytes per second of throughput millions of IOPS and submillisecond latencies Amazon FSx for Lustre is seamlessly integrated with Amazon S3 making it easy to link your long term data sets with your high performance file systems to run computeintensive workloads You can automatically copy data from S3 to FSx for Lustre run your workloads and then write results back to S3 FSx for Lustre also enables you to burst your computeintensive workloads from onpremises to AWS by allowing you to access your FSx file system over Amazon Direct Connect or VPN FSx for Lustre helps you costoptimize your storage for computeintensive workloads: It provides cheap and performant non replicated storage for processing data with your longterm data stored durably in Amazon S3 or other lowcost data stores With Amazon FSx you pay for only the resources you use There are no minimum commitments upfront hardware or software costs or additional fees Amazon FSx for Windows File Server Amazon FSx for Windows File Server provides a fully managed native Microsoft Windows file system so you can easily move your Windowsbased applications that require file storage to AWS Built on Windows Server Amazon FSx provides shared file storage with the compatibility and features that your Windows based applications rely on including full support for the SMB protocol and Windows NTFS Active Directory (AD) integration and Distributed File System (DFS) Amazon FSx uses SSD storage to provide the fast performance your Windows applications and users expect with high levels of throughput and IOPS and consistent submillisecond latencies This compatibility and performance is particularly important when moving workloads that require Windows shared file storage like CRM ERP and NET applications as well as home directories With Amazon FSx you can launch highly durable and available Windows file systems that can be accessed from up to thousands of compute instances using the industrystandard SMB protocol Amazon FSx eliminates the 
typical administrative overhead of managing Windows file servers. You pay for only the resources used, with no upfront costs, minimum commitments, or additional fees.

Amazon Simple Storage Service
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9s) of durability, and stores data for millions of applications for companies all around the world.

Amazon S3 Glacier
Amazon S3 Glacier is a secure, durable, and extremely low-cost storage service for data archiving and long-term backup. It is designed to deliver 99.999999999% durability, and provides comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements. Amazon S3 Glacier provides query-in-place functionality, allowing you to run powerful analytics directly on your archive data at rest. You can store data for as little as $1 per terabyte per month, a significant savings compared to on-premises solutions. To keep costs low yet suitable for varying retrieval needs, Amazon S3 Glacier provides three options for access to archives, from a few minutes to several hours, and S3 Glacier Deep Archive provides two access options ranging from 12 to 48 hours.

AWS Backup
AWS Backup enables you to centralize and automate data protection across AWS services. AWS Backup offers a cost-effective, fully managed, policy-based service that further simplifies data protection at scale. AWS Backup also helps you support your regulatory compliance or business policies for data protection. Together with AWS Organizations, AWS Backup enables you to centrally deploy data protection policies to configure, manage, and govern your backup activity across your organization's AWS accounts and resources, including Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Elastic Block Store (Amazon EBS) volumes, Amazon Relational Database Service (Amazon RDS) databases (including Amazon Aurora clusters), Amazon DynamoDB tables, Amazon Elastic File System (Amazon EFS) file systems, Amazon FSx for Lustre file systems, Amazon FSx for Windows File Server file systems, and AWS Storage Gateway volumes.

AWS Storage Gateway
The AWS Storage Gateway is a hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage. You can use the service for backup and archiving, disaster recovery, cloud data processing, storage tiering, and migration. Your applications connect to the service through a virtual machine or hardware gateway appliance using standard storage protocols, such as NFS, SMB, and iSCSI. The gateway connects to AWS storage services, such as Amazon S3, S3 Glacier, and Amazon EBS, providing storage for files, volumes, and virtual tapes in AWS. The service includes a highly optimized data transfer mechanism, with bandwidth management, automated network resilience, and efficient data transfer, along with a local cache for low-latency on-premises access to your most active data.
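To ground the storage section in code, the following sketch stores and retrieves a small object in Amazon S3 with Boto3. The bucket name and key are placeholders for resources you already own; they are not taken from this paper.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket that is assumed to exist in your account.
BUCKET = "example-backup-bucket"

# Upload a small object with server-side encryption enabled.
s3.put_object(
    Bucket=BUCKET,
    Key="reports/2021-08/summary.txt",
    Body=b"monthly summary contents",
    ServerSideEncryption="AES256",
)

# Read it back; Body is a streaming object, so call .read() to get the bytes.
obj = s3.get_object(Bucket=BUCKET, Key="reports/2021-08/summary.txt")
data = obj["Body"].read()
print(len(data), "bytes retrieved")
```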
Amazon Web Services AWS Whitepaper Conclusion Next Steps Reinvent how you work with IT by signing up for the AWS Free Tier which enables you to gain handson experience with a broad selection of AWS products and services Within the AWS Free Tier you can test workloads and run applications to learn more and build the right solution for your organization You can also contact AWS Sales and Business Development By signing up for AWS you have access to Amazon’s cloud computing services Note: The signup process requires a credit card which will not be charged until you start using services There are no longterm commitments and you can stop using AWS at any time To help familiarize you with AWS view these short videos that cover topics like creating an account launching a virtual server storing media and more Learn about the breadth and depth of AWS on our general AWS Channel and AWS Online Tech Talks Get hands on experience from our selfpaced labs Conclusion AWS provides building blocks that you can assemble quickly to support virtually any workload With AWS you’ll find a complete set of highly available services that are designed to work together to build sophisticated scalable applications You have access to highly durable storage lowcost compute highperformance databases management tools and more All this is available without upfront cost and you pay for only what you use These services help organizations move faster lower IT costs and scale AWS is trusted by the largest enterprises and the hottest startups to power a wide variety of workloads including web and mobile applications game development data processing and warehousing storage archive and many others 75Overview of Amazon Web Services AWS Whitepaper Resources •AWS Architecture Center •AWS Whitepapers •AWS Architecture Monthly •AWS Architecture Blog •This Is My Architecture videos •AWS Documentation 76Overview of Amazon Web Services AWS Whitepaper Contributors Document Details Contributors The following individuals and organizations contributed to this document: •Sajee Mathew AWS Principal Solutions Architect Document Revisions To be notified about updates to this whitepaper subscribe to the RSS feed updatehistorychange updatehistorydescription updatehistorydate Whitepaper updated (p 77) Added new services and updated information throughoutAugust 5 2021 Minor update (p 77) Minor text updates to improve accuracy and fix linksApril 12 2021 Minor update (p 77) Minor text updates to improve accuracyNovember 20 2020 Minor update (p 77) Fixed incorrect link November 19 2020 Minor update (p 77) Fixed incorrect link August 11 2020 Minor update (p 77) Fixed incorrect link July 17 2020 Minor updates (p 77) Minor text updates to improve accuracyJanuary 1 2020 Minor updates (p 77) Minor text updates to improve accuracyOctober 1 2019 Whitepaper updated (p 77) Added new services and updated information throughoutDecember 1 2018 Whitepaper updated (p 77) Added new services and updated information throughoutApril 1 2017 Initial publication (p 77) Overview of Amazon Web Services publishedJanuary 1 2014 77Overview of Amazon Web Services AWS Whitepaper AWS glossary For the latest AWS terminology see the AWS glossary in the AWS General Reference 78
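The Amazon S3 behavior described in the storage section above can be illustrated with a short example. The following is a minimal sketch using the AWS Tools for Windows PowerShell; the region, bucket name, object key, and file paths are hypothetical placeholders rather than values taken from the whitepaper.

# Minimal sketch: create a bucket, upload an object, and read it back.
# Assumes the AWS Tools for PowerShell are installed and credentials are already configured.
Import-Module AWSPowerShell
Set-DefaultAWSRegion -Region us-east-1                     # placeholder region

$bucket = "example-overview-demo-bucket"                   # placeholder name; buckets must be globally unique
New-S3Bucket -BucketName $bucket

# Upload a local file as an object identified by bucket and key.
Write-S3Object -BucketName $bucket -File "C:\temp\report.csv" -Key "reports/report.csv"

# List the objects in the bucket, then download the object to a new local path.
Get-S3Object -BucketName $bucket | Select-Object Key, Size
Read-S3Object -BucketName $bucket -Key "reports/report.csv" -File "C:\temp\report-copy.csv"

Because S3 is an object store rather than a file system, each call addresses a whole object by bucket and key, which is what makes it suitable for the backup, archive, and analytics use cases described above.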
|
General
|
consultant
|
Best Practices
|
Overview_of_AWS_Security__Analytics_Mobile_and_Application_Services
|
Overview of AWS Security - Analytics Services, Mobile and Applications Services
June 2016
(Please consult http://aws.amazon.com/security/ for the latest version of this paper)
THIS PAPER HAS BEEN ARCHIVED. For the latest technical content, see https://docs.aws.amazon.com/security/

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS' current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS' products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Analytics Services
Amazon Web Services provides cloud-based analytics services to help you process and analyze any volume of data, whether your need is for managed Hadoop clusters, real-time streaming data, petabyte-scale data warehousing, or orchestration.

Amazon Elastic MapReduce (Amazon EMR) Security
Amazon Elastic MapReduce (Amazon EMR) is a managed web service you can use to run Hadoop clusters that process vast amounts of data by distributing the work and data among several servers. It utilizes an enhanced version of the Apache Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. You simply upload your input data and a data processing application into Amazon S3; Amazon EMR then launches the number of Amazon EC2 instances you specify. The service begins the job flow execution while pulling the input data from Amazon S3 into the launched Amazon EC2 instances. Once the job flow is finished, Amazon EMR transfers the output data to Amazon S3, where you can then retrieve it or use it as input in another job flow.

When launching job flows on your behalf, Amazon EMR sets up two Amazon EC2 security groups: one for the master nodes and another for the slaves. The master security group has a port open for communication with the service. It also has the SSH port open to allow you to SSH into the instances, using the key specified at startup. The slaves start in a separate security group, which only allows interaction with the master instance. By default, both security groups are set up to not allow access from external sources, including Amazon EC2 instances belonging to other customers. Since these are security groups within your account, you can reconfigure them using the standard EC2 tools or dashboard. To protect customer input and output datasets, Amazon EMR transfers data to and from Amazon S3 using SSL.

Amazon EMR provides several ways to control access to the resources of your cluster. You can use AWS IAM to create user accounts and roles and configure permissions that control which AWS features those users and roles can access. When you launch a cluster, you can associate an Amazon EC2 key pair with the cluster, which you can then use when you connect to the cluster using SSH. You can also set permissions that allow users other than the default Hadoop user to submit jobs to your cluster. By default, if an IAM user
launches a cluster that cluster is hidden from other IAM users on the AWS account This filtering occurs on all Amazon EMR interfaces— the console CLI API and SDKs —and helps prevent IAM users from accessing and inadvertently changing clusters created by other IAM users It is useful for clusters that are intended to be viewed by only a single IAM user and the main AWS account You also h ave the option to make a cluster visible and accessible to all IAM users under a single AWS account For an additional layer of protection you can launch the EC2 instances of your EMR cluster into an Amazon VPC which is like launching it into a private subnet This allows you to control access to the entire subnetwork You can also launch the cluster into a VPC and enable the cluster to access resources on your internal network using a VPN connection You can encrypt the input data before you upload it t o Amazon S3 using any common data encryption tool If you do encrypt the data before it is uploaded you then need to add a decryption step to the beginning of your job flow when Amazon Elastic MapReduce fetches the data from Amazon S3 Archived Page 4 of 13 Amazon Kinesis Security Amazon Kinesis is a managed service designed to handle real time streaming of big data It can accept any amount of data from any number of sources scaling up and down as needed You can use Kinesis in situations that call for large scale real time data ingestion and processing such as server logs social media or market data feeds and web clickstream data Applications read and write data records to Amazon Kinesis in streams You can create any number of Kinesis streams to capture store and transport data Amazon Kinesis automatically manages the infrastructure storage networking and configuration needed to collect and process your data at the level of throughput your streaming applications need You don’t have to worry about provisioning deployment or ongoing maintenance of hardware software or other services to enable real time capture and storage of large scale data Amazon Kinesis also synchronously replicates data across three facilities in an AWS Region providing high availabili ty and data durability In Amazon Kinesis data records contain a sequence number a partition key and a data blob which is an un interpreted immutable sequence of bytes The Amazon Kinesis service does not inspect interpret or change the data in the blob in any way Data records are accessible for only 24 hours from the time they are added to an Amazon Kinesis stream and then they are automatically discarded Your application is a consumer of an Amazon Kinesis stream which typically runs on a flee t of Amazon EC2 instances A Kinesis application uses the Amazon Kinesis Client Library to read from the Amazon Kinesis stream The Kinesis Client Library takes care of a variety of details for you including failover recovery and load balancing allowing your application to focus on processing the data as it becomes availabl e After processing the record your consumer code can pass it along to another Kinesis stream; write it to an Amazon S3 bucket a Redshift data warehouse or a DynamoDB table; or simply discard it A connector library is available to help you integrate Kinesis with other AWS services (such as DynamoDB Redshift and Amazon S3) as well as third party products like Apache Storm You can control logical access to Kinesis resources and management functions by creating users under your AWS Account using AWS IAM and controlling which Kinesis operations these users have 
permission to perform To facilitate running your producer or consumer applications on an Amazon EC2 instance you can configure that instance with an IAM role That way AWS credentials that reflect the permissions associated with the IAM role are made available to applications on the instance which means you don’t have to use your long term AWS security credentials Roles ha ve the added benefit of providing temporary credentials that expire within a short timeframe which adds an additional measure of protection See the Using IAM Guide for more information about IAM roles The Amazon Kinesis API is only accessible via an SS Lencrypted endpoint (kinesisus east 1amazonawscom) to help ensure secure transmission of your data to AWS You must connect to that endpoint to access Kinesis but you can then use the API to direct AWS Kinesis to create a stream in any AWS Region AWS Data Pipeline Security The AWS Data Pipeline service helps you process and move data between different data sources at specified intervals using data driven workflows and built in dependency checking Archived Page 5 of 13 When you create a pipeline you define data sources p reconditions destinations processing steps and an operational schedule Once you define and activate a pipeline it will run automatically according to the schedule you specified With AWS Data Pipeline you don’t have to worry about checking resource availability managing inter task dependencies retrying transient failures/timeouts in individual tasks or creating a failure notification system AWS Data Pipeline takes care of launching the AWS services and resources your pipeline needs to process your data (eg Amazon EC2 or EMR) and transferring the results to storage (eg Amazon S3 RDS DynamoDB or EMR) When you use the console AWS Data Pipeline creates the necessary IAM roles and policies including a trusted entities list for you IAM ro les determine what your pipeline can access and the actions it can perform Additionally when your pipeline creates a resource such as an EC2 instance IAM roles determine the EC2 instance's permitted resources and actions When you create a pipeline you specify one IAM role that governs your pipeline and another IAM role to govern your pipeline's resources (referred to as a "resource role") which can be the same role for both As part of the security best practice of least privilege we recommend that you consider the minimum permissions necessary for your pipeline to perform work and define the IAM roles accordingly Like most AWS services AWS Data Pipeline also provides the option of secure (HTTPS) endpoints for access via SSL Deployment and Management Services Amazon Web Services provides a variety of tools to help with the deployment and management of your applications This includes services that allow you to create individual user accounts with credentials for access to AWS services It also in cludes services for creating and updating stacks of AWS resources deploying applications on those resources and monitoring the health of those AWS resources Other tools help you manage cryptographic keys using hardware security modules (HSMs) and log AWS API activity for security and compliance purposes AWS Identity and Access Management (AWS IAM) AWS IAM allows you to create multiple users and manage the permissions for each of these users within your AWS Account A user is an identity (within an AWS Account) with unique security credentials that can be used to access AWS Services AWS IAM eliminates the need to share passwords or keys and 
makes it easy to enable or disable a user’s access as appropriate AWS IAM enables you to implement security best practices such as least privilege by granting unique credentials to every user within your AWS Account and only granting permission to access the AWS services and resources required for the users to perform their jobs AWS IAM is secure by default; new users have no access to AWS until permissions are explicitly granted AWS IAM is also integrated with the AWS Marketplace so that you can control who in your Archived Page 6 of 13 organization can subscribe to the software and services offered in the Marketplace Since subsc ribing to certain software in the Marketplace launches an EC2 instance to run the software this is an important access control feature Using AWS IAM to control access to the AWS Marketplace also enables AWS Account owners to have fine grained control over usage and software costs AWS IAM enables you to minimize the use of your AWS Account credentials Once you create AWS IAM user accounts all interactions with AWS Services and resources should occur with AWS IAM user security credentials More information about AWS IAM is available on the AWS website Roles An IAM role uses temporary security credentials to allow you to delegate access to users or services that normally don't have access to your AWS resources A role is a set of permissions to access specific AWS resources but these permissions are not tied to a specific IAM user or group An authorized entity (eg mobile user EC2 instance) assumes a role and receives temporary security credentials for auth enticating to the resources defined in the role Temporary security credentials provide enhanced security due to their short life span (the default expiration is 12 hours) and the fact that they cannot be reused after they expire This can be particularly useful in providing limited controlled access in certain situations: • Federated (non AWS) User Access Federated users are users (or applications) who do not have AWS Accounts With roles you can give them access to your AWS resources for a limited amount of time This is useful if you have non AWS users that you can authenticate with an external service such as Microsoft Active Directory LDAP or Kerberos The temporary AWS credentials used with the roles provide identity federation between AWS and you r non AWS users in your corporate identity and authorization system • If your organization supports SAML 20 (Security Assertion Markup Language 20) you can create trust between your organization as an identity provider (IdP) and other organizations as service providers In AWS you can configure AWS as the service provider and use SAML to provide your users with federated single sign on (SSO) to the AWS Management Console or to get federated access to call AWS APIs • Roles are also useful if you create a mobile or web based application that accesses AWS resources AWS resources require security credentials for programmatic requests; however you shouldn't embed long term security credentials in your application because they are accessible to the application's users and can be difficult to rotate Instead you can let users sign in to your application using Login with Amazon Facebook or Google and then use their authentication information to assume a role and get temporary security credentials • Cross Account Access For organizations who use multiple AWS Accounts to manage their resources you can set up roles to provide users who have permissions in one account to access 
resources under another account For organizations who have personnel who only rarel y need access to resources under another account using roles helps ensures that credentials are provided temporarily only as needed • Applications Running on EC2 Instances that Need to Access AWS Resources If an Archived Page 7 of 13 application runs on an Amazon EC2 instanc e and needs to make requests for AWS resources such as Amazon S3 buckets or a DynamoDB table it must have security credentials Using roles instead of creating individual IAM accounts for each application on each instance can save significant time for customers who manage a large number of instances or an elastically scaling fleet using AWS Auto Scaling The temporary credentials include a security token an Access Key ID and a Secret Access Key To give a user access to certain resources you distribute the temporary security credentials to the user you are granting temporary access to When the user makes calls to your resources the user passes in the token and Access Key ID and signs the request with the Secret Access Key The token will not work with different access keys How the user passes in the token depends on the API and version of the AWS product the user is making calls to More information about temporary security credentials is available on the AWS website The use of temporary credentials means additional protection for you because you don’t have to manage or distribute long term credentials to temporary users In addition the temporary credentials get automatically loaded to the target instance so you don’t have to embed them somewhere unsafe l ike your code Temporary credentials are automatically rotated or changed multiple times a day without any action on your part and are stored securely by default Amazon CloudWatch Security Amazon CloudWatch is a web service that provides monitoring for AWS cloud resources starting with Amazon EC2 It provides customers with visibility into resource utilization operational performance and overall demand patterns including metrics such as CPU utilization disk reads and writes and network traffic You can set up CloudWatch alarms to notify you if certain thresholds are crossed or to take other automated actions such as adding or removing EC2 instances if Auto Scaling is enabled CloudWatch captures and summarizes utilization metrics natively for AWS resources but you can also have other logs sent to CloudWatch to monitor You can route your guest OS application and custom log files for the software installed on your EC2 instances to CloudWatch where they will be stored in durable fashion for as long as you'd like You can configure CloudWatch to monitor the incoming log entries for any desired symbols or messages and to surface the results as CloudWatch metrics You could for example monitor your web server's log files for 404 errors to detect bad inbound links or invalid user messages to detect unauthorized login attempts to your guest OS Like all AWS Services Amazon CloudWatch requires that every request made to its control API be authenticated so only authenticated users can access and manage CloudWatch Requests are signed with an HMAC SHA1 signature calculated from the request and the user’s private key Additionally the Amazon CloudWatch control API is only accessible via SSL encrypted endpoints You can further control access to Amazon CloudWatch by creating users under your AWS Account using AWS IAM and controlling what CloudWatch operations these users have permission to call AWS CloudHSM 
Security The AWS CloudHSM service provides customers with dedicated access to a hardware security module (HSM) appliance designed to provide secure cryptographic key storage and operations Archived Page 8 of 13 within an intrusion resistant tamper evident device You can generate store and manage the cryptographic keys used for data encryption so that they are accessible only by you AWS CloudHSM appliances are designed to securely store and process cryptographic key material for a wide variety of uses such as database encryption Digital Rights Management (DRM) Public Key Infrastructure (PKI) authentication and authorization document signing and transaction processing They support some of the strongest cryptographic algorithms available including AES RSA and ECC and many others The AWS CloudHSM service is designed to be used with Amazon EC2 and VPC providi ng the appliance with its own private IP within a private subnet You can connect to CloudHSM appliances from your EC2 servers through SSL/TLS which uses two way digital certificate authentication and 256 bit SSL encryption to provide a secure communicati on channel Selecting CloudHSM service in the same region as your EC2 instance decreases network latency which can improve your application performance You can configure a client on your EC2 instance that allows your applications to use the APIs provide d by the HSM including PKCS#11 MS CAPI and Java JCA/JCE (Java Cryptography Architecture/Java Cryptography Extensions) Before you begin using an HSM you must set up at least one partition on the appliance A cryptographic partition is a logical and phy sical security boundary that restricts access to your keys so only you control your keys and the operations performed by the HSM AWS has administrative credentials to the appliance but these credentials can only be used to manage the appliance not the HSM partitions on the appliance AWS uses these credentials to monitor and maintain the health and availability of the appliance AWS cannot extract your keys nor can AWS cause the appliance to perform any cryptographic operation using your keys The HSM appliance has both physical and logical tamper detection and response mechanisms that erase the cryptographic key material and generate event logs if tampering is detected The HSM is designed to detect tampering if the physical barrier of the HSM applianc e is breached In addition after three unsuccessful attempts to access an HSM partition with HSM Admin credentials the HSM appliance erases its HSM partitions When your CloudHSM subscription ends and you have confirmed that the contents of the HSM are no longer needed you must delete each partition and its contents as well as any logs As part of the decommissioning process AWS zeroizes the appliance permanently erasing all ke y material Mobile Services AWS mobile services make it easier for you to build ship run monitor optimize and scale cloud powered applications for mobile devices These services also help you authenticate users to your mobile application synchronize data and collect and analyze application usage Amazon Cognito Amazon Cognito provides identity and sync services for mobile and web based applications It simplifies the task of authenticating users and storing managing and syncing their data across multiple devices platforms and applications It provides temporary limited privilege credentials for both authenticated and unauthenticated users without having to manage any backend infrastructure Archived Page 9 of 13 Cognito works with well 
known identity providers like Google Facebook and Amazon to authenticate end users of your mobile and web applications You can take advantage of the identification and authorization features provided by these services instead of having to build and maintain your own Your applicati on authenticates with one of these identity providers using the provider’s SDK Once the end user is authenticated with the provider an OAuth or OpenID Connect token returned from the provider is passed by your application to Cognito which returns a new Cognito ID for the user and a set of temporary limited privilege AWS credentials To begin using Amazon Cognito you create an identity pool through the Amazon Cognito console The identity pool is a store of user identity information that is specific to your AWS account During the creation of the identity pool you will be asked to create a new IAM role or pick an existing one for your end users An IAM role is a set of permissions to a ccess specific AWS resources but these permissions are not tied to a specific IAM user or group An authorized entity (eg mobile user EC2 instance) assumes a role and receives temporary security credentials for authenticating to the AWS resources defined in the role Temporary security credentials provide enhanced security due to their short life span (the default expiration is 12 hours) and the fact that they cannot be reused after they expire The role you select has an impact on which AWS services your end users will be able to access with the temporary credentials By default Amazon Cognito creates a new role with limited permissions – end users only have access to the Cognito Sync service and Amazon Mobile Analytics If your application needs acce ss to other AWS resources such as Amazon S3 or DynamoDB you can modify your roles directly from the IAM management console With Amazon Cognito there’s no need to create individual AWS accounts or even IAM accounts for every one of your web/mobile app’s end users who will need to access your AWS resources In conjunction with IAM roles mobile users can securely access AWS resources and application features and even save data to the AWS cloud without having to create an account or log in However if th ey choose to do this later Cognito will merge data and identification information Because Amazon Cognito stores data locally as well as in the service your end users can continue to interact with their data even when they are offline Their offline dat a may be stale but anything they put into the dataset they can immediately retrieve whether they are online or not The client SDK manages a local SQLite store so that the application can work even when it is not connected The SQLite store functions as a cache and is the target of all read and write operations Cognito's sync facility compares the local version of the data to the cloud version and pushes up or pulls down deltas as needed Note that in order to sync data across devices your identity pool must support authenticated identities Unauthenticated identities are tied to the device so unless an end user authenticates no data can be synced across multiple devices With Cognito your application communicates directly with a supported public id entity provider (Amazon Facebook or Google) to authenticate users Amazon Cognito does not receive or store user credentials— only the OAuth or OpenID Connect token received from the identity provider Once Cognito receives the token it returns a new Cognito ID for the user and a set of temporary limited privilege 
AWS credentials Each Cognito identity has access only to its own data in the sync store and this data is encrypted when stored In addition all identity data is transmitted over HTTPS The unique Archived Page 10 of 13 Amazon Cognito identifier on the device is stored in the appropriate secure location —on iOS for example the Cognito identifier is stored in the iOS keychain User data is cached in a local SQLite database within the application’s sandbox; if you require additional security you can encrypt this identity data in the local cache by implementing encryption in your application Amazon Mobile Analytics Amazon Mobile Analytics is a service for collecting visualizing and understanding mobile application usage data It enables you to track customer behaviors aggregate metrics and identify mean ingful patterns in your mobile applications Amazon Mobile Analytics automatically calculates and updates usage metrics as the data is received from client devices running your app and displays the data in the console You can integrate Amazon Mobile Analytics with your application without requiring users of your app to be authenticated with an identity provider (like Google Facebook or Amazon) For these unauthenticated users Mobile Analytics works with Amazon Cognito to provide temporary limited privilege credentials To do this you first create an identity pool in Cognito The identity pool will use IAM roles which is a set of perm issions not tied to a specific IAM user or group but which allows an entity to access specific AWS resources The entit y assumes a role and receives tempora ry security credentials f or authenticatin g to the AWS resources defined in the role By default Amazon Cog nito creates a new role with limited permissions – end users only have access to the Cog nito Sync service and Amazon Mobile Analytics If your application needs access to other AWS resources such as Amaz on S3 or Dynamo DB you can mod ify your roles direc tly from the IAM management console You can integrate the AWS Mobile SDK for Android or iOS into your application or use the Amazon Mobile Analytics REST API to send events from any connected device or service and visualize data in the reports The Amazon Mobile Analytics API is only accessible via an SSLencrypted endpoint Applications AWS applications are managed services that enable you to provide your users with secure centralized storage and work areas in the cloud Amazon WorkSpaces Amazon WorkSpaces is a managed desktop service that allows you to quickly provision cloudbased desktops for your users Simply choose a Windows 7 bundle that best meets the needs of your users and the number of WorkSpaces that you would like to launch Once the WorkSpaces are ready users receive an email informing them where they can down load the relevant client and log into their WorkSpace They can then access their cloud based desktops from a variety of endpoint devices including PCs laptops and mobile devices However your organization’s data is never sent to or stored on the end user device because Amazon WorkSpaces uses PC over IP (PCoIP ) which provides an interactive video stream without transmitting actual data The PCoIP protocol compresses encrypts and encodes the users’ desktop computing experience and transmits ‘pixels only’ across any standard IP network to end user devices Archived Page 11 of 13 In order to access their WorkSpace users must sign in using a set of unique credentials or their regular Active Directory credentials When you integrate Amazon WorkSpaces with 
your corporate Active Directory each WorkSpace joins your Active Directory domain and can be managed just like any other desktop in your organization This means that you can use Active Directory Group Policies to manage your users’ WorkSpaces to specify configuration options that control the desktop If you choose not to use Active Directory or other type of on premises directory to manage your user WorkSpaces you can create a private cloud directory within Amaz on WorkSpaces that you can use for administration To provide an additional layer of security you can also require the use of multi factor authentication upon sign in in the form of a hardware or software token Amazon WorkSpaces supports MFA using an on premise Remote Authentication Dial In User Service (RADIUS) server or any security provider that supports RADIUS authentication It currently supports the PAP CHAP MS CHAP1 and MS CHAP2 protocols along with RADIUS proxies Each Workspace resides on i ts own EC2 instance within a VPC You can create WorkSpaces in a VPC you already own or have the WorkSpaces service create one for you automatically using the WorkSpaces Quick Start option When you use the Quick Start option WorkSpaces not only creates the VPC but it performs several other provisioning and configuration tasks for you such as creating an Internet Gateway for the VPC setting up a directory within the VPC that is used to store user and WorkSpace information creating a directory administr ator account creating the specified user accounts and adding them to the directory and creating the WorkSpace instances Or the VPC can be connected to an on premises network using a secure VPN connection to allow access to an existing on premises Active Directory and other intranet resources You can add a security group that you create in your Amazon VPC to all the WorkSpaces that belong to your Directory This allows you to control network access from Amazon WorkSpaces in your VPC to other resources in your Amazon VPC and on premises network Persistent storage for WorkSpaces is provided by Amazon EBS and is automatically backed up twice a day to Amazon S3 If WorkSpaces Sync is enabled on a WorkSpace the folder a user chooses to sync will be continuo usly backed up and stored in Amazon S3 You can also use WorkSpaces Sync on a Mac or PC to sync documents to or from your WorkSpace so that you can always have access to your data regardless of the desktop computer you are using Because it’s a managed service AWS takes care of several security and maintenance tasks like daily backups and patching Updates are delivered automatically to your WorkSpaces during a weekly maintenance window You can control how patching is configured for a user’s WorkSpace B y default Windows Update is turned on but you have the ability to customize these settings or use an alternative patch management approach if you desire For the underlying OS Windows Update is enabled by default on WorkSpaces and configured to instal l updates on a weekly basis You can use an alternative patching approach or to configure Windows Update to perform updates at a time of your choosing Archived Page 12 of 13 You can use IAM to control who on your team can perform administrative functions like creating or delet ing WorkSpaces or setting up user directories You can also set up a WorkSpace for directory administration install your favorite Active Directory administration tools and create organizational units and Group Policies in order to more easily apply Activ e Directory changes for all your 
WorkSpaces users Amazon WorkDocs Amazon WorkDocs is a managed enterprise storage and sharing service with feedback capabilities for user collaboration Users can store any type of file in a WorkDocs folder and allow others to view and download them Commenting and annotation capabilities work on certain file types such as MS Word and without requiring the application that was used to originally create the file WorkDocs notifies contributors about r eview activities and deadlines via email and performs versioning of files that you have synced using the WorkDocs Sync application User information is stored in an Active Directory compatible network directory You can either create a new directory in the cloud or connect Amazon WorkDocs to your on premises directory When you create a cloud directory using WorkDocs’ quick start setup it also creates a directory administrator account with the administrator email as the username An email is sent to your administrator with instructions to complete registration The administrator then uses this account to manage your directory When you create a cloud directory using WorkDocs’ quick start setup it also creates and configures a VPC for use with the direct ory If you need more control over the directory configuration you can choose the standard setup which allows you to specify your own directory domain name as well as one of your existing VPCs to use with the directory If you want to use one of your ex isting VPCs the VPC must have an Internet gateway and at least two subnets Each of the subnets must be in a different Availability Zone Using the Amazon WorkDocs Management Console administrators can view audit logs to track file and user activity by time IP address and device and choose whether to allow users to share files with others outside their organization Users can then control who can access individual files and disable downloads of files they share All data in transit is encrypted using industry standard SSL The WorkDocs web and mobile applications and desktop sync clients transmit files directly to Amazon WorkDocs using SSL WorkDocs users can also utilize Multi Factor Authentication or MFA if their organization has deployed a Radius server MFA uses the following factors: username password and methods supported by the Radius server The protocols supported are PAP CHAP MS CHAPv1 and MS CHAPv2 You choose the AWS Region where each WorkDocs site’s files are stored Amazon WorkDocs is currently available in the US East (Virginia) US West (Oregon) and EU (Ireland) AWS Regions All files comments and annotations stored in WorkDocs are automatically encrypted with AES 256 encryption Further Reading https://awsamazoncom/security/security resources/ Introduction to AWS Security Processes Archived Page 13 of 13 Overview of AWS Security Storage Services Overview of AWS Security Database Services Overview of AWS Security Compute Services Overview of AWS Security Application Services Overview of AWS Security Analytics Mobile and Application Services Overview of AWS Security – Network Services
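The paper above leans heavily on IAM roles and short-lived credentials, whether for federated users, cross-account access, or applications running on EC2. As a concrete illustration, the sketch below assumes a role with the AWS Tools for Windows PowerShell and then uses the temporary credentials it returns. The account ID, role name, and session name are hypothetical placeholders, not values taken from the paper.

# Minimal sketch: obtain temporary credentials by assuming an IAM role,
# then use them for a subsequent call. Assumes the caller is already
# authorized (via sts:AssumeRole) for the placeholder role below.
Import-Module AWSPowerShell

$roleArn = "arn:aws:iam::111122223333:role/ReadOnlyAuditor"   # placeholder account ID and role name
$session = Use-STSRole -RoleArn $roleArn -RoleSessionName "temp-audit-session"

# The response carries an access key, secret key, and session token that expire automatically.
$creds = $session.Credentials

# Use the temporary credentials explicitly for a single call...
Get-S3Bucket -AccessKey $creds.AccessKeyId -SecretKey $creds.SecretAccessKey -SessionToken $creds.SessionToken

# ...or store them for the remainder of the shell session.
Set-AWSCredential -AccessKey $creds.AccessKeyId -SecretKey $creds.SecretAccessKey -SessionToken $creds.SessionToken

Because the credentials expire automatically and cannot be reused afterward, no long-term keys need to be distributed to the caller, which is the protection the roles discussion above emphasizes.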
|
General
|
consultant
|
Best Practices
|
Overview_of_AWS_Security__Application_Services
|
ArchivedOverview of AWS Security Application Services June 2016 (Please c onsul t http://aws amazon com/se curity / for the latest versi on of this paper) THIS PAPER HAS BEEN ARCHIVED For the latest technical content see https://docsawsamazoncom/security/Archived Page 2 of 9 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’ current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’ products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Page 3 of 9 Application Services Amazon Web Services offers a variety of managed services to use with your applications including services that provide application streaming queueing push notification email delivery search and transcoding Amazon CloudSearch Security Amazon Cloud Search is a managed service in the cloud that makes it easy to set up manage and scale a search solution for your website Amazon CloudSearch enables you to search large collections of data such as web pages document files forum posts or product infor mation It enables you to quickly add search capabilities to your website without having to become a search expert or worry about hardware provisioning setup and maintenance As your volume of data and traffic fluctuates Amazon CloudSearch automatically scales to meet your needs An Amazon CloudSearch domain encapsulates a collection of data you want to search the search instances that process your search requests and a configuration that controls how your data is indexed and searched You create a se parate search domain for each collection of data you want to make searchable For each domain you configure indexing options that describe the fields you want to include in your index and how you want to use them text options that define domain specific stopwords stems and synonyms rank expressions that you can use to customize how search results are ranked and access policies that control access to the domain’s document and search endpoints Access to your search domain's endpoints is restricted by IP address so that only authorized hosts can submit documents and send search requests IP address authorization is used only to control access to the document and search endpoints All Amazon CloudSearch configuration requests must be authenticated using standard AWS authentication Amazon CloudSearch provides separate endpoints for accessing the configuration search and document services: • You use the configuration service to create and manage your search domains The region specific configuration serv ice endpoints are of the form: cloudsearchregionamazonawscom For example cloudsearchus east 1amazonawscom For a current list of supported regions see Regions and Endpoints in the AWS General Reference The document service endpoint is used to submit documents to the domain for indexing and is accessed through a domain specific endpoint: http://doc domainname 
domainidus east1cloudsearchamazonawscom • The search endpoint is used to submit search requests to the domain and is accessed through a domain specific endpoint: http ://search domainname domain iduseast 1cloudsearchamazonawscom Note that if you do not have a static IP address you must re authorize your computer whenever your IP address changes If your IP address is assigned dynamically it is also likely that you're sharing that address with other computers on your network This means that when you authorize the IP address all computers that share it will be able to access your search domain's document service endpoint Archived Page 4 of 9 Like all AWS Services Amazon CloudSearch requires that every request made to its control API be authenticated so only authenticated users can access and manage your Clou dSearch domain API requests are signed with an HMAC SHA1 or HMAC SHA256 signature calculated from the request and the user’s AWS Secret Access key Additionally the Amazon CloudSearch control API is accessible via SSL encrypted endpoints You can control access to Amazon CloudSearch management functions by creating users under your AWS Account using AWS IAM and controlling which CloudSearch operations these users have permission to perform Amazon Simple Queue Service (Amazon SQS) Security Amazon SQS is a highly reliable scalable message queuing service that enables asynchronous message based communication between distributed components of an application The components can be computers or Amazon EC2 instances or a combination of both With Amazon SQS you can send any number of messages to an Amazon SQS queue at any time from any component The messages can be retrieved from the same component or a different one right away or at a later time (within 14 days) Messages are highly durable; each message is persistently stored in highly available highly reliable queues Multiple processes can read/write from/to an Amazon SQS queue at the same time without interfering with each other Amazon SQS access is granted based on an AWS Account or a user created wi th AWS IAM Once authenticated the AWS Account has full access to all user operations An AWS IAM user however only has access to the operations and queues for which they have been granted access via policy By default access to each individual queue i s restricted to the AWS Account that created it However you can allow other access to a queue using either an SQS generated policy or a policy you write Amazon SQS is accessible via SSL encrypted endpoints The encrypted endpoints are accessible from both the Internet and from within Amazon EC2 Data stored within Amazon SQS is not encrypted by AWS; however the user can encrypt data before it is uploaded to Amazon SQS provided that the application utilizing the queue has a means to decrypt the messag e when retrieved Encrypting messages before sending them to Amazon SQS helps protect against access to sensitive customer data by unauthorized persons including AWS Amazon Simple Notification Service (Amazon SNS) Security Amazon Simple Notification Ser vice (Amazon SNS) is a web service that makes it easy to set up operate and send notifications from the cloud It provides developers with a highly scalable flexible and cost effective capability to publish messages from an application and immediately deliver them to subscribers or other applications Amazon SNS provides a simple web services interface that can be used to create topics that customers want to notify applications (or people) about subscribe clients 
to these Archived Page 5 of 9 topics publish messages and have these messages delivered over clients’ protocol of choice (ie HTTP/HTTPS email etc) Amazon SNS delivers notifications to clients using a “push” mechanism that eliminates the need to periodically check or “poll” for new information and updates Amazon SNS can be leveraged to build highly reliable event driven workflows and messaging applications without the need for complex middleware and application management The potential uses for Amazon SNS include monitoring applications workflow systems time sensitive information updates mobile applications and many others Amazon SNS provides access control mechanisms so that topics and messages are secured against unauthorized access Topic owners can set policies for a topic that restrict who can p ublish or subscribe to a topic Additionally topic owners can encrypt transmission by specifying that the delivery mechanism must be HTTPS Amazon SNS access is granted based on an AWS Account or a user created with AWS IAM Once authenticated the AWS Account has full access to all user operations An AWS IAM user however only has access to the operations and topics for which they have been granted access via policy By default access to each individual topic is restricted to the AWS Account that crea ted it However you can allow other access to SNS using either an SNS generated policy or a policy you write Amazon Simple Workflow Service (Amazon SWF) Security The Amazon Simple Workflow Service (SWF) makes it easy to build applications that coordina te work across distributed components Using Amazon SWF you can structure the various processing steps in an application as “tasks” that drive work in distributed applications and Amazon SWF coordinates these tasks in a reliable and scalable manner Amaz on SWF manages task execution dependencies scheduling and concurrency based on a developer’s application logic The service stores tasks dispatches them to application components tracks their progress and keeps their latest state Amazon SWF provides simple API calls that can be executed from code written in any language and run on your EC2 instances or any of your machines located anywhere in the world that can access the Internet Amazon SWF acts as a coordination hub with which your application ho sts interact You create desired workflows with their associated tasks and any conditional logic you wish to apply and store them with Amazon SWF Amazon SWF access is granted based on an AWS Account or a user created with AWS IAM All actors that participate in the execution of a workflow —deciders activity workers workflow administrators —must be IAM users under the AWS Account that owns the Amazon SWF resources You cannot grant users associated with other AWS Accounts access to your Amazon SWF workflows An AWS IAM user however only has access to the workflows and resources for which they have been granted access via policy Amazon Simple Email Service (Amazon SES) Security Archived Page 6 of 9 Amazon Simple Email Service (SES) is an outbound only emailsending service b uilt on Amazon’s reliable and scalable infrastructure Amazon SES helps you maximize email deliverability and stay informed of the delivery status of your emails Amazon SES integrates with other AWS services making it easy to send emails from applications being hosted on services such as Amazon EC2 Unfortunately with other email systems it's possible for a spammer to falsify an email header and spoof the originating email address so that it appears as 
though the email originated from a different sourc e To mitigate these problems Amazon SES requires users to verify their email address or domain in order to confirm that they own it and to prevent others from using it To verify a domain Amazon SES requires the sender to publish a DNS record that Amazo n SES supplies as proof of control over the domain Amazon SES periodically reviews domain verification status and revokes verification in cases where it is no longer valid Amazon SES takes proactive steps to prevent questionable content from being sent so that ISPs receive consistently high quality email from our domains and therefore view Amazon SES as a trusted email origin Below are some of the features that maximize deliverability and dependability for all of our senders: • Amazon SES uses cont entfiltering technologies to help detect and block messages containing viruses or malware before they can be sent • Amazon SES maintains complaint feedback loops with major ISPs Complaint feedback loops indicate which emails a recipient marked as spam Amazon SES provides you access to these delivery metrics to help guide your sending strategy • Amazon SES uses a variety of techniques to measure the quality of each user’s sending These mechanisms help identify and disable attempts to use Amazon SES for u nsolicited mail and detect other sending patterns that would harm Amazon SES’s reputation with ISPs mailbox providers and anti spam services • Amazon SES supports authentication mechanisms such as Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) When you authenticate an email you provide evidence to ISPs that you own the domain Amazon SES makes it easy for you to authenticate your emails If you configure your account to use Easy DKIM Amazon SES will DKIM sign your emails on your b ehalf so you can focus on other aspects of your email sending strategy To ensure optimal deliverability we recommend that you authenticate your emails As with other AWS servi ces you use securit y credentia ls to verify who you are and whether you have perm ission to interact with Amazon SES For information about whic h credentials to use see Using Credentials with Amazon SES Amazon SES also integrates with AWS IAM so that you can specif y which Amazon SES API actions a user can perform If you choose to communicate with Amazon SES through its SMTP interface you are required to encrypt your connection using TLS Amazon SES supports two mechanisms for establishing a TLS encrypted connection: STARTTLS and TLS Wrapper If you choose to commu nicate with Amazon SES over HTTP then all communication will be protected by TLS through Amazon SES’s HTTPS endpoint When delivering email to its final Archived Page 7 of 9 destination Amazon SES encrypts the email content with opportunistic TLS if supported by the receiver Amazon Elastic Transcoder Service Security The Amazon Elastic Transcoder service simplifies and automates what is usually a complex process of converting media files from one format size or quality to another The Elastic Transcoder service converts standard definition (SD) or high definition (HD) video files as well as audio files It reads input from an Amazon S3 bucket transcodes it and writes the resulting file to another Amazon S3 bucket You can use the same bucket for input and output and the buckets can be in any AWS region The Elastic Transcoder accepts input files in a wide variety of web consumer and professional formats Output file types include the MP3 MP4 OGG TS WebM HLS using MPEG 2 TS and 
Smooth Streaming using fmp4 container types storing H264 or VP8 video and AAC MP3 or Vorbis audio You'll start with one or more input files and create transcoding jobs in a type of workflow called a transcoding pipeline for each file When you create the pipeline you'll specify input and output buckets as well as an IAM role Eac h job must reference a media conversion template called a transcoding preset and will result in the generation of one or more output files A preset tells the Elastic Transcoder what settings to use when processing a particular input file You can specify many settings when you create a preset including the sample rate bit rate resolution (output height and width) the number of reference and keyframes a video bit rate some thumbnail creation options etc A best effort is made to start jobs in the order in which they’re submitted but this is not a hard guarantee and jobs typically finish out of order since they are worked on in parallel and vary in complexity You can pause and resume any of your pipelines if necessary Elastic Transcoder supports the use of SNS notifications when it starts and finishes each job and when it needs to tell you that it has detected an error or warning condition The SNS notification parameters are associated with each pipeline It can also use the List Jobs By Status function to find all of the jobs with a given status (eg "Completed") or the Read Job function to retrieve detailed information about a particular job Like all other AWS services Elastic Transcoder integrates with AWS Identity and Access Management (IAM) which allows you to control access to the service and to other AWS resources that Elastic Transcoder requires including Amazon S3 buckets and Amazon SNS topics By default IAM users have no access to Elastic Transcoder or to the resources that it uses If you want IAM users to be able to work with Elastic Transcoder you must explicitly grant them permissions Amazon Elastic Transcoder requires every request made to its control API be authenticated so only authenticated processes or users can create modify or delete their own Amazon Transcoder pipelines and presets Requests are signed with an HMAC SHA256 signature calculated from the request and a key derived from the user’s secret key Additionally the Amazon Elastic Transcoder API is only accessible via SSL encrypted endpoints Durability is provided by Amazon S3 where media files are redundantly stored on multiple devices across multiple facilities in an Amazon S3 region For added protection against Archived Page 8 of 9 users acciden tly deleting media files you can use the Versioning feature in Amazon S3 to preserve retrieve and restore every version of every object stored in an Amazon S3 bucket You can further protect versions using Amazon S3 Versioning's MFA Delete feature Once enabled for an Amazon S3 bucket each version deletion request must include the six digit code and serial number from your multi factor authentication device Amazon AppStream Security The Amazon AppStream service provides a framework for running streami ng applications particularly applications that require lightweight clients running on mobile devices It enables you to store and run your application on powerful parallel processing GPUs in the cloud and then stream input and output to any client device This can be a pre existing application that you modify to work with Amazon AppStream or a new application that you design specifically to work with the service The Amazon AppStream SDK simplifies the 
development of interactive streaming applications and client applications The SDK provides APIs that connect your customers’ devices directly to your application capture and encode audio and video stream content across the Internet in near real time decode content on client devices and return user inpu t to the application Because your application's processing occurs in the cloud it can scale to handle extremely large computational loads Amazon AppStream deploys streaming applications on Amazon EC2 When you add a streaming application through the AWS Management Console the service creates the AMI required to host your application and makes your application available to streaming clients The service scales your application as needed within the capacity limits you have set to meet demand Clients usi ng the Amazon AppStream SDK automatically connect to your streamed application In most cases you’ll want to ensure that the user running the client is authorized to use your application before letting him obtain a session ID We recommend that you use some sort of entitlement service which is a service that authenticates clients and authorizes their connection to your application In this case the entitlement service will also call into the Amazon AppStream REST API to create a new streaming session for the client After the entitlement service creates a new session it returns the session identifier to the authorized client as a single use entitlement URL The client then uses the entitlement URL to connect to the application Your entitlement service can be hosted on an Amazon EC2 instance or on AWS Elastic Beanstalk Amazon AppStream utilizes an AWS CloudFormation template that automates the process of deploying a GPU EC2 instance that has the AppStream Windows Application and Windows Client SDK libraries installed; is configured for SSH RDC or VPN access; and has an elastic IP address assigned to it By using this template to deploy your standalone streaming server all you need to do is up load your application to the server and run the command to launch it You can then use the Amazon AppStream Service Simulator tool to test your application in standalone mode before deploying it into production Amazon AppStream also utilizes the STX Protocol to manage the streaming of your application from AWS to local devices The Amazon AppStream STX Protocol is a proprietary protocol used to stream high quality application video over varying network conditions; it monitors ne twork Archived Page 9 of 9 conditions and automatically adapts the video stream to provide a low latency and high resolution experience to your customers It minimizes latency while syncing audio and video as well as capturing input from your customers to be sent back to the a pplication running in AWS Further Reading https://awsamazoncom/security/security resources/ Introduction to AWS Security Processes Overview of AWS Security Storage Services Overview of AWS Security Database Services Overview of AWS Security Compute S ervices Overview of AWS Security Application Services Overview of AWS Security Analytics Mobile and Application Services Overview of AWS Security – Network Services
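To make the messaging patterns above concrete, the sketch below creates an SQS queue and an SNS topic with the AWS Tools for Windows PowerShell, publishes a message, and polls the queue. The queue name, topic name, email address, and region are placeholders, and the client-side encryption of message bodies recommended above for sensitive data is omitted for brevity.

# Minimal sketch: basic SQS and SNS operations over the SSL-encrypted service endpoints.
Import-Module AWSPowerShell
Set-DefaultAWSRegion -Region us-east-1                      # placeholder region

# Create a queue and send a message. The body here is plain text;
# encrypt it before sending if the content is sensitive, as noted above.
$queueUrl = New-SQSQueue -QueueName "demo-orders-queue"     # placeholder name
Send-SQSMessage -QueueUrl $queueUrl -MessageBody '{"orderId": 42}'

# Poll the queue, process the message, then delete it.
$msg = Receive-SQSMessage -QueueUrl $queueUrl
if ($msg) {
    Write-Host "Received:" $msg.Body
    Remove-SQSMessage -QueueUrl $queueUrl -ReceiptHandle $msg.ReceiptHandle -Force
}

# Create a topic, add an email subscriber, and publish a notification.
$topicArn = New-SNSTopic -Name "demo-alerts-topic"          # placeholder name
Connect-SNSNotification -TopicArn $topicArn -Protocol "email" -Endpoint "ops@example.com"
Publish-SNSMessage -TopicArn $topicArn -Subject "Demo alert" -Message "Queue processing completed."

Access to both the queue and the topic remains restricted to the creating account unless a resource policy grants wider access, consistent with the default behavior described above.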
|
General
|
consultant
|
Best Practices
|
Overview_of_AWS_Security__Compute_Services
|
Overview of AWS Security – Compute Services, June 2016 (Please consult http://aws.amazon.com/security/ for the latest version of this paper) THIS PAPER HAS BEEN ARCHIVED For the latest technical content see https://docs.aws.amazon.com/security/ © 2016, Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document is provided for informational purposes only. It represents AWS’ current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’ products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions or assurances from AWS, its affiliates, suppliers or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. AWS Service-Specific Security Not only is security built into every layer of the AWS infrastructure, but also into each of the services available on that infrastructure. AWS services are architected to work efficiently and securely with all AWS networks and platforms. Each service provides extensive security features to enable you to protect sensitive data and applications. Compute Services Amazon Web Services provides a variety of cloud-based computing services that include a wide selection of compute instances that can scale up and down automatically to meet the needs of your application or enterprise. Amazon Elastic Compute Cloud (Amazon EC2) Security Amazon Elastic Compute Cloud (EC2) is a key component in Amazon’s Infrastructure as a Service (IaaS), providing resizable computing capacity using server instances in AWS’ data centers. Amazon EC2 is designed to make web-scale computing easier by enabling you to obtain and configure capacity with minimal friction. You create and launch instances, which are collections of platform hardware and software. Multiple Levels of Security Security within Amazon EC2 is provided on multiple levels: the operating system (OS) of the host platform, the virtual instance OS or guest OS, a firewall, and signed API calls. Each of these items builds on the capabilities of the others. The goal is to prevent data contained within Amazon EC2 from being intercepted by unauthorized systems or users and to provide Amazon EC2 instances themselves that are as secure as possible without sacrificing the flexibility in configuration that customers demand. The Hypervisor Amazon EC2 currently utilizes a highly customized version of the Xen hypervisor, taking advantage of paravirtualization (in the case of Linux guests). Because paravirtualized guests rely on the hypervisor to provide support for operations that normally require privileged access, the guest OS has no elevated access to the CPU. The CPU provides four separate privilege modes, 0 through 3, called rings. Ring 0 is the most privileged and 3 the least. The host OS executes in Ring 0. However, rather than executing in Ring 0 as most operating systems do, the guest OS runs in a lesser-privileged Ring 1 and applications in the least-privileged Ring 3. This explicit virtualization of the physical resources leads to a clear separation between guest and hypervisor, resulting in additional security separation
between the two Instance Isolation Different instances running on the same physical machine are isolated from each other via the Xen hypervisor AWS is active in the Xen community which provides awareness of the latest developments In addition the AWS firewall resides within the hypervisor layer between the physical network interface and the instance's virtual interface All packets must pass through this layer thus an instance ’s neighbors have no more access to that instance than any other host on the Internet and can be treated as if they are on separate physical hosts The physical RAM is separated using similar mechanisms Customer instances have no access to raw disk devices but instead are presented with virtualized disks In addition memory allocated to guests is scrubbed (set to zero) by the hypervisor when it is unallocated to a guest The memory is not returned to the pool of free memory available for new allocations until the memory scrubbing is complete AWS recommends customers further protect their data using appropriate means One common solution is to run an encrypted file system on top of the virtualized disk device: Figure 3: Amazon EC2 Multiple Layers of Security Host Operating System : Administrators with a business need to access the management plane are required to use multi factor authentication to gain access to purposebuilt administration hosts These administrative hosts are systems that are specifically designed built configured and hardened to protect the management plane of the cloud All such access is logged and audited When an employee no longer has a business need to access the management plane the privileges and access to these hosts and relevant systems can be revoked Archived Page 5 of 8 Guest Operating System : Virtual instances are completely controlled by you the customer You have full root access or administrative control over accounts services and applications AWS does not have any access rights to your instances or the guest OS AWS recommends a base set of security best practices to include disabling passwordonly access to your guests and utilizing some form of multifactor authentication to gain access to your instances (or at a minimum certificatebased SSH Version 2 access) Additionally you should employ a privilege escalation mechanism with logging on a peruser basis For example if the guest OS is Linux after hardening your instance you should utilize certificate based SSHv2 to access the virtual instance disable remote root login use commandline logging and use ‘sudo’ for privilege escalation You should generate your own key pairs in order to guarantee that they are unique and not shared with other customers or with AWS AWS also supports the use of the Secure Shell (SSH) network protocol to enable you to log in securely to your UNIX/Linux EC2 instances Authentication for SSH used with AWS is via a public/private key pair to reduce the risk of unauthorized access to your instance You can also connect remotely to your Windows instances using Remote Desktop Protocol (RDP) by utilizing an RDP certificate generated for your instance You also control the updating and patching of your guest OS including security updates AWSprovided Windows and Linuxbased AMIs are updated regularly with the latest patches so if you do not need to preserve data or customizations on your running Amazon AMI instances you can simply relaunch new instances with the latest updated AMI In addition updates are provided for the Amazon Linux AMI via the Amazon Linux yum repositories 
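The key pair and SSH guidance above can be illustrated with a short sketch using the AWS SDK for Python (boto3). This is a minimal example under assumed values: the key pair name, AMI ID, and instance type are placeholders rather than recommendations, and in a hardened environment you might instead generate the key locally and upload only the public key with import_key_pair so that the private key never leaves your machine.

# Minimal sketch, assuming boto3; the key pair name, AMI ID, and instance type
# below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")

# EC2 generates the key pair and returns the private key exactly once;
# store it securely and never share it.
key = ec2.create_key_pair(KeyName="example-keypair")
with open("example-keypair.pem", "w") as f:
    f.write(key["KeyMaterial"])

# Launch an instance that is reachable only with certificate-based SSH,
# in line with the hardening guidance above (no password-only access).
reservation = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    KeyName="example-keypair",
    MinCount=1,
    MaxCount=1,
)
print(reservation["Instances"][0]["InstanceId"])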
Firewall : Amazon EC2 provides a complete firewall solution; this mandatory inbound firewall is configured in a default denyall mode and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic The traffic may be restricted by protocol by service port as well as by source IP address (individual IP or Classless InterDomain Routing (CIDR) block) The firewall can be configured in groups permitting different classes of instances to have different rules Consider for example the case of a traditional threetiered web application The group for the web servers would have port 80 (HTTP) and/or port 443 (HTTPS) open to the Internet The group for the application servers would have port 8000 (application specific) accessible only to the web server group The group for the database servers would have port 3306 (MySQL) open only to the application server group All three groups would permit adm inistrative access on port 22 (SSH) but only from the customer’s corporate network Highly secure applications can be deployed using this expressive mechanism See diagram below: Archived Page 6 of 8 Figure 4: Amazon EC2 Securi ty Group Firewall The firewall isn’t controlled through the guest OS; rather it requires your X509 certificate and key to authorize changes thus adding an extra layer of security AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall therefore enabling you to implement additional security through separation of duties The level of security afforded by the firewall is a function of which ports you open and for what duration and purpose The default state is to deny all incoming traffic and you should plan carefully what you will open when building and securing your applications Wellinformed traffic management and security design are still required on a per instance basis AWS further encourages you to apply additional perinstance filters with hostbased firewalls such as IPtables or the Windows Firewall and VPNs This can restrict both inbound and outbound traffic API Access: API calls to launch and terminate instances change firewall parameters and perform other functions are all signed by your Amazon Secret Access Key which could be either the AWS Accounts Secret Access Key or the Secret Access key of a user created with AWS IAM Without access to your Secret Access Key Amazon EC2 API calls cannot be made on your behalf In addition API calls can be encrypted with SSL to maintain confidentiality AWS recommends always using SSLprotected API endpoints Permissions: AWS IAM also enables you to further control what APIs a user has permissions to call Elastic Block Storage (Amazon EBS) Security: Amazon Elastic Block Storage (EBS) allows you to create storage volumes from 1 GB to 16 TB that can be mounted as devices by Archived Page 7 of 8 Amazon EC2 instances Storage volumes behave like raw unformatted block devices with user supplied device names and a block device interface You can create a file system on top of Amazon EBS volumes or use them in any other way you would use a block device (like a hard drive) Amazon EBS volume access is restricted to the AWS Account that created the volume and to the users under the AWS Account created with AWS IAM if the user has been granted access to the EBS operations thus denying all other AWS Accounts and users the permission to view or access the volume Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations as part of normal operation of 
those services and at no additional charge However Amazon EBS replication is stored within the same availability zone not across multiple zones; therefore it is highly recommended that you conduct regular snapshots to Amazon S3 for longterm data durability For customers who have architected complex transactional databases using EBS it is recommended that backups to Amazon S3 be performed through the database management system so that distributed transactions and logs can be checkpointed AWS does not perform backups of data that are maintained on virtual disks attached to running instances on Amazon EC2 You can make Amazon EBS volume snapshots publicly available to other AWS Accounts to use as the basis for creating your own volumes Sharing Amazon EBS volume snapshots does not provide other AWS Accounts with the permission to alter or delete the original snapshot as that right is explicitly reserved for the AWS Account that created the volume An EBS snapshot is a blocklevel view of an entire EBS volume Note that data that is not visible through the file system on the volume such as files that have been deleted may be present in the EBS snapshot If you want to create shared snapshots you should do so carefully If a volume has held sensitive data or has had files deleted from it a new EBS volume should be created The data to be contained in the shared snapshot should be copied to the new volume and the snapshot created from the new volume Amazon EBS volumes are presented to you as raw unformatted block devices that have been wiped prior to being made available for use Wiping occurs immediately before reuse so that you can be assured that the wipe process completed If you have procedures requiring that all data be wiped via a specific method such as those detailed in DoD 522022 M (“National Industrial Security Program Operating Manual “) or NIST 800 88 (“Guidelines for Media Sanitization”) you have the ability to do so on Amazon EBS Encrypti on of sensitive data is general ly a good securi ty practice and AWS pro vides the ability to encry pt EBS vo lumes and their snapshots with AES256 The encryption o ccurs on the servers that host the EC2 instances providing encryption of data as it moves between EC2 instances and EBS storage In order to be able to do this efficiently and with low laten cy the EBS encryption feature is only available on EC2 's more powerful instance types (eg M 3 C3 R3 G2) Auto Scaling Security Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define so that the number of Amazon EC2 instances you are using scales up Archived Page 8 of 8 seamlessly during demand spikes to maintain performance and scales down automatically during demand lulls to minimize costs Like all AWS services Auto Scaling requires that every request made to its control API be authenticated so only authenticated users can access and manage Auto Scaling Requests are signed with an HMAC SHA1 signature calculated from the request and the user’s private key However getting credentials out to new EC2 instances launched with Auto Scaling can be challenging for large or elastically scaling fleets To simplify this process you can use roles within IAM so that any new instances launched with a role will be given credentials automatically When you launch an EC2 instance with an IAM role temporary AWS security credentials with permissions specified by the role will be securely provisioned to the instance and will be made available to your application via 
the Amazon EC2 Instance Metadata Service. The Metadata Service will make new temporary security credentials available prior to the expiration of the current active credentials, so that valid credentials are always available on the instance. In addition, the temporary security credentials are automatically rotated multiple times per day, providing enhanced security. You can further control access to Auto Scaling by creating users under your AWS Account using AWS IAM and controlling what Auto Scaling APIs these users have permission to call. Further Reading: https://aws.amazon.com/security/security-resources/ – Introduction to AWS Security Processes; Overview of AWS Security – Storage Services; Overview of AWS Security – Database Services; Overview of AWS Security – Compute Services; Overview of AWS Security – Application Services; Overview of AWS Security – Analytics, Mobile and Application Services; Overview of AWS Security – Network Services
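As an illustration of the security group firewall discussed earlier in this paper, the sketch below builds the three-tiered layout described above (web servers exposing ports 80 and 443 to the Internet, application servers exposing port 8000 only to the web tier, database servers exposing port 3306 only to the application tier, and SSH on port 22 restricted to the corporate network). It uses the AWS SDK for Python (boto3); the VPC ID and corporate CIDR block are hypothetical placeholders.

# Sketch of the three-tier security group layout described earlier, assuming
# boto3. The VPC ID and corporate CIDR block are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"    # hypothetical VPC
corporate_cidr = "203.0.113.0/24"   # hypothetical corporate network

def create_group(name, description):
    return ec2.create_security_group(
        GroupName=name, Description=description, VpcId=vpc_id
    )["GroupId"]

web_sg = create_group("web-tier", "Web servers")
app_sg = create_group("app-tier", "Application servers")
db_sg = create_group("db-tier", "Database servers")

# Web tier: HTTP and HTTPS open to the Internet.
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)

# Application tier: port 8000 reachable only from the web tier's group.
ec2.authorize_security_group_ingress(
    GroupId=app_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 8000, "ToPort": 8000,
                    "UserIdGroupPairs": [{"GroupId": web_sg}]}],
)

# Database tier: MySQL reachable only from the application tier.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                    "UserIdGroupPairs": [{"GroupId": app_sg}]}],
)

# All tiers: administrative SSH only from the corporate network.
for sg in (web_sg, app_sg, db_sg):
    ec2.authorize_security_group_ingress(
        GroupId=sg,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                        "IpRanges": [{"CidrIp": corporate_cidr}]}],
    )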
|
General
|
consultant
|
Best Practices
|
Overview_of_AWS_Security__Database_Services
|
ArchivedOverview of AWS Security Database Services June 2016 (Please c onsul t http://aws amazon com/se curity / for the latest versi on of this paper) THIS PAPER HAS BEEN ARCHIVED For the latest technical content see https://docsawsamazoncom/security/Archived Page 2 of 11 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’ current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’ products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Page 3 of 11 Database Services Amazon Web Services provides a number of database solutions for developers and businesses— from managed relational and NoSQL database services to in memory caching as a service and petabyte scale data warehouse service Amazon DynamoDB Security Amazon DynamoDB is a managed NoSQL database service that provides fast and predictable performance with seamless scalability Amazon DynamoDB enables you to offload the administrative burdens of operating and scaling distributed databases to AWS so you don’t have to worry about hardware provisioning setup and configuration replication software patching or cluster scaling You can create a database table that can store and retrieve any amount of data and serve any level of request traffic DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity you specified and the amount of data stored while maintaining consistent fast performance All data items are stored on Solid Sta te Drives (SSDs) and are automatically replicated across multiple availability zones in a region to provide built in high availability and data durability You can set up automatic backups using a special template in AWS Data Pipeline that was created just for copying DynamoDB tables You can choose full or incremental backups to a table in the same region or a different region You can use the copy for disaster recovery (DR) in the event that an error in your code damages the original table or to federate DynamoDB data across regions to support a multi region application To control who can use the DynamoDB resources and API you set up permissions in AWS IAM In addition to controlling access at the resource level with IAM you can also control access at the database level —you can create database level permissions that allow or deny access to items (rows) and attributes (columns) based on the needs of your application These database level permissions are called fine grained access controls and you create them using an IAM policy that specifies under what circumstances a user or application can access a DynamoDB table The IAM policy can restrict access to individual items in a table access to the attributes in those items or both at the same time You can optionally use web identity federation to control access by application users who are authenticated by Login 
with Amazon Facebook or Google Web identity federation removes the need for creating individual IAM users; instead users can sign in to an identity provider and then obtain temporary security credentials from AWS Security Token Service (AWS STS) AWS STS returns temporary AWS credentials to the application Archived Page 4 of 11 and allows it to access the specific DynamoDB table In addition to requiring database and user permissions each request to the DynamoDB service must contain a valid HMAC SHA256 signature or the request is rejected The AWS SDKs automatically sign your requests; however if you want to write your own HTTP POST requests you must provi de the signature in the header of your request to Amazon DynamoDB To calculate the signature you must request temporary security credentials from the AWS Security Token Service Use the temporary security credentials to sign your requests to Amazon Dynam oDB Amazon DynamoDB is accessible via SSL encrypted endpoints The encrypted endpoints are accessible from both the Internet and from within Amazon EC2 Amazon Relational Database Service (Amazon RDS) Security Amazon RDS allows you to quickly create a relational database (DB) instance and flexibly scale the associated compute resources and storage capacity to meet application demand Amazon RDS manages the database instance on your behalf by performing backups handling failover and maintaining the data base software Currently Amazon RDS is available for Amazon Aurora MySQL PostgreSQL Oracle Microsoft SQL Server and MariaDB database engines Amazon RDS has multiple features that enhance reliability for critical production databases including DB security groups permissions SSL connections automated backups DB snapshots and multi AZ deployments DB instances can also be deployed in an Amazon VPC for additional network isolation Access Control When you first create a DB Instance within Amazon RDS you will create a master user account which is used only within the context of Amazon RDS to control access to your DB Instance(s) The master user account is a native database user account that allows you to log on to your DB Instance with all database privileges You can specify the master user name and password you want associated with each DB Instance when you create the DB Instance Once you have created your DB Instance you can connect to the database using the master user credentials Subsequ ently you can create additional user accounts so that you can restrict who can access your DB Instance Using AWS IAM you can further control access to your RDS DB instances AWS IAM enables you to control what RDS operations each individual AWS IAM user has permission to call Network Isolation Archived Page 5 of 11 For additional network access control you can run your DB Instances in an Amazon VPC Amazon VPC enables you to isolate your DB Instances by specifying the IP range you wish to use and connect to your existing IT infrastructure through industry standard encrypted IPsec VPN Running Amazon RDS in a VPC enables you to have a DB instance within a private subnet You can also set up a virtual private gateway that extends your corporate network into your VPC and al lows access to the RDS DB instance in that VPC Refer to the Amazon VPC User Guide for more details DB Instances deployed within an Amazon VPC can be accessed from the Internet or from Amazon EC2 Instances outside the VPC via VPN or bastion hosts that you can launch in your public subnet To use a bastion host you will need to set up a public subnet with 
an EC2 instance that acts as a SSH Bastion This public subnet mus t have an Internet gateway and routing rules that allow traffic to be directed via the SSH host which must then forward requests to the private IP address of your Amazon RDS DB instance DB Security Groups can be used to help secure DB Instances within a n Amazon VPC In addition network traffic entering and exiting each subnet can be allowed or denied via network ACLs All network traffic entering or exiting your Amazon VPC via your IPsec VPN connection can be inspected by your on premises security infra structure including network firewalls and intrusion detection systems Encryption You can encrypt connections between your application and your DB Instance using SSL For MySQL and SQL Server RDS creates an SSL certificate and installs the certificate o n the DB instance when the instance is provisioned For MySQL you launch the mysql client using the ssl_ca parameter to reference the public key in order to encrypt connections For SQL Server download the public key and import the certificate into you r Windows operating system Oracle RDS uses Oracle native network encryption with a DB instance You simply add the native network encryption option to an option group and associate that option group with the DB instance Once an encrypted connection is es tablished data transferred between the DB Instance and your application will be encrypted during transfer You can also require your DB instance to only accept encrypted connections Amazon RDS supports Transparent Data Encryption (TDE) for SQL Server (S QL Server Enterprise Edition) and Oracle (part of the Oracle Advanced Security option available in Oracle Enterprise Edition) The TDE feature automatically encrypts data before it is written to storage and automatically decrypts data when it is read from storage If you require your MySQL data to be encrypted while “at rest” in the database your application must manage the encryption and decryption of data Note that SSL support within Amazon RDS is for encrypting the connection between your application and your DB Instance; it should not be relied on for authenticating the DB Instance itself While SSL offers security benefits be aware that SSL encryption is a compute intensive Archived Page 6 of 11 operation and will increase the latency of your database connection To learn more about how SSL works with MySQL you can refer directly to the MySQL documentation found here To learn how SSL works with SQL Server you can read more in the RDS User Guid e Automated Backups and DB Snapshots Amazon RDS provides two different methods for ba cking up and restoring your DB Instance(s): automated backups and database snapshots (DB Snapshots) Turned on by default the automated backup feature of Amazon RDS enables point intime recovery for your DB Instance Amazon RDS will back up your database and transaction logs and store both for a user specified retention period This allows you to restore your DB Instance to any second during your retention period up to the last 5 minutes Your automatic backup retention period can be configured to up to 35 days During the backup window storage I/O may be suspended while your data is being backed up This I/O suspension typically lasts a few minutes This I/O suspension is avoided with Multi AZ DB deployments since the backup is taken from the standby DB Snapshots are user initiated backups o f your DB Instance These full database backups are stored by Amazon RDS until you explicitly delete them You can copy DB 
snapshots of any size and move them between any of AWS’ public regions or copy the same snapshot to multiple regions simultaneously You can then create a new DB Instance from a DB Snapshot whenever you desire DB Instance Replication Amazon cloud computing resources are housed in highly available data center facilities in different regions of the world and each region contains multi ple distinct locations called Availability Zones Each Availability Zone is engineered to be isolated from failures in other Availability Zones and to provide inexpensive low latency network connectivity to other Availability Zones in the same region Amazon RDS provides high availability and failover support for DB instances using Multi AZ deployments Multi AZ deployments for Oracle PostgreSQL MySQL and MariaDB DB instances use Amazon technology while SQL Server DB instances use SQL Server Mirrorin g Note that Amazon Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single region regardless of whether the instances in the DB cluster span multiple Availability Zones In a Multi AZ deployment Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy eliminate I/O freezes and minimi ze latency spikes during system backups In the event of DB instance or Availability Zone failure Amazon RDS will automatically failover to the standby so that database operations can resume quickly without administrative intervention Running a DB instance with high availability can enhance availability during planned system maintenance and help protect your databases against DB instance failure and Availability Zone disruption Amazon RDS also uses the PostgreSQL MySQL and MariaDB DB engines' built in replication functionality to create a special type of DB instance called a Read Replica from a source DB instance Updates made to the source DB instance are asynchronously copied to the Read Replica You can reduce the load on your source DB instance by routing read queries from your Archived Page 7 of 11 applications to the Read Replica Read Replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read heavy database workloads" Automatic Software Patching Amazon RDS will make sure that the relational database software powering your deployment stays up todate with the latest patches When necessary patches are applied during a maintenance window that you can control You can think of the Amazon RDS maintenance window as an op portunity to control when DB Instance modifications (such as scaling DB Instance class) and software patching occur in the event either are requested or required If a “maintenance” event is scheduled for a given week it will be initiated and completed at some point during the 30 minute maintenance window you identify The only maintenance events that require Amazon RDS to take your DB Instance offline are scale compute operations (which generally take only a few minutes from start tofinish) or required software patching Required patching is automatically scheduled only for patches that are security and durability related Such patching occurs infrequently (typically once every few months) and should seldom require more than a fraction of your maintenance window If you do not specify a preferred weekly maintenance window when creating your DB Instance a 30 minute 
default value is assigned If you wish to modify when maintenance is performed on your behalf you can do so by modifying your DB Instance in the AWS Manag ement Conso le or by using the ModifyDBInstance API Each of your DB Instances can have different preferred maintenance windows if you so choose Running your DB Instance as a Multi AZ deployment can further reduce the impact of a maintenance event as Amazon RDS will conduct maintenance via the following steps: 1) Perform maintenance on standby 2) Promote standby to primary and 3) Perform maintenance on old primary which becomes the new standby When an Amazon RDS DB Instance deletion API (DeleteDBInstance) is run the DB Instance is marked for deletion Once the instance no longer indicates ‘deleting’ status it has been removed At this point the instance is no longer accessible an d unless a final snapshot copy was asked for it cannot be restored and will not be listed by any of the tools or APIs Event Notification You can receive notifications of a variety of important events that can occur on your RDS instance such as whether the instance was shut down a backup was started a failover occurred the security group was changed or your storage space is low The Ama zon RDS service groups events into categories that you can subscribe to so that you can be notified when an event in that category occurs You can subscribe to an event category for a DB instance DB snapshot DB security group or for a DB parameter group RDS events are published via AWS SNS and sent to you as an email or text message For more information about RDS notification event categories refer to the RDS User Guid e Amazon Redshift Security Amazon Redshift is a petabytescale SQL data warehouse service that runs on highly optimized and managed AWS compute and storage resources The service has been architected to not only scale up or down rapidly but to significantly improve query speeds Archived Page 8 of 11 even on extremely large datasets To increase performance Redshift uses techniques such as columnar storage data compression and zone maps to reduce the amount of IO needed to perform queries It also has a massively parallel processing (MPP) architecture parallelizing and distributing SQL operations to take advantage of all available resources When you create a Redshift data warehouse you provision a single node or multi node cluster specifying the type and number of nodes that will make up the cluster The node type determines the storage size memory and CPU of each node Each multi node cluster includes a leader node and two or more compute nodes A leader node manages connections parses queries builds execution plans and manages query execution in the compute nodes The compute nodes store data perform computations and run queries as directed by the leader node The leader node of each cluster is accessible through ODBC and JDBC endpoints using standard PostgreSQL drivers The compute nodes run on a separate isolated network and are never accessed directly After you provision a cluster you can upload your dataset and perform data analysis queries by using common SQL based tools and business intelligence applications Cluster Access By de fault clusters that you create are closed to everyone Amazon Redshift enables you to configure firewall rules (security groups) to control network access to your data warehouse cluster You can also run Redshift inside an Amazon VPC to isolate your data warehouse cluster in your own virtual network and connect it to your existing IT infrastructure 
using industry standard encrypted IPsec VPN The AWS account that creates the cluster has full access to the cluster Within your AWS account you can use AWS IAM to create user accounts and manage permissions for those accounts By using IAM you can grant different users permission to perform only the cluster operations that are necessary for their work Like all databases you must grant permission in Redshift at the database level in addition to granting access at the resource level Database users are named user accounts that can connect to a database and are authenticated when they login to Amazon Redshift In Redshift you grant database user permissions on a per cluster basis instead of on a per table basis However a user can see data only in the table rows that were generated by his own activities; rows generated by other users are not visible to him The user who creates a database object is its owner By default only a superuser or the owner of an object can query modify or grant permissions on the object For users to use an object you must grant the necessary permissions to the user or the group that contains the user And only the owner of an object can modify or delete it Data Backups Amazon Redshift distributes your data across all compute nodes in a cluster When you run a cluster with at least two compute nodes data on each node will always be mirrored on disks Archived Page 9 of 11 on another node reducing the risk of data loss In addition all data written to a node in your cluster is continuously backed up to Amazon S3 using snapshots Redshift stores your snapshots for a user defined period which can be from one to thirty five days You can also take your own snapshots at any time; these snapshots leverage all existing system snapshots and are retained until you explicitly delete them Amazon Redshift continuously monitors the health of the cluster and automatically re replicates data from failed drives and replaces nodes as necessary All of this happens without any effort on your part although you may see a slight performance degradation during the rereplication process You can use any system or user snapshot to restore your cluster using the AWS M anagement Console or the Amazon Redshift APIs Your cluster is available as soon as the system metadata has been restored and you can start running queries while user data is spooled down in the background Data Encryption When creating a cluster you can choose to encrypt it in order to provide additional protection for your data at rest When you enable encryption in your cluster Amazon Redshift stores all data in user created tables in an encrypted format using hardware accelerated AES 256 block encryption keys This includes all data written to disk as well as any backups Amazon Redshift uses a four tier key based architecture for encryption These keys consist of data encryption keys a database key a cluster key and a master key: • Data encryptio n keys encrypt data blocks in the cluster Each data block is assigned a randomly generated AES 256 key These keys are encrypted by using the database key for the cluster • The database key encrypts data encryption keys in the cluster The database key is a randomly generated AES 256 key It is stored on disk in a separate network from the Amazon Redshift cluster and encrypted by a master key Amazon Redshift passes the database key across a secure channel and keeps it in memory in the cluster • The cluste r key encrypts the database key for the Amazon Redshift cluster You can use either AWS or a 
hardware security module (HSM) to store the cluster key HSMs provide direct control of key generation and management and make key management separate and distinct from the application and the database • The master key encrypts the cluster key if it is stored in AWS The master key encrypts the cluster keyencrypted database key if the cluster key is stored in an HSM You can have Redshift rotate the encryption keys for your encrypted clusters at any time As part of the rotation process keys are also updated for all of the cluster's automatic and manual snapshots Note that enabling encryption in your cluster will impact performance even though it is hardware ac celerated Encryption also applies to backups When restoring from an encrypted snapshot the new cluster will be encrypted as well To encrypt your table load data files when you upload them to Amazon S3 you can use Amazon Archived Page 10 of 11 S3 server side encryption Whe n you load the data from Amazon S3 the COPY command will decrypt the data as it loads the table Database Audit Logging Amazon Redshift logs all SQL operations including connection attempts queries and changes to your database You can access these logs using SQL queries against system tables or choose to have them downloaded to a secure Amazon S3 bucket You can then use these audit logs to monitor your cluster for security and troubleshooting purposes Automatic Software Patching Amazon Redshift manages all the work of setting up operating and scaling your data warehouse including provisioning capacity monitoring the cluster and applying patches and upgrades to the Amazon Redshift engine Patches are applied only during specified maintenance windows SSL Connections To protect your data in transit within the AWS cloud Amazon Redshift uses hardware accelerated SSL to communicate with Amazon S3 or Amazon DynamoDB for COPY UNLOAD backup and restore operations You can encrypt the connection between your client and the cluster by specifying SSL in the parameter group associated with the cluster To have your clients also authenticate the Redshift server you can install the public key (pem file) for the SSL certificate on your client and use the key to connect to your clusters Amazon Redshift offers the newer stronger cipher suites that use the Elliptic Curve Diffie Hellman Ephemeral protocol ECDHE allows SSL clients to provide Perfect Forward Secrecy between the client and the Redshift cluster Perfect Forward Secrecy uses session keys that are ephemeral and not stored anywhere which prevents the decoding of captured data by unauthorized third parties even if the secret long term key itself is compromised You do not need to configure anything in Amazon Redshift to enable ECDHE; if you connect from a SQL client tool that uses ECDHE to encrypt communication between the client and server Amazon Redshift will use the provided cipher list to make the appropriate connection Amazon ElastiCache Security Amazon ElastiCache is a web service that makes it easy to set up manage and scale distributed inmemory cache environments in the cloud The service improves th e performance of web applications by allowing you to retrieve information from a fast managed in memory caching system instead of relying entirely on slower disk based databases It can be used to significantly improve latency and throughput for many re adheavy application workloads (such as social networking gaming media sharing and Q&A portals) or compute intensive workloads (such as a recommendation engine) Caching improves 
application performance by storing critical pieces of data in memory for l owlatency access Cached information may include the results of I/O intensive database queries or the results of computationally intensive calculations The Amazon ElastiCache service automates time consuming management tasks for inmemory cache environm ents such as patch management failure detection and recovery It works in conjunction with other Amazon Web Services (such as Amazon EC2 Amazon CloudWatch and Amazon SNS) to provide a secure high performance and managed in memory cache For example an application running in Amazon EC2 can securely access an Amazon ElastiCache Archived Page 11 of 11 Cluster in the same region with very low latency Using the Amazon ElastiCache service you create a Cache Cluster which is a collection of one or more Cache Nodes A Cache N ode is a fixed size chunk of secure network attached RAM Each Cache Node runs an instance of the Memcached or Redis protocol compliant service and has its own DNS name and port Multiple types of Cache Nodes are supported each with varying amounts of a ssociated memory A Cache Cluster can be set up with a specific number of Cache Nodes and a Cache Parameter Group that controls the properties for each Cache Node All Cache Nodes within a Cache Cluster are designed to be of the same Node Type and have the same parameter and security group settings Amazon ElastiCach e allows you to contro l access to your Cache Clusters usin g Cache Security Groups A Cache Security Group acts like a firewall controlling network access to your Cache Cluster By default network access is turned off to your Cache Clusters If you want your applications to access your Cache Cluster you must explicitly enable access from hosts in specific EC2 security groups Once ingress rules are configured the same rules apply to all Cache Clusters associated with that Cache Security Group To allow network access to your Cache Cluster create a Cache Security Group and link the desired EC2 security groups (which in turn specify the EC2 instances allowed) to it The Cache Security Group can be associated with your Cache Cluster at the time of creation or using the "Modify" option on the AWS Management Console IP range based access control is currently not enabled for Cache Clusters All clients to a Cache Cluster must be within the EC2 network and authorized via Cache Security Groups ElastiCache for Redis provides backup and restore functionality where you can create a snapshot of your entire Redis cluster as it exists at a specific point in time You can schedule automatic recurring daily snapshots or you can create a manual snapshot at any time For automatic snapshots you specify a retention period; manual snapshots are retained until you delete them The snapshots are stored in Amazon S3 with high durability and can be used fo r warm starts backups and archiving Further Reading https://awsamazoncom/security/security resources/ Introduction to AWS Security Processes Overview of AWS Security Storage Services Overview of AWS Security Database Services Overview of AWS Security Compute Services Overview of AWS Security Application Services Overview of AWS Security Analytics Mobile and Application Services Overview of AWS Security – Network Services
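The fine-grained access controls described for Amazon DynamoDB earlier in this paper can be expressed as an IAM policy that restricts both the items (rows) and the attributes (columns) a caller may read. The sketch below attaches such a policy to an IAM user with the AWS SDK for Python (boto3); the table name, account ID, Region, user name, and attribute names are assumptions made for illustration only.

# Sketch of DynamoDB fine-grained access control, assuming boto3. The table
# name, account ID, Region, IAM user name, and attribute names are hypothetical.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/GameScores",
            "Condition": {
                "ForAllValues:StringEquals": {
                    # Only items whose partition key equals the caller's user name...
                    "dynamodb:LeadingKeys": ["${aws:username}"],
                    # ...and only these two attributes (columns).
                    "dynamodb:Attributes": ["UserId", "TopScore"],
                },
                "StringEquals": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
            },
        }
    ],
}

iam.put_user_policy(
    UserName="example-app-user",
    PolicyName="GameScoresRowAndColumnAccess",
    PolicyDocument=json.dumps(policy),
)

When the same pattern is used with web identity federation as described above, the condition would typically reference the identity provider's user ID on a role assumed through AWS STS rather than an IAM user name.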
|
General
|
consultant
|
Best Practices
|
Overview_of_AWS_Security__Network_Services
|
Archived Overview of AWS Security Network Security August 2016 (Please c onsul t http://aws amazon com/se curity / for the latest versi on of this paper) THIS PAPER HAS BEEN ARCHIVED For the latest technical content see https://docsawsamazoncom/security/Archived Page 1 of 7 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’ current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’ products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Page 2 of 7 Network Security The AWS network has been architected to permit you to select the level of security and resiliency appropriate for your workload To enable you to build geographically dispersed fault tolerant web architectures with cloud resources AWS has implemented a world class network infrastructure that is carefully monitored and managed Secure Network Architecture Network devices including firewall and other boundary devices are in place to monitor and control communications at the external boundary of the network and at key internal boundaries within the network These boundary devices employ rule sets access control lists (ACL) and configurations to enforce the flow of information to specific information system ser vices ACLs or traffic flow policies are established on each managed interface which manage and enforce the flow of traffic ACL policies are approved by Amazon Information Security These policies are automatically pushed using AWS’ ACL Manage tool to help ensure these managed interfaces enforce the most up todate ACLs Secure Access Points AWS has strategically placed a limited number of access points to the cloud to allow for a more comprehensive monitoring of inbound and outbound communications and network traffic These customer access points are called API endpoints and they allow secure HTTP access (HTTPS) which allows you to establish a secure communication session with your storage or compute instances within AWS To support customers with FIPS cryptographic requirements the SSL terminating load balancers in AWS GovCloud (US) are FIPS 140 2compliant In addition AWS has implemented network devices that are dedicated to managing interfacing communications with Internet service providers (ISPs) AWS employs a redundant connection to more than one communication service at each Internet facing edge of the AWS network These connections each have dedicated network devices Transmission Protection You can connect to an AWS access point via HTTP or HTTPS using Secure Sockets Layer (SSL) a cryptographic protocol that is designed to protect against eavesdropping tampering and message forgery For customers who require additional layers of network security AWS offers the Amazon Virtual Private Cloud (VPC) which provides a private subnet within the AWS cloud and the ability to use an IPsec Virtual Private Network (VPN) device to provide an encrypted 
tunnel between the Amazon VPC and your data center For more Archived Page 3 of 7 information about VPC configuration o ptions refer to the Amazon Virtual Private Cloud (Amazon VPC) Security section below Amazon Corporate Segregation Logically the AWS Production network is segregated from the Amazon Corporate network by means of a complex set of network security / segregation devices AWS developers and administrators on the corporate network who need to access AWS cloud components in order to maintain them must explicitly request access through the AWS ticketing system All requests are reviewed and approved by the appli cable service owner Approved AWS personnel then connect to the AWS network through a bastion host that restricts access to network devices and other cloud components logging all activity for security review Access to bastion hosts require SSH public key authentication for all user accounts on the host For more information on AWS developer and administrator logical access see AWS Access below Fault Tolerant Design AWS’ infrastructure has a high level of availability and provides you with the capabilit y to deploy a resilient IT architecture AWS has designed its systems to tolerate system or hardware failures with minimal customer impact Data centers are built in clusters in various global regions All data centers are online and serving customers; no data center is “cold” In case of failure automated processes move customer data traffic away from the affected area Core applications are deployed in an N+1 configuration so that in the event of a data center failure there is sufficient capacity to enable traffic to be load balanced to the remaining sites AWS provides you with the flexibility to place instances and store data within multiple geographic regions as well as across multiple availability zones within each region Each availability zone is designed as an independent failure zone This means that availability zones are physically separated within a typical metropolitan region and are located in lower risk flood plains (specific flood zone categorization varies by region) In addition to utili zing discrete uninterruptable power supply (UPS) and onsite backup generators they are each fed via different grids from independent utilities to further reduce single points of failure Availability zones are all redundantly connected to multiple tier 1 transit providers You should architect your AWS usage to take advantage of multiple regions and availability zones Distributing applications across multiple availability zones provides the ability to remain resilient in the face of most failure scenarios including natural disasters or system failures However you should be aware of location dependent privacy and compliance requirements such as the EU Data Privacy Directive Data is not replicated between regions unless proactively done so by the customer thus allowing customers with these types of data placement and privacy requirements the ability to establish compliant environments It should be noted that all Archived Page 4 of 7 communications between regions is across public Internet infrastructure; therefore appropr iate encryption methods should be used to protect sensitive data As of this writing there are thirteen regions: US East (Northern Virginia) US West (Oregon) US West (Northern California) AWS GovCloud (US) EU (Ireland) EU (Frankfurt) Asia Pacific (Singapore) Asia Pacific (Tokyo) Asia Pacific (Sydney) Asia Pacific (Seoul) Asia Pacific (Mumbai) South America ( São Paulo) and 
China (Beijing) AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move workloads into the cloud by helping them meet certain regulatory and compliance requirements The AWS GovCloud (US) framework allows US government agencies and their contractors to comply with US International Traffic in Arms Regulations (ITAR) regulations as well as the Federal Risk and Authorization Management Program (FedRAMP) requirements AWS GovCloud (US) has received an Agency Authorization to Op erate (ATO) from the US Department of Health and Human Services (HHS) utilizing a FedRAMP accredited Third Party Assessment Organization (3PAO) for several AWS services The AWS GovCloud (US) Region provides the same fault tolerant design as other regions with two Availability Zones In addition the AWS GovCloud (US) region is a mandatory AWS Virtual Private Cloud (VPC) service by default to create an isolated portion of the AWS cloud and launch Amazon EC2 instances that have private (RFC 1918) addresses More information about GovCloud is available on the AWS website: http://awsamazoncom/govcloud us/ Figure 2: Regions and Availability Zon es Note that the number of Availabili ty Zones may chang e Archived Page 5 of 7 Network Monitoring and Protection AWS utilizes a wide variety of automated monitoring systems to provide a high level of service performance and availability AWS monitoring tools are designed to detect unusual or unauthorized activities and conditions at ingress and egress communication points These tools monitor server and network usage port scanning activities application usage and unauthorized intrusion attempts The tools have the ability to set custom performance metrics thresholds for unusual activity Systems within AWS are extensively instrumented to monitor key operational metrics Alarms are configured to automatically notify operations and management personnel when early warning thresholds are crossed on key operational metrics An on call schedule is used so personnel are always available to respond to operational issues This includes a pager system so alarms are quickly and reliably communicated to operations personnel Documentation is maintained to aid and inform operations personnel in h andling incidents or issues If the resolution of an issue requires collaboration a conferencing system is used which supports communication and logging capabilities Trained call leaders facilitate communication and progress during the handling of operat ional issues that require collaboration Post mortems are convened after any significant operational issue regardless of external impact and Cause of Error (COE) documents are drafted so the root cause is captured and preventative actions are taken in th e future Implementation of the preventative measures is tracked during weekly operations meetings AWS security monitoring tools help identify several types of denial of service (DoS) attacks including distributed flooding and software/logic attacks When DoS attacks are identified the AWS incident response process is initiated In addition to the DoS prevention tools redundant telecommunication providers at each region as well as additional capacity protect against the possibility of DoS attacks The AWS network provides significant protection against traditional network security issues and you can implement further protection The following are a few examples: • Distributed Denial Of Service (DDoS) Attacks AWS API endpoints are hosted on large Internet scale world class 
infrastructure that benefits from the same engineering expertise that has built Amazon into the world’s largest online retailer Proprietary DDoS mitigation techniques are used Additionally AWS’ networks are multi homed across a number of providers to achieve Internet access diversity • Man in the Middle (MITM) Attacks All of the AWS APIs are available via SSL protected endpoints which provide server authentication Amazon EC2 AMIs automatically generate new SSH host certificates on first boot and log them to the instance’s console You can then use the secure APIs to call the console and access the host certificates before logging into the instance for the first time We Archived Page 6 of 7 encourage you to use SSL for all of your interactions with AWS • IP Spoofing Amazon EC2 instances cannot send spoofed network traffic The AWS controlled host based firewall infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own • Port Scanning Unauthorize d port scans by Amazon EC2 customers are a violation of the AWS Acceptable Use Policy Violations of the AWS Acceptable Use Policy are taken seriously and every reported violation is investigated Customers can report suspected abuse via the contacts available on our website at: http://awsamazoncom/contact us/report abuse/ When unauthorized port scanning is detected by AWS it is stopped and blocked Port scans of Amazon EC2 instances are generally ineffective because by default all inbound ports on Amazon EC2 instances are closed and are only opened by you Your strict management of security groups can further mitigate the threat of port scans If you configure the security group to allow traffic from any source to a specific port then that specific port will be vulnerable to a port scan In these cases you must use appropriate security measures to protect listening services that may be essential to their application from being discovered by an unauthorized port scan For example a web server must clearly have port 80 (HTTP) open to the world and the administrator of this server is responsible for the security of the HTTP server software such as Apache You may request permission to conduct vulnerability scans as required to meet your specific comp liance requirements These scans must be limited to your own instances and must not violate the AWS Acceptable Use Policy • Packet sniffing by other tenants It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traff ic that is intended for a different virtual instance While you can place your interfaces into promiscuous mode the hypervisor will not deliver any traffic to them that is not addressed to them Even two virtual instances that are owned by the same custom er located on the same physical host cannot listen to each other’s traffic Attacks such as ARP cache poisoning do not work within Amazon EC2 and Amazon VPC While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously a ttempting to view another’s data as a standard practice you should encrypt sensitive traffic In addition to monitoring regular vulnerability scans are performed on the host operating system web application and databases in the AWS environment using a variety of tools Also AWS Security teams subscribe to newsfeeds for applicable vendor flaws and proactively monitor vendors’ websites and other relevant outlets for new patches AWS customers also have the ability to report issues to AWS via the AWS Vul nerability 
Reporting website at: http://aws.amazon.com/security/vulnerability-reporting/

Further Reading
https://aws.amazon.com/security/security-resources/
Introduction to AWS Security Processes
Overview of AWS Security – Storage Services
Overview of AWS Security – Database Services
Overview of AWS Security – Compute Services
Overview of AWS Security – Application Services
Overview of AWS Security – Analytics, Mobile and Application Services
Overview of AWS Security – Network Services
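To make the man-in-the-middle guidance in this paper concrete, the following sketch retrieves an instance's system console output so that the SSH host key fingerprints logged at first boot can be compared with the fingerprints presented when you first connect. It assumes the AWS Tools for PowerShell EC2 module; the instance ID and Region are placeholders, and the assumption that the Output value is Base64-encoded follows the underlying GetConsoleOutput API.

```powershell
# Minimal sketch: fetch the EC2 system console output and show the host key fingerprints
# printed at first boot so they can be compared with what the SSH client reports.
Import-Module AWS.Tools.EC2

$instanceId = 'i-0123456789abcdef0'   # placeholder instance ID
$response   = Get-EC2ConsoleOutput -InstanceId $instanceId -Region us-east-1

# The underlying EC2 API returns the console text Base64-encoded; decode before searching.
$consoleText = [System.Text.Encoding]::UTF8.GetString(
    [System.Convert]::FromBase64String($response.Output))

# Show only the lines that look like host key fingerprints.
$consoleText -split "`n" | Where-Object { $_ -match 'fingerprint|SHA256|ssh-rsa|ssh-ed25519' }
```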
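The port scanning discussion notes that inbound ports are closed by default and become exposed only when a security group opens them. A quick audit for rules that allow inbound traffic from any IPv4 source is sketched below; the property names mirror the EC2 API (IpPermissions, Ipv4Ranges, CidrIp) and the Region is a placeholder.

```powershell
# Sketch: list security group rules that allow inbound traffic from anywhere (0.0.0.0/0).
Import-Module AWS.Tools.EC2

Get-EC2SecurityGroup -Region us-east-1 | ForEach-Object {
    $group = $_
    foreach ($perm in $group.IpPermissions) {
        # Ipv4Ranges is the API's list of CIDR ranges attached to this rule.
        if ($perm.Ipv4Ranges.CidrIp -contains '0.0.0.0/0') {
            [pscustomobject]@{
                GroupId  = $group.GroupId
                Protocol = $perm.IpProtocol
                FromPort = $perm.FromPort
                ToPort   = $perm.ToPort
            }
        }
    }
}
```

Rules surfaced by a check like this are exactly the listening services the paper says you must protect deliberately, such as a public web server on port 80.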
|
General
|
consultant
|
Best Practices
|
Overview_of_AWS_Security__Storage_Services
|
ArchivedOverview of AWS Security Storage Services June 2016 (Please c onsul t http://aws amazon com/se curity / for the latest versi on of this paper) THIS PAPER HAS BEEN ARCHIVED For the latest technical content see https://docsawsamazoncom/security/Archived Page 2 of 9 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’ current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS ’ products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Page 3 of 9 Storage Services Amazon Web Services provides low cost data storage with high durability and availability AWS offers storage choices for backup archiving and disaster recovery as well as block and object storage Amazon Simple Storage Service (Amazon S3) Security Amazon Simple Storage Service (S3) allows you to upload and retrieve data at any time from anywhere on the web Amazon S3 stores data as objects within buckets An object can be any kind of file: a text file a photo a video etc When you add a file to Amazon S3 you have the option of including metadata with the file and setting permissions to control access to the file For each bucket you can control access to the bucket (who can create delete and list objects in the bucket) view access logs for the bucket and its objects and choose the geographical region where Amazon S3 will store the bucket and its contents Data Access Access to data stored in Amazon S3 is restricted by default; only bucket and object owners have access to the Amazon S3 resources they create (note that a bucket/object owner is the AWS Account owner not the user who created the bucket/object) There are multiple ways to control access to buckets and objects: • Identity and Access Management (IAM) Policies AWS IAM enables organizations with many employees to create and manage multiple users under a single AWS Account IAM policies are attached to the users enabling centralized control of permissions for users under your AWS Account to access buckets or objects With IAM policies you can onl y grant users within your own AWS account permission to access your Amazon S3 resources • Access Control Lists (ACLs) Within Amazon S3 you can use ACLs to give read or write access on buckets or objects to groups of users With ACLs you can only grant other AWS accounts (not specific users) access to your Amazon S3 resources • Bucket Policies Bucket policies in Amazon S3 can be used to add or deny permissions across some or all of the objects within a single bucket Policies can be attached to users grou ps or Amazon S3 buckets enabling centralized management of permissions With bucket policies you can grant users within your AWS Account or other AWS Accounts access to your Amazon S3 resources You can further restrict access to specific resources based on certain conditions For Type of Access Control AWS Account Level Control? User Level Control? 
IAM Policies No Yes ACLs Yes No Bucket Policies Yes Yes Archived Page 4 of 9 example you can restrict access based on request time (Date Condition) whether the request was sent using SSL (Boolean Conditions) a requester’s IP address (IP Address Condition) or based on the requester's client application (String Conditions) To identify these conditions you use policy keys For more information about action specific policy keys available within Amazon S3 refer to the Amazon Simple Storage Service Developer Guide Amazon S3 also gives developers the option to use query string authentication which allows them to share Amazon S3 objects through URLs that are valid for a predefined period of time Query string authentication is useful for giving HTTP or browser access to resources that would normally require authentication The signature in the query string secures the request Data Transfer For maximum security you can securely upload/download data to Amazon S3 via the SSL encrypted endpoints The encrypted endpoints are accessible from both the Internet and from within Amazon EC2 so that data is transferred securely both within AWS and to and from sources outside of AWS Data Storage Amazon S3 provides multiple options for protecting data at rest For customers who prefer to manage their own encryption they can use a client encryption library like the Amazon S3 Encryption Client to encrypt data before uploading to Amazon S3 Alternatively you can use Amazon S3 Server Side Encryption (SSE) if you prefer to have Amazon S3 manage the encryption process for you Data is encrypted with a key generated by AWS or with a key you supply depending on your requirements With Amazon S3 SSE you can encrypt data on upload simply by adding an additional request header when writing the object Decryption happens automatically when data is retrieved Note that metadata which you can include with your object is not encrypted Therefore AWS recommends that customers not place sensitive information in Amazon S3 metadata Amazon S3 SSE uses one of the strongest block ciphers available – 256bit Advanced Encr yption Standard (AES 256) With Amazon S3 SSE every protected object is encrypted with a unique encryption key This object key itself is then encrypted with a regularly rotated master key Amazon S3 SSE provides additional security by storing the encrypted data and encryption keys in different hosts Amazon S3 SSE also makes it possible for you to enforce encryption requirements For example you can create and apply bucket policies that require that only encrypted data can be uploaded to your buckets For long term storage you can automatically archive the contents of your Amazon S3 buckets to AWS’ archival service called Amazon Glacier You can have data transferred at specific intervals to Glacier by creating lifecycle rules in Amazon S3 that describe which objects you want to be archived to Glacier and when As part of your data management strategy you can also specify how long Amazon S3 should wait after the objects are put into Amazon S3 to delete them When an object is deleted from Amazon S3 removal of the mapping from the public name Archived Page 5 of 9 to the object starts immediately and is generally processed across the distributed system within several seconds Once the mapping is removed there is no remote access to the deleted object The underlying storage area is then reclaimed for use by the system Data Durability and Reliability Amazon S3 is designed to provide 99999999999% durability and 9999% availability of objects 
over a given year Objects are redundantly stored on multiple devices across multi ple facilities in an Amazon S3 region To help provide durability Amazon S3 PUT and COPY operations synchronously store customer data across multiple facilities before returning SUCCESS Once stored Amazon S3 helps maintain the durability of the objects by quickly detecting and repairing any lost redundancy Amazon S3 also regularly verifies the integrity of data stored using checksums If corruption is detected it is repaired using redundant data In addition Amazon S3 calculates checksums on all netwo rk traffic to detect corruption of data packets when storing or retrieving data Amazon S3 provides further protection via Versioning You can use Versioning to preserve retrieve and restore every version of every object stored in an Amazon S3 bucket W ith Versioning you can easily recover from both unintended user actions and application failures By default requests will retrieve the most recently written version Older versions of an object can be retrieved by specifying a version in the request You can further protect versions using Amazon S3 Versioning's MFA Delete feature Once enabled for an Amazon S3 bucket each version deletion request must include the six digit code and serial number from your multi factor authentication device Access Logs An Amazon S3 bucket can be configured to log access to the bucket and objects within it The access log contains details about each access request including request type the requested resource the requestor’s IP and the time and date of the request Wh en logging is enabled for a bucket log records are periodically aggregated into log files and delivered to the specified Amazon S3 bucket Cross Origin Resource Sharing (CORS) AWS customers who use Amazon S3 to host static web pages or store objects used by other web pages can load content securely by configuring an Amazon S3 bucket to explicitly enable cross origin requests Modern browsers use the Same Origin policy to block JavaScript or HTML5 from allowing requests to load content from another site or domain as a way to help ensure that malicious content is not loaded from a less reputable source (such as during cross site scripting attacks) With the Cross Origin Resource Sharing (CORS) policy enabled assets such as web fonts and images stored in an Amazon S3 bucket can be safely referenced by external web pages style sheets and HTML5 applications Amazon Glacier Security Like Amazon S3 the Amazon Glacier service provides low cost secure and durable storage But where Amazon S3 is designed for rapid retrieval Amazon Glacier is meant to be used as an archival service for data that is not accessed often and for which retrieval times of several hours are suitable Archived Page 6 of 9 Amazon Glacier stores files as archives within vaults Archives can be any data such as a photo video or document and can contain one or several files You can store an unlimited number of archives in a single vault and can create up to 1000 vaults per region Each archive can contain up to 40 TB of data Data Upload To transfer data into Amazon Glacier vaults you can upload an archive in a single upload operation or a multipart operation In a single upload operation you can upload archives up to 4 GB in size However customers can achieve better results using the Multipart Upload API to upload archives greater than 100 MB Using the Multipart Upload API allows you to upload large archives up to about 40 TB The Multipart Upload API call is designed to 
improve the upload experience for larger archives; it enables the parts to be uploaded independently in any order and in parallel If a multipart upload fails you only need to upload the failed part again and not the entire archive When you upload data to Amazon Glacier you must compute and supply a tree hash Amazon Glacier checks the hash against the data to help ensure that it has not been altered en route A tree hash is generated by computing a hash for each megabyte sized segment of the data and then combining the hashes in tree fashion to represent everg rowing adjacent segments of the data As an alternate to using the Multipart Upload feature customers with very large uploads to Amazon Glacier may consider using the AWS Import/Export service instead to transfer the data AWS Import/Export facilitates m oving large amounts of data into AWS using portable storage devices for transport AWS transfers your data directly off of storage devices using Amazon’s high speed internal network bypassing the Internet You can also set up Amazon S3 to transfer data at specific intervals to Amazon Glacier You can create lifecycle rules in Amazon S3 that describe which objects you want to be archived to Amazon Glacier and when You can also specify how long Amazon S3 should wait after the objects are put into Amazon S3 to delete them To achieve even greater security you can securely upload/download data to Amazon Glacier via the SSL encrypted endpoints The encrypted endpoints are accessible from both the Internet and from within Amazon EC2 so that data is transferred securely both within AWS and to and from sources outside of AWS Data Retrieval Retrieving archives from Amazon Glacier requires the initiation of a retrieval job which is generally completed in 3 to 5 hours You can then access the data via HTTP GET requests The data will remain available to you for 24 hours You can retrieve an entire archive or several files from an archive If you want to retrieve only a subset of an archive you can use one retrieval request to specify the range of the archive t hat contains the files you are interested or you can initiate multiple retrieval requests each with a range for one or more files You can also limit the number of vault inventory items retrieved by filtering on an archive creation date range or by settin g a maximum items limit Whichever method you choose when you retrieve portions of your Archived Page 7 of 9 archive you can use the supplied checksum to help ensure the integrity of the files provided that the range that is retrieved is aligned with the tree hash of the ove rall archive Data Storage Amazon Glacier automatically encrypts the data using AES 256 and stores it durably in an immutable form Amazon Glacier is designed to provide average annual durability of 99999999999% for an archive It stores each archive in multiple facilities and multiple devices Unlike traditional systems which can require laborious data verification and manual repair Amazon Glacier performs regular systematic data integrity checks and is built to be automatically self healing Data Access Only your account can access your data in Amazon Glacier To control access to your data in Amazon Glacier you can use AWS IAM to specify which users within your account have rights to operations on a given vault AWS Storage Gateway Security The AWS Storage Gateway service connects your on premises software appliance with cloud based storage to provide seamless and secure integration between your IT environment and AWS’ storage 
infrastructure The service enables you to securely upload data to AWS’ scalable reliable and secure Amazon S3 storage service for cost effective backup and rapid disaster recovery AWS Storage Gateway transparently backs up data off site to Amazon S3 in the form of Amazon EBS snapshots Amazon S3 redundantly stores these sn apshots on multiple devices across multiple facilities detecting and repairing any lost redundancy The Amazon EBS snapshot provides a point intime backup that can be restored on premises or used to instantiate new Amazon EBS volumes Data is stored within a single region that you specify AWS Storage Gateway offers three options: • Gateway Stored Volumes (where the cloud is backup) In this option your volume data is stored locally and then pushed to Amazon S3 where it is stored in redundant encrypted form and made available in the form of Elastic Block Storage (EBS) snapshots When you use this model the on premises storage is primary delivering low latency access to your entire dataset and the cloud storage is the backup • Gateway Cached Volumes ( where the cloud is primary) In this option your volume data is stored encrypted in Amazon S3 visible within your enterprise's network via an iSCSI interface Recently accessed data is cached on premises for low latency local access When you use this model the cloud storage is primary but you get low latency access to your active working set in the cached volumes on premises • Gateway Virtual Tape Library (VTL) In this option you can configure a Gateway VTL with up to 10 virtual tape drives per gate way 1 media changer and up to 1500 virtual tape cartridges Each virtual tape drive responds to the SCSI command set so your existing on premises backup applications (either disk totape or disk todisk to tape) will work without modification No matte r which option you choose data is asynchronously transferred from your on premises Archived Page 8 of 9 storage hardware to AWS over SSL The data is stored encrypted in Amazon S3 using Advanced Encryption Standard (AES) 256 a symmetric key encryption standard using 256 bit encryption keys The AWS Storage Gateway only uploads data that has changed minimizing the amount of data sent over the Internet The AWS Storage Gateway runs as a virtual machine (VM) that you deploy on a host in your data center running VMware ESXi Hy pervisor v 41 or v 5 or Microsoft Hyper V (you download the VMware software during the setup process) You can also run within EC2 using a gateway AMI During the installation and configuration process you can create up to 12 stored volumes 20 Cached vo lumes or 1500 virtual tape cartridges per gateway Once installed each gateway will automatically download install and deploy updates and patches This activity takes place during a maintenance window that you can set on a per gateway basis The iSCSI protocol supports authentication between targets and initiators via CHAP (Challenge Handshake Authentication Protocol) CHAP provides protection against man inthemiddle and playback attacks by periodically verifying the identity of an iSCSI initiator as authenticated to access a storage volume target To set up CHAP you must configure it in both the AWS Storage Gateway console and in the iSCSI initiator software you use to connect to the target After you deploy the AWS Storage Gateway VM you must activate the gateway using the AWS Storage Gateway console The activation process associates your gateway with your AWS Account Once you establish this connection you can manage almost all 
aspects of your gateway from the console In the activation process you specify the IP address of your gateway name your gateway identify the AWS region in which you want your snapshot backups stored and specify the gateway time zone AWS Import/Export Security AWS Import/Export is a simple secure method for physically transferring large amounts of data to Amazon S3 EBS or Amazon Glacier storage This service is typically used by customers who have over 100 GB of data and/or slow connection speeds that would r esult in very slow transfer rates over the Internet With AWS Import/Export you prepare a portable storage device that you ship to a secure AWS facility AWS transfers the data directly off of the storage device using Amazon’s high speed internal network thus bypassing the Internet Conversely data can also be exported from AWS to a portable storage device Like all other AWS services the AWS Import/Export service requires that you securely identify and authenticate your storage device In this case you will submit a job request to AWS that includes your Amazon S3 bucket Amazon EBS region AWS Access Key ID and return shipping address You then receive a unique identifier for the job a digital signature for authenticating your device and an AWS add ress to ship the storage device to For Amazon S3 you place the signature file on the root directory of your device For Amazon EBS you tape the signature barcode to the exterior of the device The signature file is used only for authentication and is no t uploaded to Amazon S3 or EBS For transfers to Amazon S3 you specify the specific buckets to which the data should be loaded and ensure that the account doing the loading has write permission for the buckets You should also specify the access control list to be applied to each object loaded to Amazon S3 For transfers to EBS you specify the target region for the EBS import operation If the storage device is less than or equal to the maximum volume size of 1 TB its contents are loaded directly into an Amazon EBS snapshot If the storage device’s capacity exceeds 1 TB a device image is Archived Page 9 of 9 stored within the specified S3 log bucket You can then create a RAID of Amazon EBS volumes using software such as Logical Volume Manager and copy the image from S3 to this new volume For added protection you can encrypt the data on your device before you ship it to AWS For Amazon S3 data you can use a PIN code device with hardware encryption or TrueCrypt software to encrypt your data before sending it to AWS For EBS and Amazon Glacier data you can use any encryption method you choose including a PIN code device AWS will decrypt your Amazon S3 data before importing using the PIN code and/or TrueCrypt password you supply in your import manifest AWS uses your PIN to access a PIN code device but does not decrypt software encrypted data for import to Amazon EBS or Amazon Glacier AWS Import/Export Snowball uses appliances designed for security and the Snowball client to accelerate petabyte scale data transfers into and out of AWS You start by using the AWS Management Console to create one or more jobs to request one or multiple Snowball appliances (depending on how much data you need to transfer) and download and install the Snowball client Once the appliance arrives connect it to your local network set the IP address either manually or with DHCP and use the client to identify the directories you want to copy The client will automatically encrypt and copy the data to the appliance and notify you when the 
transfer job is complete.

After the import is complete, AWS Import/Export will erase the contents of your storage device to safeguard the data during return shipment. AWS overwrites all writable blocks on the storage device with zeroes. If AWS is unable to erase the data on the device, it will be scheduled for destruction and our support team will contact you using the email address specified in the manifest file you ship with the device.

When shipping a device internationally, the customs option and certain required subfields must be completed in the manifest file sent to AWS. AWS Import/Export uses these values to validate the inbound shipment and prepare the outbound customs paperwork. Two of these options are whether the data on the device is encrypted and the encryption software's classification. When shipping encrypted data to or from the United States, the encryption software must be classified as 5D992 under the United States Export Administration Regulations.

Further Reading
https://aws.amazon.com/security/security-resources/
Introduction to AWS Security Processes
Overview of AWS Security – Storage Services
Overview of AWS Security – Database Services
Overview of AWS Security – Compute Services
Overview of AWS Security – Application Services
Overview of AWS Security – Analytics, Mobile and Application Services
Overview of AWS Security – Network Services
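The Amazon S3 section of this paper notes that bucket policies can require that only encrypted data is uploaded to a bucket. The following sketch, which assumes the AWS Tools for PowerShell S3 module and a hypothetical bucket name, applies a policy that denies PutObject requests that do not request AES-256 server-side encryption, then uploads an object with the required setting.

```powershell
# Sketch: require SSE (AES-256) on uploads to a hypothetical bucket, then upload with SSE.
Import-Module AWS.Tools.S3

$bucket = 'example-secure-bucket'   # hypothetical bucket name

$policy = @"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::$bucket/*",
      "Condition": { "StringNotEquals": { "s3:x-amz-server-side-encryption": "AES256" } }
    }
  ]
}
"@

Write-S3BucketPolicy -BucketName $bucket -Policy $policy

# Uploads that request AES-256 server-side encryption satisfy the policy.
Write-S3Object -BucketName $bucket -Key 'reports/q1.pdf' -File 'C:\data\q1.pdf' -ServerSideEncryption AES256
```

A stricter policy can also deny requests that omit the encryption header entirely by adding a Null condition on the same key.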
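The durability and Import/Export discussions above rely on checksums to detect corruption in transit and at rest. A simple client-side counterpart, sketched below under the same hypothetical bucket and object names, downloads an object and compares its SHA-256 hash against a value recorded when the file was originally uploaded.

```powershell
# Sketch: download an object and verify its SHA-256 checksum against a known value.
Import-Module AWS.Tools.S3

$bucket       = 'example-secure-bucket'                 # hypothetical
$key          = 'reports/q1.pdf'
$localPath    = 'C:\restore\q1.pdf'
$expectedHash = 'REPLACE-WITH-HASH-RECORDED-AT-UPLOAD'  # placeholder

Read-S3Object -BucketName $bucket -Key $key -File $localPath | Out-Null

$actualHash = (Get-FileHash -Path $localPath -Algorithm SHA256).Hash
if ($actualHash -ne $expectedHash) {
    Write-Warning "Checksum mismatch for $key; the downloaded copy may be corrupted."
} else {
    Write-Output "Checksum verified for $key."
}
```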
|
General
|
consultant
|
Best Practices
|
Overview_of_Deployment_Options_on_AWS
|
Overview of Deployment Options on AWS AWS Whitepaper Overview of Deployment Options on AWS AWS Whitepaper Overview of Deployment Options on AWS: AWS Whitepaper Copyright © Amazon Web Services Inc and/or its affiliates All rights reserved Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's in any manner that is likely to cause confusion among customers or in any manner that disparages or discredits Amazon All other trademarks not owned by Amazon are the property of their respective owners who may or may not be affiliated with connected to or sponsored by AmazonOverview of Deployment Options on AWS AWS Whitepaper Table of Contents Abstract1 Abstract1 Introduction2 AWS Deployment Services3 AWS CloudFormation3 AWS Elastic Beanstalk5 AWS CodeDeploy7 Amazon Elastic Container Service9 Amazon Elastic Kubernetes Service10 AWS OpsWorks12 Additional Deployment Services14 Deployment Strategies15 Prebaking vs Bootstrapping AMIs15 Blue/Green Deployments15 Rolling Deployments15 InPlace Deployments16 Combining Deployment Services16 Conclusion 17 Contributors 18 Further Reading19 Document Revisions20 Notices21 iiiOverview of Deployment Options on AWS AWS Whitepaper Abstract Overview of Deployment Options on AWS Publication date: June 3 2020 (Document Revisions (p 20)) Abstract Amazon Web Services (AWS) offers multiple options for provisioning infrastructure and deploying your applications Whether your application architecture is a simple threetier web application or a complex set of workloads AWS offers deployment services to meet the requirements of your application and your organization This whitepaper is intended for those individuals looking for an overview of the different deployment services offered by AWS It lays out common features available in these deployment services and articulates basic strategies for deploying and updating application stacks 1Overview of Deployment Options on AWS AWS Whitepaper Introduction Designing a deployment solution for your application is a critical part of building a wellarchitected application on AWS Based on the nature of your application and the underlying services (compute storage database etc) that it requires you can use AWS services to create a flexible deployment solution that can be tailored to fit the needs of both your application and your organization The constantly growing catalog of AWS services not only complicates the process of deciding which services will compose your application architecture but also the process of deciding how you will create manage and update your application When designing a deployment solution on AWS you should consider how your solution will address the following capabilities: •Provision: create the raw infrastructure (Amazon EC2 Amazon Virtual Private Cloud [Amazon VPC] subnets etc) or managed service infrastructure (Amazon Simple Storage Service (Amazon S3) Amazon Relational Database Service [Amazon RDS] Amazon CloudFront etc) required for your application •Configure : customize your infrastructure based on environment runtime security availability performance network or other application requirements •Deploy: install or update your application component(s) onto infrastructure resources and manage the transition from a previous application version to a new application version •Scale: proactively or reactively adjust the amount of resources available to your application based on a set of userdefined criteria •Monitor : provide visibility into the resources that are 
launched as part of your application architecture Track resources usage deployment success/failure application health application logs configuration drift and more This whitepaper highlights the deployment services offered by AWS and outlines strategies for designing a successful deployment architecture for any type of application 2Overview of Deployment Options on AWS AWS Whitepaper AWS CloudFormation AWS Deployment Services The task of designing a scalable efficient and costeffective deployment solution should not be limited to the issue of how you will update your application version but should also consider how you will manage supporting infrastructure throughout the complete application lifecycle Resource provisioning configuration management application deployment software updates monitoring access control and other concerns are all important factors to consider when designing a deployment solution AWS provides a number of services that provide management capabilities for one or more aspects of your application lifecycle Depending on your desired balance of control (ie manual management of resources) versus convenience (ie AWS management of resources) and the type of application these services can be used on their own or combined to create a featurerich deployment solution This section will provide an overview of the AWS services that can be used to enable organizations to more rapidly and reliably build and deliver applications AWS CloudFormation AWS CloudFormation is a service that enables customers to provision and manage almost any AWS resource using a custom template language expressed in YAML or JSON A CloudFormation template creates infrastructure resources in a group called a “stack” and allows you to define and customize all components needed to operate your application while retaining full control of these resources Using templates introduces the ability to implement version control on your infrastructure and the ability to quickly and reliably replicate your infrastructure CloudFormation offers granular control over the provisioning and management of all application infrastructure components from lowlevel components such as route tables or subnet configurations to highlevel components such as CloudFront distributions CloudFormation is commonly used with other AWS deployment services or thirdparty tools; combining CloudFormation with more specialized deployment services to manage deployments of application code onto infrastructure components AWS offers extensions to the CloudFormation service in addition to its base features: •AWS Cloud Development Kit (AWS CDK) (AWS CDK) is an open source software development kit (SDK) to programmatically model AWS infrastructure with TypeScript Python Java or NET •AWS Serverless Application Model (SAM) is an open source framework to simplify building serverless applications on AWS Table 1: AWS CloudFormation deployment features Capability Description Provision CloudFormation will automatically create and update infrastructure components that are defined in a template Refer to AWS CloudFormation Best Practices for more details on creating infrastructure using CloudFormation templates Configure CloudFormation templates offer extensive flexibility to customize and update all infrastructure components 3Overview of Deployment Options on AWS AWS Whitepaper AWS CloudFormation Capability Description Refer to CloudFormation Template Anatomy for more details on customizing templates Deploy Update your CloudFormation templates to alter the resources 
in a stack Depending on your application architecture you may need to use an additional deployment service to update the application version running on your infrastructure Refer to Deploying Applications on EC2 with AWS CloudFormation for more details on how CloudFormation can be used as a deployment solution Scale CloudFormation will not automatically handle infrastructure scaling on your behalf; however you can configure auto scaling policies for your resources in a CloudFormation template Monitor CloudFormation provides native monitoring of the success or failure of updates to infrastructure defined in a template as well as “drift detection” to monitor when resources defined in a template do not meet specifications Additional monitoring solutions will need to be in place for application level monitoring and metrics Refer to Monitoring the Progress of a Stack Update for more details on how CloudFormation monitors infrastructure updates The following diagram shows a common use case for CloudFormation Here CloudFormation templates are created to define all infrastructure components necessary to create a simple threetier web application In this example we are using bootstrap scripts defined in CloudFormation to deploy the latest version of our application onto EC2 instances; however it is also a common practice to combine additional deployment services with CloudFormation (using CloudFormation only for its infrastructure management and provisioning capabilities) Note that more than one CloudFormation template is used to create the infrastructure 4Overview of Deployment Options on AWS AWS Whitepaper AWS Elastic Beanstalk Figure 1: AWS CloudFormation use case AWS Elastic Beanstalk AWS Elastic Beanstalk is an easytouse service for deploying and scaling web applications and services developed with Java NET PHP Nodejs Python Ruby Go or Docker on familiar servers such as Apache Nginx Passenger and IIS Elastic Beanstalk is a complete application management solution and manages all infrastructure and platform tasks on your behalf With Elastic Beanstalk you can quickly deploy manage and scale applications without the operational burden of managing infrastructure Elastic Beanstalk reduces management complexity for web applications making it a good choice for organizations that are new to AWS or wish to deploy a web application as quickly as possible When using Elastic Beanstalk as your deployment solution simply upload your source code and Elastic Beanstalk will provision and operate all necessary infrastructure including servers databases load balancers networks and auto scaling groups Although these resources are created on your behalf you retain full control of these resources allowing developers to customize as needed Table 2: AWS Elastic Beanstalk Deployment Features Capability Description Provision Elastic Beanstalk will create all infrastructure components necessary to operate a web application or service that runs on one of its supported platforms If you need additional infrastructure this will have to be created outside of Elastic Beanstalk 5Overview of Deployment Options on AWS AWS Whitepaper AWS Elastic Beanstalk Capability Description Refer to Elastic Beanstalk Platforms for more details on the web application platforms supported by Elastic Beanstalk Configure Elastic Beanstalk provides a wide range of options for customizing the resources in your environment Refer to Configuring Elastic Beanstalk environments for more information about customizing the resources that are created by 
Elastic Beanstalk Deploy Elastic Beanstalk automatically handles application deployments and creates an environment that runs a new version of your application without impacting existing users Refer to Deploying Applications to AWS Elastic Beanstalk for more details on application deployments with Elastic Beanstalk Scale Elastic Beanstalk will automatically handle scaling of your infrastructure with managed auto scaling groups for your application instances Refer to Auto Scaling Group for your Elastic Beanstalk Environment for more details about auto scaling with Elastic Beanstalk Monitor Elastic Beanstalk offers builtin environment monitoring for applications including deployment success/failures environment health resource performance and application logs Refer to Monitoring an Environment for more details on fullstack monitoring with Elastic Beanstalk Elastic Beanstalk makes it easy for web applications to be quickly deployed and managed in AWS The following example shows a general use case for Elastic Beanstalk as it is used to deploy a simple web application 6Overview of Deployment Options on AWS AWS Whitepaper AWS CodeDeploy Figure 2: AWS Elastic Beanstalk use case AWS CodeDeploy AWS CodeDeploy is a fully managed deployment service that automates application deployments to compute services such as Amazon EC2 Amazon Elastic Container Service (Amazon ECS) AWS Lambda or onpremises servers Organizations can use CodeDeploy to automate deployments of an application and remove error prone manual operations from the deployment process CodeDeploy can be used with a wide variety of application content including code serverless functions configuration files and more CodeDeploy is intended to be used as a “building block” service that is focused on helping application developers deploy and update software that is running on existing infrastructure It is not an endtoend application management solution and is intended to be used in conjunction with other AWS deployment services such as AWS CodeStar AWS CodePipeline other AWS Developer Tools and thirdparty services (see AWS CodeDeploy Product Integrations for a complete list of product integrations) as part of a complete CI/CD pipeline Additionally CodeDeploy does not manage the creation of resources on behalf of the user Table 3: AWS CodeDeploy deployment features Capability Description Provision CodeDeploy is intended for use with existing compute resources and does not create resources on your behalf CodeDeploy requires compute resources to be organized into a construct called a “deployment group” in order to deploy application content Refer to Working with Deployment Groups in CodeDeploy for more details on linking CodeDeploy to compute resources 7Overview of Deployment Options on AWS AWS Whitepaper AWS CodeDeploy Capability Description Configure CodeDeploy uses an application specification file to define customizations for compute resources Refer to CodeDeploy AppSpec File Reference for more details on the resource customizations with CodeDeploy Deploy Depending on the type of compute resource that CodeDeploy is used with CodeDeploy offers different strategies for deploying your application Refer to Working with Deployments in CodeDeploy for more details on the types of deployment processes that are supported Scale CodeDeploy does not support scaling of your underlying application infrastructure; however depending on your deployment configurations it may create additional resources to support blue/ green deployments Monitor CodeDeploy offers 
monitoring of the success or failure of deployments as well as a history of all deployments but does not provide performance or applicationlevel metrics Refer to Monitoring Deployments in CodeDeploy for more details on the types of monitoring capabilities offered by CodeDeploy The following diagram illustrates a general use case for CodeDeploy as part of a complete CI/CD solution In this example CodeDeploy is used in conjunction with additional AWS Developer Tools namely AWS CodePipeline (automate CI/CD pipelines) AWS CodeBuild (build and test application components) and AWS CodeCommit (source code repository) to deploy an application onto a group of EC2 instances Figure 3: AWS CodeDeploy use case 8Overview of Deployment Options on AWS AWS Whitepaper Amazon Elastic Container Service Amazon Elastic Container Service Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that supports Docker containers and allows you to easily run applications on a managed cluster Amazon ECS eliminates the need to install operate and scale container management infrastructure and simplifies the creation of environments with familiar AWS core features like Security Groups Elastic Load Balancing and AWS Identity and Access Management (IAM) When running applications on Amazon ECS you can choose to provide the underlying compute power for your containers with Amazon EC2 instances or with AWS Fargate a serverless compute engine for containers In either case Amazon ECS automatically places and scales your containers onto your cluster according to configurations defined by the user Although Amazon ECS does not create infrastructure components such as Load Balancers or IAM Roles on your behalf the Amazon ECS service provides a number of APIs to simplify the creation and use of these resources in an Amazon ECS cluster Amazon ECS allows developers to have direct finegrained control over all infrastructure components allowing for the creation of custom application architectures Additionally Amazon ECS supports different deployment strategies to update your application container images Table 4: Amazon ECS deployment features Capability Description Provision Amazon ECS will provision new application container instances and compute resources based on scaling policies and Amazon ECS configurations Infrastructure resources such as Load Balancers will need to be created outside of Amazon ECS Refer to Getting Started with Amazon ECS for more details on the types of resources that can be created with Amazon ECS Configure Amazon ECS supports customization of the compute resources created to run a containerized application as well as the runtime conditions of the application containers (eg environment variables exposed ports reserved memory/CPU) Customization of underlying compute resources is only available if using Amazon EC2 instances Refer to Creating a Cluster for more details on how to customize an Amazon ECS cluster to run containerized applications Deploy Amazon ECS supports several deployment strategies for you containerized applications Refer to Amazon ECS Deployment Types for more details on the types of deployment processes that are supported Scale Amazon ECS can be used with autoscaling policies to automatically adjust the number of containers running in your Amazon ECS cluster 9Overview of Deployment Options on AWS AWS Whitepaper Amazon Elastic Kubernetes Service Capability Description Refer to Service Auto Scaling for more details on configuring auto scaling for your 
containerized applications on Amazon ECS Monitor Amazon ECS supports monitoring compute resources and application containers with CloudWatch Refer to Monitoring Amazon ECS for more details on the types of monitoring capabilities offered by Amazon ECS The following diagram illustrates Amazon ECS being used to manage a simple containerized application In this example infrastructure components are created outside of Amazon ECS and Amazon ECS is used to manage the deployment and operation of application containers on the cluster Figure 4: Amazon ECS use case Amazon Elastic Kubernetes Service Amazon Elastic Kubernetes Service (Amazon EKS) is a fullymanaged certified Kubernetes conformant service that simplifies the process of building securing operating and maintaining Kubernetes clusters on AWS Amazon EKS integrates with core AWS services such as CloudWatch Auto Scaling Groups and IAM to provide a seamless experience for monitoring scaling and load balancing your containerized applications Amazon EKS also integrates with AWS App Mesh and provides a Kubernetesnative experience to consume service mesh features and bring rich observability traffic controls and security features to applications Amazon EKS provides a scalable highlyavailable control plane for Kubernetes workloads When running applications on Amazon EKS as with Amazon ECS you can choose to provide the underlying compute power for your containers with EC2 instances or with AWS Fargate Table 5: Amazon EKS deployment features 10Overview of Deployment Options on AWS AWS Whitepaper Amazon Elastic Kubernetes Service Capability Description Provision Amazon EKS provisions certain resources to support containerized applications: •Load Balancers if needed •Compute Resources (“workers”) Amazon EKS supports Windows and Linux •Application Container Instances (“pods”) Refer to Getting Started with Amazon EKS for more details on Amazon EKS cluster provisioning Configure Amazon EKS supports customization of the compute resources (“workers”) if using EC2 instances to supply compute power EKS also supports customization of the runtime conditions of the application containers (“pods”) Refer to Worker Nodes and Fargate Pod Configuration documentation for more details Deploy Amazon EKS supports the same deployment strategies as Kubernetes see Writing a Kubernetes Deployment Spec > Strategy for more details Scale Amazon EKS scales workers with Kubernetes Cluster Autoscaler and pods with Kubernetes Horizontal Pod Autoscaler and Kubernetes Vertical Pod Autoscaler Monitor The Amazon EKS control plane logs provide audit and diagnostic information directly to CloudWatch Logs The Amazon EKS control plane also integrates with AWS CloudTrail to record actions taken in Amazon EKS Refer to Logging and Monitoring Amazon EKS for more details Amazon EKS allows organizations to leverage open source Kubernetes tools and plugins and can be a good choice for organizations migrating to AWS with existing Kubernetes environments The following diagram illustrates Amazon EKS being used to manage a general containerized application 11Overview of Deployment Options on AWS AWS Whitepaper AWS OpsWorks Figure 5: Amazon EKS use case AWS OpsWorks AWS OpsWorks is a configuration management service that enables customers to construct manage and operate a wide variety of application architectures from simple web applications to highly complex custom applications Organizations deploying applications with OpsWorks use the automation platforms Chef or Puppet to manage key operational 
activities like server provisioning software configurations package installations database setups scaling and code deployments There are three ways to use OpsWorks: •AWS OpsWorks for Chef Automate: fully managed configuration management service that hosts Chef Automate •AWS OpsWorks for Puppet Enterprise: fully managed configuration management service that hosts Puppet Enterprise •AWS OpsWorks Stacks: application and server management service that supports modeling applications using the abstractions of “stacks” and “layers” that depend on Chef recipes for configuration management With OpsWorks for Chef Automate and OpsWorks for Puppet Enterprise AWS creates a fully managed instance of Chef or Puppet running on Amazon EC2 This instance manages configuration deployment and monitoring of nodes in your environment that are registered to the instance When using OpsWorks with Chef Automate or Puppet Enterprise additional services (eg CloudFormation) may need to be used to create and manage infrastructure components that are not supported by OpsWorks OpsWorks Stacks provides a simple and flexible way to create and manage application infrastructure When working with OpsWorks Stacks you model your application as a “stack” containing different “layers” A layer contains infrastructure components necessary to support a particular application function such as load balancers databases or application servers OpsWorks Stacks does not require the creation of a Chef server but uses Chef recipes for each layer to handle tasks such as installing packages on instances deploying applications and managing other resource configurations OpsWorks Stacks will create and provision infrastructure on your behalf but does not support all AWS services 12Overview of Deployment Options on AWS AWS Whitepaper AWS OpsWorks Provided that a node is network reachable from an OpsWorks Puppet or Chef instance any node can be registered with the OpsWorks making this solution a good choice for organizations already using Chef or Puppet and working in a hybrid environment With OpsWorks Stacks an onpremises node must be able to communicate with public AWS endpoints Table 6: AWS OpsWorks deployment features Capability Description Provision OpsWorks Stacks can create and manage certain AWS services as part of your application using Chef recipes With OpsWorks for Chef Automate or Puppet Enterprise infrastructure must be created elsewhere and registered to the Chef or Puppet instance Refer to Create a New Stack for more details on creating resources with OpsWorks Stacks Configure All OpsWorks operating models support configuration management of registered nodes OpsWorks Stacks supports customization of other infrastructure in your environment through layer customization Refer to OpsWorks Layer Basics for more details on customizing resources with OpsWorks Layers Deploy All OpsWorks operating models support deployment and update of applications running on registered nodes Refer to Deploying Apps for more details on how to deploy applications with OpsWorks Stacks Scale OpsWorks Stacks can handle automatically scaling instances in your environment based on changes in incoming traffic Refer to Using Automatic Loadbased Scaling for more details on auto scaling with OpsWorks Stacks Monitor OpsWorks provides several features to monitor your application infrastructure and deployment success In addition to Chef/Puppet logs OpsWorks provides a set of configurable Amazon CloudWatch and AWS CloudTrail metrics for full stack monitoring Refer to 
Monitoring Stacks using Amazon CloudWatch for more details on resource monitoring in OpsWorks OpsWorks provides a complete flexible and automated solution that works with existing and popular tools while allowing application owners to maintain fullstack control of an application The following example shows a typical use case for AWS OpsWorks Stacks as it is used to create and manage a three tier web application 13Overview of Deployment Options on AWS AWS Whitepaper Additional Deployment Services Figure 6: AWS OpsWorks Stacks use case This next example shows a typical use case for AWS OpsWorks for Chef Automate or Puppet Enterprise as it is used to manage the compute instances of a web application Figure 7: AWS OpsWorks with Chef Automate or Puppet Enterprise use case Additional Deployment Services Amazon Simple Storage Service (Amazon S3) can be used as a web server for static content and single page applications (SPA) Combined with Amazon CloudFront to increase performance in static content delivery using Amazon S3 can be a simple and powerful way to deploy and update static content More details on this approach can be found in Hosting Static Websites on AWS whitepaper 14Overview of Deployment Options on AWS AWS Whitepaper Prebaking vs Bootstrapping AMIs Deployment Strategies In addition to selecting the right tools to update your application code and supporting infrastructure implementing the right deployment processes is a critical part of a complete wellfunctioning deployment solution The deployment processes that you choose to update your application can depend on your desired balance of control speed cost risk tolerance and other factors Each AWS deployment service supports a number of deployment strategies This section will provide an overview of generalpurpose deployment strategies that can be used with your deployment solution Prebaking vs Bootstrapping AMIs If your application relies heavily on customizing or deploying applications onto Amazon EC2 instances then you can optimize your deployments through bootstrapping and prebaking practices Installing your application dependencies or customizations whenever an Amazon EC2 instance is launched is called bootstrapping an instance If you have a complex application or large downloads required this can slow down deployments and scaling events An Amazon Machine Image (AMI) provides the information required to launch an instance (operating systems storage volumes permissions software packages etc) You can launch multiple identical instances from a single AMI Whenever an EC2 instance is launched you select the AMI that is to be used as a template Prebaking is the process of embedding a significant portion of your application artifacts within an AMI Prebaking application components into an AMI can speed up the time to launch and operationalize an Amazon EC2 instance Prebaking and bootstrapping practices can be combined during the deployment process to quickly create new instances that are customized to the current environment Refer to Best practices for building AMIs for more details on creating optimized AMIs for your application Blue/Green Deployments A blue/green deployment is a deployment strategy in which you create two separate but identical environments One environment (blue) is running the current application version and one environment (green) is running the new application version Using a blue/green deployment strategy increases application availability and reduces deployment risk by simplifying the rollback process if a 
deployment fails Once testing has been completed on the green environment live application traffic is directed to the green environment and the blue environment is deprecated A number of AWS deployment services support blue/green deployment strategies including Elastic Beanstalk OpsWorks CloudFormation CodeDeploy and Amazon ECS Refer to Blue/Green Deployments on AWS for more details and strategies for implementing blue/green deployment processes for your application Rolling Deployments A rolling deployment is a deployment strategy that slowly replaces previous versions of an application with new versions of an application by completely replacing the infrastructure on which the application 15Overview of Deployment Options on AWS AWS Whitepaper InPlace Deployments is running For example in a rolling deployment in Amazon ECS containers running previous versions of the application will be replaced onebyone with containers running new versions of the application A rolling deployment is generally faster to than a blue/green deployment; however unlike a blue/ green deployment in a rolling deployment there is no environment isolation between the old and new application versions This allows rolling deployments to complete more quickly but also increases risks and complicates the process of rollback if a deployment fails Rolling deployment strategies can be used with most deployment solutions Refer to CloudFormation Update Policies for more information on rolling deployments with CloudFormation; Rolling Updates with Amazon ECS for more details on rolling deployments with Amazon ECS; Elastic Beanstalk Rolling Environment Configuration Updates for more details on rolling deployments with Elastic Beanstalk; and Using a Rolling Deployment in AWS OpsWorks for more details on rolling deployments with OpsWorks InPlace Deployments An inplace deployment is a deployment strategy that updates the application version without replacing any infrastructure components In an inplace deployment the previous version of the application on each compute resource is stopped the latest application is installed and the new version of the application is started and validated This allows application deployments to proceed with minimal disturbance to underlying infrastructure An inplace deployment allows you to deploy your application without creating new infrastructure; however the availability of your application can be affected during these deployments This approach also minimizes infrastructure costs and management overhead associated with creating new resources Refer to Overview of an InPlace Deployment for more details on using inplace deployment strategies with CodeDeploy Combining Deployment Services There is not a “one size fits all” deployment solution on AWS In the context of designing a deployment solution it is important to consider the type of application as this can dictate which AWS services are most appropriate To deliver complete functionality to provision configure deploy scale and monitor your application it is often necessary to combine multiple deployment services A common pattern for applications on AWS is to use CloudFormation (and its extensions) to manage generalpurpose infrastructure and use a more specialized deployment solution for managing application updates In the case of a containerized application CloudFormation could be used to create the application infrastructure and Amazon ECS and Amazon EKS could be used to provision deploy and monitor containers AWS deployment services can also be 
combined with third-party deployment services. This allows organizations to easily integrate AWS deployment services into their existing CI/CD pipelines or infrastructure management solutions. For example, OpsWorks can be used to synchronize configurations between on-premises and AWS nodes, and CodeDeploy can be used with a number of third-party CI/CD services as part of a complete pipeline.

Conclusion
AWS provides a number of tools to simplify and automate the provisioning of infrastructure and the deployment of applications; each deployment service offers different capabilities for managing applications. To build a successful deployment architecture, evaluate the available features of each service against the needs of your application and your organization.

Contributors
Contributors to this document include:
• Bryant Bost, AWS ProServe Consultant

Further Reading
For additional information, see:
• AWS Whitepapers page

Document Revisions
To be notified about updates to this whitepaper, subscribe to the RSS feed.
• Minor update: Blue/Green Deployments section revised for clarity (April 8, 2021)
• Whitepaper updated: Updated with latest services and features (June 3, 2020)
• Initial publication: Whitepaper first published (March 1, 2015)

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
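The short sketches that follow illustrate, in PowerShell (the scripting language used elsewhere in this document set), how some of the deployment operations discussed in this paper can be driven programmatically. All stack names, template paths, resource names, and version labels are hypothetical. First, the CloudFormation provisioning workflow: create a stack from a local template and wait for it to finish.

```powershell
# Sketch: create a CloudFormation stack from a local template and wait for completion.
Import-Module AWS.Tools.CloudFormation

$templateBody = Get-Content -Path '.\three-tier-app.yaml' -Raw   # hypothetical template

New-CFNStack -StackName 'three-tier-app' `
             -TemplateBody $templateBody `
             -Parameter @{ ParameterKey = 'EnvironmentType'; ParameterValue = 'production' }

# Block until the stack reaches CREATE_COMPLETE, or fail after the timeout (in seconds).
Wait-CFNStack -StackName 'three-tier-app' -Status CREATE_COMPLETE -Timeout 1800
```

Updating the same template and calling Update-CFNStack follows the same pattern and is how template-driven infrastructure changes are rolled out.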
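The rolling deployment strategy described above can be triggered on Amazon ECS by pointing an existing service at a new task definition revision; ECS then replaces containers according to the service's deployment configuration. The cluster, service, and revision below are hypothetical.

```powershell
# Sketch: start a rolling deployment on an existing ECS service by updating its task definition.
Import-Module AWS.Tools.ECS

Update-ECSService -Cluster 'web-cluster' `
                  -Service 'web-service' `
                  -TaskDefinition 'web-app:42' `
                  -DesiredCount 4

# Inspect the returned service description to watch the Deployments collection
# as old tasks drain and new tasks reach a steady state.
Get-ECSService -Cluster 'web-cluster' -Service 'web-service'
```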
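For Elastic Beanstalk, deploying a new application version generally means registering a source bundle stored in Amazon S3 and then pointing an environment at that version. The sketch below assumes the AWS.Tools.ElasticBeanstalk module and the module's flattened parameter naming for the source bundle; the application, environment, bucket, and key names are hypothetical, and parameter names should be verified against the installed module.

```powershell
# Sketch: register a new Elastic Beanstalk application version from an S3 source bundle
# and roll it out to an existing environment.
Import-Module AWS.Tools.ElasticBeanstalk

New-EBApplicationVersion -ApplicationName 'my-web-app' `
                         -VersionLabel 'v42' `
                         -SourceBundle_S3Bucket 'example-deploy-bucket' `
                         -SourceBundle_S3Key 'bundles/my-web-app-v42.zip'

# Point the running environment at the new version; Elastic Beanstalk performs the
# deployment according to the environment's configured deployment policy.
Update-EBEnvironment -EnvironmentName 'my-web-app-prod' -VersionLabel 'v42'
```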
|
General
|
consultant
|
Best Practices
|
Overview_of_Oracle_EBusiness_Suite_on_AWS
|
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtml Overview of Oracle E Business Suite on AWS First Published May 2017 Updated September 10 2021 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 2 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 3 Contents Introduction 5 AWS overview 5 Amazon Web Services concepts 6 Region s and Availability Zones 6 Elastic Load Balancing 7 Amazon Elastic Block Store (Amazon EBS) 8 Amazon Machine Image (AMI) 8 Amazon S imple Storage Service (Amazon S3) 8 Amazon Route 53 8 Amazon Virtual Private Cloud (Amazon VPC) 8 Amazon Elastic File System (Amazon EFS) 9 AWS security and compliance 9 Oracle E Business Suite on AWS 9 Oracle E Business Suite components 10 Oracle E Business Suite architecture on AWS 11 Benefits of Oracle E Business Suite on AWS 15 Oracle E Business Suite on AWS use cases 18 Conclusion 18 Contri butors 18 Further reading 19 Document versions 19 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 4 Abstract Oracle E Business Suite is a popular suite of integrated business applications for automating enterprise wide processes like customer relationship management financial management and supply chain management Th is is the first whitepaper in a series focused on Oracle E Business Suite on Amazon Web Services (AWS) It provides an architectural overview for running Oracle E Business Suite 122 on AWS The whitepaper series is intended for customers and partners who want to learn about the benefits and options for running Oracle E Busines s Suite on AWS Subsequent whitepapers in this series will discuss advanced topics and outline best practices for high availability security scalability performance migration disaster recovery and management of Oracle E Business Suite systems on AWS This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overv iew of Oracle E Business Suite on AWS 5 Introduction Almost all large enterprises use 
enterprise resource planning (ERP) systems for managing and optimizing enterprise wide business processes Cloud adoption among enterprises is growing rapidly with many adopting a cloud first strategy for new projects and migrating their existing systems from on premises to AWS ERP systems such as Oracle E Business Suite are mission c ritical for most enterprises and figure prominently in considerations for planning an enterprise cloud migration This whitepaper provide s a brief overview of Oracle E Business Suite and a reference architecture for deploying Oracle E Business Suite on AWS It also discuss es the benefits of running Oracle E Business suite on AWS and various use cases AWS overview AWS provides on demand computing resources and services in the cloud with pay as yougo pricing As of the date of this publication AWS serves over a million active customers in more than 190 countries and is available in 25 AWS Regions worldwide You can run a server on AWS and log in configure secure and operate it just as you would operate a server in your own data center Using AWS resources for your compute needs is like purchasing electricity from a power company instead of runn ing your own generator and it provides many of the same benefits: • The capacity you get exactly matches your needs • You pay only for what you use • Economies of scale result in lower costs • The service is provided by a vendor who is experienced in running l argescale compute and network systems This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 6 Amazon Web Services concepts This section describes the AWS infrastructure and services that are part of the reference architecture for running Oracle E Business Suite on AWS Regions and Availability Zones Each Region is a separate geographi c area isolated from the other R egions Regions provide you the ability to place resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances and data in multiple locations Resourc es aren't replicated across R egions unless you do so specifically An AWS account provides multiple Regions so you can launch your application in locations that meet your requirements For example you might w ant to launch your application in Europe to be closer to your European customers or to meet legal requirements Each Region has multiple isolated locations known as Availability Zones Each Availability Zone runs on its own physically distinct independe nt infrastructure and is engineered to be highly reliable Common points of failure such as generators and cooling equipment are not shared across Availability Zones Because Availability Zones are physically separate even extremely uncommon disasters such as fires tornados or flooding would only affect a single Availability Zone Each Availability Zone is isolated but the Availability Zones in a Region are connected through low latency links The following figure illustrates the relationship between Regions and Availability Zones Relationship between AWS Regions and Availability Zones This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 7 The following figure shows the Regions and the number of Availability Zones in each 
Region provided by an AWS account at the time of this publication For the most current list of Regions and Availability Zones see Global Infrastructure Note : You can’t describe or access additional Regions from the AWS GovCloud (US) Region or China (Beijing) Region Map of AWS Regions and Availability Zones Amazon Elastic Compute Cloud (Amazon EC2) Amazon EC2 is a web service that provides resizable compute capacity in the cloud billed by the hour or second (minimum of 60 seconds) You can run virtual machines (EC2 instances) ranging in size from one vCPU and one GB memory to 448 vCPU and 6six TB memory You have a choice of operating systems including Windows Server 2008/2012 /2016/2019 Oracle Linux Red Hat Enterprise Linux and SUSE Linux Elastic Load Balanc ing Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances containers and IP addresses in one or mor e Availability Zones o n AWS Cloud It enables you to achieve greater levels of fault tolerance in your applications seamlessly providing the required amount of load balancing capacity needed to distribute application traffic Elastic Load Balancing can be used for load balancing web server traffic This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 8 Amazon Elastic Block Store (Amazon EBS) Amazon EBS provides persistent block level storage volumes for use with EC2 instances in the AWS Cloud Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure offering high availability and durability EBS volumes offer the consistent and low latency performance needed to run your workloads Amazon Machine Image (AMI) An Amazon Machine Image (AMI) is simply a packaged up environment that includes all the necessary bits to set up and boot your instance Your AMIs are your unit of deployment A mazon EC2 uses Amazon EBS and Amazon Simple Storage Service (Amazon S3) to provide reliable scalable storage of your AMIs so th ey can boot when you need them Amazon Simple Storage Service (Amazon S3) Amazon S3 provides developers and IT teams with secure durable highly scalable object storage Amazon S3 is easy to use It provides a simple web services interface you can use to store and retrieve any amount of data from anywhere on the web With Amazon S3 you pay only for the storage you actually use There is no minimum fee and no setup cost Amazon Route 53 Amazon Route 53 is a highly available and scalable clou d Domain Name System (DNS) web service It is designed to give developers and businesses an extremely reliable and costeffective way to route end users to internet applications by translating names like wwwexamplecom into the numeric IP address Amazon Virtual Private Cloud (Amazon VPC) Amazon VPC enables you to provision a logically isolated section of the AWS Cloud in which you can launch AWS resources in a virtual network that you define You have complete control over your virtual networking environment including selection of your own private IP address range creation of subnets and configuration of route tables and network gateways This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of 
Oracle E Business Suite on AWS 9 You can use multiple layers of security including security groups and network access control lists to help control access to EC2 instances in each subnet Additionally you can create a Hardware Virtual Private Network (VPN) connection between your corporate data center and your VPC and use the AWS Cloud as an extension of your corporate data center Amazon Elastic File System (Amazon EFS) Amazon EFS is a file storage service for EC2 instances Amazon EFS supports the NFS v4 protocol so the applications and tools that you use today work seamlessly with Amazon EFS Multiple EC2 instances can access an Amazon EFS file system at the same time providing a common data source for workloads and applications running on more than one instance With Amazon EFS storage capacity is elastic growing and shrinking automatically as you add and remove files so your applications have the storage they need when the y need it AWS security and compliance The AWS Cloud security infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today Security on AWS is very similar to security in your on premises data center —but without the costs and complexities involved in protecting facilities and hardware AWS provides a secure global infrastructure plus a range of features that you can use to help secure your systems and data in the cloud To learn more see AWS Cloud Security AWS compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud AWS engages with external certifying bodies and indepe ndent auditors to provide customers with extensive information regarding the policies processes and controls established and operated by AWS To learn more see AWS Compliance Oracle E Business Suite o n AWS This section cover s the major components of Oracle E Business Suite and its architecture on AWS It is important to have a good understanding of Oracle E Business Suite architecture and its major components to successfully deploy and configure it on AWS This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 10 Oracle E Business Suite components Oracle E Business Suite has a three tier architecture consisting of client application and database ( DB) tiers Oracle E Business Suite three tier architecture The client tier contains the client user interface which is provided through HTML or Java applets in a web browser for forms based applications The application tier consists of Oracle Fusion Middleware (Oracle HTTP Server and Oracle WebLogic Server) and the concurrent processing server The Fus ion Middleware server has HTTP Java and Forms services that process the business logic and talk to the database tier The Oracle HTTP Server (OHS) accepts incoming HTTP requests from clients and routes the requests to the Oracle Web Logic Server (WLS) which hosts the business logic and other server side components The HTTP services forms services and concurrent processing server can be installed on multiple application tier nodes and load balanced The database tier consists of an Oracle database tha t stores the data for Oracle E Business Suite This tier has the Oracle database run items and the Oracle database files that physically store the tables indexes and other database objects in the 
system. See the Oracle E-Business Suite Concepts guide for a deeper dive on the Oracle E-Business Suite architecture components.

Oracle E-Business Suite architecture on AWS

The following reference diagram illustrates how Oracle E-Business Suite can be deployed on AWS. The application and database tiers are deployed across multiple Availability Zones for high availability.

Sample Oracle E-Business Suite deployment on AWS

User requests from the client tier are routed using Amazon Route 53 DNS to the Oracle E-Business Suite application servers deployed on EC2 instances through Application Load Balancer. The OHS and the Oracle WLS are deployed on each application tier instance. The OHS accepts the requests from Application Load Balancer and routes them to the Oracle WLS. The Oracle WLS runs the appropriate business logic and communicates with the Oracle database. The various modules and functions within Oracle E-Business Suite share a common data model; there is only one Oracle database instance for multiple application tier nodes.

Load balancing and high availability

Application Load Balancer is used to distribute incoming traffic across multiple application tier instances deployed across multiple Availability Zones. You can add and remove application tier instances from your load balancer as your needs change, without disrupting the overall flow of information. Application Load Balancer ensures that only healthy instances receive traffic by detecting unhealthy instances and rerouting traffic across the remaining healthy instances. If an application tier instance fails, Application Load Balancer automatically reroutes the traffic to the remaining running application tier instances. In the unlikely event of an Availability Zone failure, user traffic is routed to the remaining application tier instances in the other Availability Zone. Other third-party load balancers, like the F5 BIG-IP, are available on AWS Marketplace and can be used as well. See My Oracle Support document 1375686.1 for more details on using load balancers with Oracle E-Business Suite (sign-in required).

The database tier is deployed on Oracle running on two EC2 instances in different Availability Zones. Oracle Data Guard replication (maximum protection or maximum availability mode) is configured between the primary database in one Availability Zone and a standby database in another Availability Zone. In case of failure of the primary database, the standby database is promoted as the primary, and the application tier instances will connect to it. For more details on deploying Oracle Database on AWS, see the Oracle Database on AWS Quick Start.

Scalability

When using AWS, you can scale your application easily due to the elastic nature of the cloud. You can scale up the Oracle E-Business Suite application tier and database tier instances simply by changing the instance type to a larger instance type. For example, you can start with an r5.large instance with two vCPUs and 16 GiB RAM and scale up all the way to an x1e.32xlarge instance with 128 vCPUs and 3,904 GiB of RAM. After selecting a new instance type, only a restart is required for the changes to take effect. Typically, the resizing operation is completed in a few minutes; the EBS volumes remain attached to the instances, and no data migration is required.
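As one illustration of this scale-up path, the sketch below (Python with boto3) stops an instance, changes its instance type, and starts it again. The instance ID and target instance type are hypothetical placeholders; the whitepaper does not prescribe this exact procedure, only the general approach of resizing in place.

```python
import boto3

ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical application or database tier instance
TARGET_TYPE = "r5.2xlarge"            # hypothetical larger instance type

# The instance type can only be changed while the instance is stopped.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# Change the instance type in place; the EBS volumes stay attached,
# so no data migration is needed.
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    InstanceType={"Value": TARGET_TYPE},
)

# Start the resized instance and wait until it is running again.
ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
print(f"{INSTANCE_ID} is now running as {TARGET_TYPE}")
```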
You can scale out the application tier by adding and configuring more application tier instances when required. You can launch a new EC2 instance in a few minutes. However, additional work is required to ensure that the AutoConfig files are correct and that the new application tier instance is correctly configured and registered with the database. Although it might be possible to automate scaling out the application tier using scripting, this requires an additional technical investment. A simpler alternative might be to use standby EC2 instances, as explained in the next section.

Standby EC2 instances

To meet extra capacity requirements, additional application tier instances of Oracle E-Business Suite can be pre-installed and configured on EC2 instances. These standby instances can be shut down until extra capacity is required. Charges are not incurred when EC2 instances are shut down; only EBS storage charges are incurred. At the time of this publication, EBS General Purpose (gp2) volumes are priced at $0.10 per GB per month in the US East (Ohio) Region. Therefore, for an EC2 instance with 120 GB of disk space, the storage charge is only $12 per month. These pre-installed standby instances provide you the flexibility to meet additional capacity needs as and when required. In this model, you need to ensure that any configuration changes, patching, and maintenance activities are also applied to the standby node to avoid inconsistencies. (A sketch of starting such standby instances on demand appears after the storage table below.)

Storage options and backup

AWS offers a complete range of cloud storage services to support both application and archival compliance requirements. You can choose from object, file, block, and archival services. The following table lists some of the storage options and how they can be used when deploying Oracle E-Business Suite on AWS.

Table 1 – Storage options and how they can be used

Storage type: Amazon EBS – gp2/gp3 volumes
Storage characteristics: SSD-based block storage with up to 16,000 input/output operations per second (IOPS) per volume
Oracle E-Business Suite use case: Boot volumes, operating system and software binaries, Oracle database archive logs

Storage type: Amazon EBS – io1/io2/io2 Block Express volumes
Storage characteristics: SSD-based block storage with up to 64,000 IOPS per volume; multiple volumes can be striped together for higher IOPS. By attaching io2 volumes to r5b instance types, you can achieve up to 256,000 IOPS per volume
Oracle E-Business Suite use case: Storage for the database tier (ASM disks), Oracle data files, redo logs

Storage type: Amazon EFS
Storage characteristics: Highly durable, NFSv4.1-compatible file system
Oracle E-Business Suite use case: PCP out and log files, media, staging

Storage type: Amazon S3
Storage characteristics: Object store with 99.999999999% durability
Oracle E-Business Suite use case: Backups, archives, media, staging

Storage type: Amazon Glacier
Storage characteristics: Extremely low-cost and highly durable storage for long-term backup and archival
Oracle E-Business Suite use case: Long-term backup and archival

Storage type: Amazon EC2 instance storage
Storage characteristics: Ephemeral or temporary storage; data persists only for the lifetime of the instance
Oracle E-Business Suite use case: Swap, temporary files, reports cache, web server cache
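The following is a hedged illustration of the standby-instance model described above (Python with boto3): it finds stopped application tier instances by a tag and starts them when extra capacity is needed. The tag key and value are hypothetical; the whitepaper does not define a specific tagging scheme or automation for this.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical tag identifying pre-installed standby application tier instances.
filters = [
    {"Name": "tag:Role", "Values": ["ebs-apps-standby"]},
    {"Name": "instance-state-name", "Values": ["stopped"]},
]

reservations = ec2.describe_instances(Filters=filters)["Reservations"]
standby_ids = [
    inst["InstanceId"]
    for res in reservations
    for inst in res["Instances"]
]

if standby_ids:
    # Start the standby instances; compute charges resume only while they run.
    ec2.start_instances(InstanceIds=standby_ids)
    ec2.get_waiter("instance_running").wait(InstanceIds=standby_ids)
    print("Started standby application tier instances:", standby_ids)
else:
    print("No stopped standby instances found.")
```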
The application and database servers use EBS volumes for persistent block storage. Amazon EBS has two types of solid-state drive (SSD)-backed volumes: Provisioned IOPS SSD (io1, io2, io2 Block Express) for latency-sensitive database and application workloads, and General Purpose SSD (gp2, gp3), which balances price and performance for a wide variety of transactional workloads, including development and test environments and boot volumes.

General Purpose SSD volumes provide a good balance between price and performance and can be used for boot volumes, the Oracle E-Business Suite application tier file system, and logs. They are designed to offer single-digit millisecond latencies and deliver a consistent baseline performance of 3 IOPS/GB for gp2, and 3,000 IOPS regardless of volume size for gp3, up to a maximum of 16,000 IOPS per volume.

Provisioned IOPS volumes are the highest-performance EBS storage option and should be used along with Oracle Automatic Storage Management (ASM) for storing the Oracle database data and log files. You can provision up to 64,000 IOPS per io1/io2 volume and 256,000 IOPS per io2 Block Express volume. These volumes are designed to achieve single-digit millisecond latencies and to deliver the provisioned IOPS 99.9% of the time for io1 and 99.999% of the time for io2 and io2 Block Express. You can use Oracle ASM to stripe the data across multiple EBS volumes for higher IOPS and to scale the database storage. To maximize the performance of EBS volumes, use EBS-optimized EC2 instances and instances based on the AWS Nitro System.

EC2 instances have temporary SSD-based block storage called instance storage. Instance storage persists only for the lifetime of the instance and should not be used to store valuable long-term data. Instance storage can be used as swap space and for storing temporary files such as the report cache or web server cache. If you are using Oracle Linux as the operating system for the database server, you can use the instance storage for the Oracle Database Smart Flash Cache and improve database performance.

Parallel Concurrent Processing (PCP) allows you to distribute concurrent managers across multiple nodes so that you can use the available capacity and provide failover. You can use a shared file system such as Amazon EFS for storing the log and out files while implementing PCP in Oracle E-Business Suite. However, this configuration may not be ideal for environments with an extremely large number of log and out files. Oracle E-Business Suite Release 12.2 introduced a new environment variable, APPLLDM, to specify whether log and out files are stored in a single directory for all Oracle E-Business Suite products or in one subdirectory per product. APPLLDM can be set to ‘single’ or ‘product’; setting it to ‘product’ avoids the highest concentration of log and out files in a single directory and may avoid performance issues.

Amazon S3 provides low-cost, scalable, and highly durable storage and should be used for storing backups. You can use Oracle Recovery Manager (RMAN) to back up your database, then copy the data to Amazon S3. Alternatively, you can use the Oracle Secure Backup (OSB) Cloud
Module to back up your database The OSB Cloud Module is fully integrated with RMAN features and functionality and the backups are sent directly to Amazon S3 for storage Benefits of Oracle E Business Suite on AWS The follo wing sections discuss some of the key benefits of running Oracle E Business Suite on AWS This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 16 Agility and speed Traditional deployment involves a long procurement process in which each stage is timeintensive and requires large capital outlay and multiple approvals With AWS you can provision new infrastructure and Oracle E Business Suite environments in minutes compared to waiting weeks or months to procure and deploy traditional infrastructure Lower total cost of ownership In an o npremise s environment you typically pay hardware support costs virtualization licensing and support data center costs and so on You can eliminate or reduce all of these costs by moving to AWS You benefit from the economies of scale and efficiencies provided by AWS and pay only for the compute storage and other resources you use Cost savings for nonproduction environments You can shut down your non production environments when you are not using them and save costs For example if you are using a development environment for only 40 hours a week ( eight hours a day five days a week) you can shut down the environment when it’ s not in use You pay only for 40 hours of Amazon EC2 compute charges instead of 168 hours (24 hours a day seven days a week) for an on premises environment running all the time; this can result in a saving of 75% for EC2 compute charges Replace capital expenditure ( CapEx ) with operating expenditure (OpEx ) You can s tart an Oracle E Business Suite implementation or project on AWS without any upfront cost or commitment for compute storage or network infrastructure Unlimited environments In an o npremise s environment you usually have a limited set of environments to work with; provisioning additional environments take s a long time or might not be possible at all You do not face these restrictions when using AWS ; you can create virtually any number of new environments in minutes as required You can have a different environment for each major project so that each team can work independently with the resources they need without interfering with other teams ; the teams can then converge at a common integrati on environment when they are This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 17 ready You can shut down these environments when the project finishes and stop paying for them Have Moore’s Law work for you instead of against you Moore's Law refers to the observation that the number of transistors on a microchip doubles every two years In an on premises environment you end up owning hardware that depreciat es in value every y ear You are locked into the price and capacity of the hardware after it is acquired plus you have ongoing hardware support costs With AWS you can switch your underlying instances to the faster more powerful next generation AWS instance types as they b ecome available Right size anytime Customer s often oversize 
environments for initial phases and are then unable to cope with growth in later phases With AWS you can scale the usage up or down at any time You pay only for the computing capacity you use for the duration you use it Instance sizes can be changed in minutes through the AWS Management Console or the AWS Application Programming Interface (API) or Command Line Interface (CLI) Assess the resource usage on current system and launch with appr opriate size instances for the enterprise resource planning ( ERP) environment to reduce the cost overhead Lowcost disaster recovery You can build extremely low cost standby disaster recovery environments for your existing deployments and incur costs only for the duration of the outage CloudEndure Disaster Recovery for Oracle brings significant savin gs on disaster recovery total cost of ownership ( TCO ) compared to traditional disaster recovery solution s Ability to test application performance Although performance testing is recommended prior to any major change to an Oracle EBusiness Suite environme nt most customers only performance test their Oracle E Business Suite application during the initial launch in the yet tobedeployed production hardware Later releases are usually never performance tested due to the expense and lack of environment requi red for performance testing With AWS you can minimize the risk of discovering performance issues later in production An AWS Cloud environment can be created easily and quickly just for the duration of the performance test and only used when needed Aga in you are charged only for the hours the environment is used This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 18 No end of life for hardware or platform All hardware platforms have endoflife dates at which point the hardware is no longer supported and you are forced to buy new hardware again In the A WS Cloud you can simply upgrade the platform instances to new AWS instance types in a single click at no cost for the upgrade Oracle E Business Suite on AWS use cases Oracle E Business Suite customers are using AWS for a variety of use cases including the following environments: • Migration of existing Oracle E Business Suite production environments • Implementation of new Oracle E Business Suite production environments • Implementing disaster recovery environments • Running Oracle E Business Suite development test demonstration proof of concept (POC) and training environments • Temporary environments for migrations and testing upgrades • Temporary environments for performance testing Conclusion AWS can be an extremely cost effective secure scala ble high perform ing and flexible option for deploying Oracle E Business Suite This whitepaper outline s some of the benefits and use cases for deploying Oracle E Business Suite on AWS If you are looking for migration specific guidance see the Migrating Oracle E Business Suite on AWS whitepaper Subsequent whitepapers in this series will cover advanced topics and outline best practices for high availability security scalability performance disaster recovery and management of Oracle E Business Suite systems on AWS Contributors Contributors to this document include : • Ejaz Sayyed Sr Partner Solutions Architect Amazon Web Services • Praveen Katari Partner Managemen t Solutions Architect Amazon Web Services This version has been 
archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 19 • Ashok Sundaram Principal Solutions Architect Amazon Web Services Further reading For additional information see: • AWS Whitepapers & Guides • AWS Cloud Security • AWS Compliance • Oracle R122 Document • Using Load Balancers with Oracle EBS (Sign in to Oracle required) • Oracle Database on AWS • AWS EBS Optimized instances • Oracle APPLLDM document (Sign in to Oracle required) Document version s Date Description September 10 2021 Updated logos new EBS storage and EC2 instance types performance metrics May 2017 First publication
|
General
|
consultant
|
Best Practices
|
Overview_of_the_Samsung_Push_to_Talk_PTT_Solution_on_AWS
|
For the latest technical content refer t o: https://docsawsamazoncom/whitepapers/latest/ samsungpttaws/samsungpttawshtml Overview of the Samsung Push to Talk (PTT) Solution on AWS First published October 2017 Updated March 30 2021 This paper has been archived This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 AWS Overview 1 AWS Infrastructu re and Services for Samsung PTT Solution 2 Regions and Availability Zones 2 Amazon Elastic Cloud Compute 2 Elastic Load Balancing 3 Amazon Elastic Block Store 3 Amazon Machine Image 3 Amazon Simple Storage Service 3 Amazon Virtual Private Cloud 3 AWS Security and Compliance 4 AWS Features Enabling Virtualization of Samsung PTT Solution 4 Samsung PTT Solution on AWS 6 Samsung PTT Solution Components 6 Samsung PTT Architecture on AWS 7 Benefits of Samsung PTT Solution on AWS 9 Samsung PTT on AWS Use Cases 11 Conclusion 11 Contributors 11 Document Revisions 12 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract The Samsung Push to Talk (PTT) solution is a popular suite of integrated components that enabl es mobile workforce communication This whitepaper provides an architectural overview for running the Samsung PTT solution suite on AWS This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Overview of Samsung Push to Talk Solution on AWS Page 1 Introduction All major enterprises public safety and communications service organizations with mobile workforces can benefit from a Push to Talk (PTT) solution The PTT solution is a twoway radio type service that enables custom ers to push a button and instantly communicate with large audiences over a variety of devices and networks Sectors such as construction hospitality security oil and gas utilities manufacturing field services education and transportation already rely on previous generation technologies to perform this function However cloud adoption among enterprises is growing rapidly with many adopting a cloud first strategy for new projects and migrating their existing systems fro m on premises to Amazon Web Services (AWS) Enterprises can deploy the Samsung PTT solution on AWS This whitepaper provides a n overview of the Samsung PTT solution and a reference architecture for deploying Samsung PTT on AWS We also discuss the benefits of running the Samsung PTT solution on AWS and various use 
cases AWS Overview AWS provides on demand computing resources and services in the cloud with payasyougo pricing As of this publication AWS serves over a million active customers in more tha n 190 countries and is available in 16 AWS Regions worldwide You can access server s on AWS and log in configure secure and operate them just as you would operate server s in your own data center When you u se AWS resources for your compute needs it’s like purchasing electricity from a power company instead of running your own generator and it provides many of the same benefits including : • The capacity you get exactly matches your needs • You pay only for what you use • Economies of scale result in lower costs • The service is provided by a vendor who is experienced in running large scale compute and network systems This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Overview of Samsung Push to Talk Solution on AWS Page 2 AWS Infrastructure and Services for Samsung PTT Solution This section describes the AWS infrastructure and services that are part of the reference architecture that you need to use to run the Samsung PTT solution on AWS Region s and Availability Zones Each AWS Region is a separate geographic area that is isolated from the other Regions Regions provide you the ability to place resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances and data in multiple locations Resources aren't replicated acros s Regions unless you do so specifically An AWS account provides multiple Regions so that you can launch your application s in locations that meet your requirements For example you might want to launch your application s in Europe to be closer to your Euro pean customers or to meet legal requirements Each Region has multiple isolated locations known as Availability Zones Each Availability Zone runs on its own physically distinct independent infrastructure and is engineered to be highly reliable Common points of failure such as generators and cooling equipment a ren’t shared across Availability Zones Each Availability Zone is isolated but the Availability Zones in a Region are connected through low latency links For more information about Regions an d Availability Zones see Regions and Availability Zones in the Amazon E C2 User Guide for Linux Instances For the most current list of Regions and Availability Zones see AWS Global Infrastructure Amazon Elastic Cloud Compute Amazon Elastic Compute Cloud (Amazon EC2) is a web service th at provides resizable compute capacity in the cloud that is billed by the hour You can run virtual machines (EC2 instances) ranging in size from 1 vCPU and 1 GB memory to 128 vCPU and 2 TB memory You have a choice of operating systems including Windows S erver 2008/2012 Oracle Linux Red Hat Enterprise Linux and SUSE Linux This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Overview of Samsung Push to Talk Solution on AWS Page 3 Elastic Load Balanc ing Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances in the cloud It enables you to achieve greater levels of fault tolerance in your applications seamlessly providing the required amount of load balancing capacity needed to distribute application traffic Elastic Load Balancing can be used for load balancing web server 
traffic Amazon Elastic Block Store Amazon Elastic Block Store (Amazon EBS) provides persistent block level storage volumes for use with EC2 instances in the AWS Cloud Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failu re offering high availability and durability EBS volumes offer the consistent and low latency performance needed to run your workloads Amazon Machine Image An Amazon Machine Image (AMI) is simply a packaged up environment that includes all the necessa ry bits to set up and boot your EC2 instance Your AMIs are your unit of deployment Amazon EC2 uses Amazon EBS and Amazon S3 to provide reliable scalable storage of your AMIs so that we can boot them when you ask us to do so Amazon Simple Storage Servic e Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure durable highly scalable object storage Amazon S3 is easy to use It provides a simple web services interface you can use to store and retrieve any amount of data f rom anywhere on the web With Amazon S3 you pay only for the storage you actually use There is no minimum fee and no setup cost Amazon Virtual Private Cloud Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of th e AWS Cloud in which you can launch AWS resources in a virtual network that you define You have complete control over your virtual networking environment including selection of your own private IP address range creation of subnets and configuration of route tables and network This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Overview of Samsung Push to Talk Solution on AWS Page 4 gateways You can leverage multiple layers of security including security groups and network access control lists to help control access to EC2 instances in each subnet Additionally you can create a hardware Virtual Private Net work (VPN) connection between your corporate data center and your VPC and then you can leverage the AWS Cloud as an extension of your corporate data center AWS Security and Compliance The AWS Cloud security infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today Security on AWS is similar to security in your on premises data center but without the costs and complexities involved in protecting facilities and hardware AWS provides a secure global infrastructure plus a range of features that you can use to help secure your systems and data in the cloud To learn more about AWS Security see the AWS Cloud Security Center AWS Compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud AWS engages with external certifying bodies and independent auditors to provide customers with extensive information regarding the p olicies processes and controls established and operated by AWS To learn more about AWS Compliance see the AWS Compliance Center AWS Features Enabling Virtualization of Samsung PTT Solution The feature s used to support the function virtualization of Push to Talk s olution from Samsung on AWS Cloud include the following : This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Overview of Samsung Push to Talk Solution on AWS Page 5 • Elastic Networ k Adapter (ENA) 
– ENA is the next generation network interface and accompanying drivers that provide enhanced networking on EC2 instances ENA is a custom AWS network interface optimized to deliver high throughput and packet per second (PPS) performance and consistently low laten cies on EC2 instances Using ENA customers can use up to 20 Gbps of network bandwidth on specific EC2 instance types Open source licensed ENA drivers are currently available for Linux and Intel Data Plane Development Kit (Intel DPDK) The latest Amazon L inux AMI includes the ENA Linux driver support by default ENA Linux driver source code is also available on GitHub for developers to integrate in their AMIs There is no additional fee to use ENA For m ore information see the Enhanced Networking on Linux in the Amazon E C2 User Guide for Linux Instances • Support for single root I/O virtualization (SR IOV) – The single root I/O virtualization ( SRIOV) interface is an extension to the PCI Express (PCIe) specification SRIOV allows a device such as a network adapter to separate access to its resources among various PCIe hardware functions • Support for data plane development kit (DPDK) – The DPDK is a set of data plane libraries and network interface controller drivers for fast packet processing The DPDK provides a programming framework and enables faster development of highspeed data packet networking applications • Support for nonuniform memory access (NUMA) – NUMA is a design where a cluster of microprocessor s in a multiprocessing system are configured so that they can share memory locally This design improv es performance and enables expansion of the system NUMA is used in a symmetric multiprocessing (SMP ) system • Support for h uge pages – Huge pages is a mechanism that allows the Linux kernel to use the multiple page size capabilities of modern hardware architectures Linux uses pages as the basic unit of memory where physical memory is partitioned and accessed using the basic page unit This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Overview of Samsung Push to Talk Solution on AWS Page 6 • Support for static IP addresses – Amazon EC2 instances can use static IP addresses (survives reboot) and these addresses can be associated with or dissociat ed from a different EC2 instance in any Availability Zone within a Region Samsung PTT Solution on AWS This section cover s the major components of the Samsung PTT solution and its architecture on AWS that you can use to deploy and configure it on AWS Samsung PTT Solution Components The Samsung PTT solution offers advanced 3GPP Rel13 MCPTT ( Mission Critical Push toTalk) features centralized online address book management and security —all delivered over 4G LTE 3G WCDMA/HSPA and Wi Fi networks With PTT users can carry a single device to conveniently access instant broadband data voice service workforce management and mobile productivity applications The Samsung PTT solution leverages eMBMS broadcast technology to transmit data to up to several thousand users within range of a given LTE base station This method allows an extremely rapid flow of information during crisis situations without slowing down traffic on the network Figure 1 — Push to Talk (PTT) network architecture The solution consists of three main components: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web 
Services – Overview of Samsung Push to Talk Solution on AWS Page 7 • Samsung PTT serve r solution sends multimedia such as video or high quality images to thousands of devices simultaneously using a single transmission channel Each device seamlessly receives the incoming data at the same time allowing real time video communication among thousands of users In contrast when relying upon traditional unicast methods in order to send multimedia to different devices a single channel for each device is needed consuming unnecessary air link capacity significantly degrading the quality of video and potentially causing video buffering or stuttering issues • Samsung Call Session Control Function (CSCF ) is a collection of functional capabilities that play an essenti al role in the IP Multimedia Core Network Subsystem ( IMS) The CSCF is responsible for the signaling that control s the communication of IMS User Equipment (UE) with IMSenhanced services across different network access technologies and domains • Samsung Hom e Subscribe r Server (HSS ) is the main IMS database that also acts as a database in Evolved Packet Core ( EPC) The HSS is a super home location register ( HLR) that combines legacy HLR and authentication center ( AuC) functions together for circuit switched ( CS) and packet switched ( PS) domains This component architecture integrates with Long Term Evolution (LTE ) handsets eNodeB and EPC components We integrated this component architecture with AWS services via the public internet to create a test network The next sections describe how we set it up Samsung PTT Architecture on AWS The Samsung PTT solution setup included setting up a VPC with a public subnet that has a bastion host and three private subnets for CSCF HSS s and PTT server s The bear er packet processing acceleration was powered by the AWS ENA with DPDK applications and SR IOV network port capabilities The EC2 instances within each of the private subnet s reside in their respective placement groups as shown in the following diagram This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Overview of Samsung Push to Talk Solution on AWS Page 8 Figure 2 — Push to Talk (PTT) deployment architecture on AWS Effective and accurate dimensioning of the solution is critical for the virtual PTT solution It’s always advisable to contact your Samsung team and get their input before implementing a solution for your organization The configuration used for validation of the PTT solution on AWS is outlined in the following table which lists each function plane number of instances instance type and feature that is enabled Table 1 — EC2 Configuration used for Samsung PTT Solution Validation Function Plane Number of Instances Instance Type Features Enabled CSCF Control plane 1 c44xlarge DPDK SRIOV CSCF User plane 1 m42xlarge DPDK SR IOV PTT Control plane 1 c44xlarge DPDK SR IOV PTT User plane 1 m42xlarge DPDK SR IOV This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Overview of Samsung Push to Talk Solution on AWS Page 9 Function Plane Number of Instances Instance Type Features Enabled HSS Control 1 m42xlarge OSS Operations & maintenance 2 m4xlarge Not applicable Bastion Management 1 T2micro Not applicable Contact the Samsung team for accurate dimensioning of the solution for your organization Benefits of Samsung PTT 
Benefits of Samsung PTT Solution on AWS

The following sections outline the benefits of using Samsung PTT on AWS.

Cost Savings for Non-Production Environments

You can shut down your non-production environments when you aren't using them and save costs. For example, if you are using a development environment for only 40 hours a week (8 hours a day, 5 days a week), you can shut down the environment when it's not in use. You pay only for 40 hours of Amazon EC2 compute charges, instead of 168 hours (24 hours a day, 7 days a week) for an on-premises environment running all the time. This can result in a saving of 75% for Amazon EC2 compute charges.

Unlimited On-Demand Environments

In an on-premises environment, you usually have a limited set of environments to work with. Provisioning additional environments takes a long time or might not be possible at all. You don't face these restrictions when using AWS. You can create virtually any number of new environments in minutes, as necessary. You can have a different environment for each major project so that each team can work independently with the resources they need, without interfering with other teams. Then the teams can converge at a common integration environment when they are ready. You can terminate these environments when the project finishes and stop paying for them.

Lower Total Cost of Ownership

In an on-premises environment, you typically pay hardware support costs, virtualization licensing and support, data center costs, and so on. You can eliminate or reduce all of these costs by moving to AWS. You benefit from the economies of scale and efficiencies provided by AWS and pay only for the compute, storage, and other resources that you use.

Right Size Anytime

Customers often oversize environments for initial phases and then are not able to cope with growth in later phases. With AWS, you can scale your organization's usage up or down at any time. You only pay for the computing capacity you use, for the duration that you use it. Instance sizes can be changed in minutes through the AWS Management Console, the AWS application programming interface (API), or the AWS Command Line Interface (AWS CLI).

Replace CapEx with OpEx

You can start a Samsung PTT solution implementation or project on AWS without any upfront cost or commitment for compute, storage, or network infrastructure.

No Hardware Costs

In an on-premises environment, you end up owning hardware that depreciates in value every year. You are locked into the price and capacity of the hardware once it is acquired, plus you have ongoing hardware support costs. With AWS, you can switch your underlying instances to faster, more powerful next-generation AWS instance types as they become available.

Low-Cost Disaster Recovery

You can build low-cost standby disaster recovery environments for your existing deployments and incur costs only for the duration of the outage.

No End of Life for Hardware or Platform

All hardware platforms have end-of-life dates, at which point the hardware is no longer supported and you are forced to buy new hardware again. In the AWS Cloud, you can simply upgrade the platform instances to new AWS instance
types in a single click at no c ost for the upgrade Samsung PTT on AWS Use Cases Samsung PTT partners and customers are using AWS for a variety of use cases including the following: • Implement ing new Samsung PTT production environments • Implement ing disaster recovery environments • Running Samsung PTT development test demonstration proof of concept (POC) and training environments • Scaling existing Samsung PTT production environments for incremental traffic • Setting up temporary environments for migrations and testing upgrades • Setting up temporary environments for performance testing Conclusion AWS can be an extremely cost effective secure scalable high performing and flexible option for deploying the Samsung PTT solution This whitepaper outlines some of the benefits and use cases for deploying the Samsung PTT solution on AWS Contributors The following individuals and organizations contributed to this document: • Jeong S hang Ohn Principal Engineer Samsung Network Division • Robin Harwani Global Strategic Partner Solution Lead for Telecommunications Amazon Web Services • Andy Kim Solution Architect Amazon Web Services This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Overview of Samsung Push to Talk Solution on AWS Page 12 Document Revisions Date Description March 30 2021 Reviewed for technical accuracy October 2017 First publication
|
General
|
consultant
|
Best Practices
|
Performance_at_Scale_with_Amazon_ElastiCache
|
This paper has been archived For the latest technical content refer t o: https://docsawsamazoncom/whitepapers/latest/scale performanceelasticache/scaleperformanceelasticachehtml Performance at Scale with Amazon ElastiCache Published May 2015 Updated March 30 2021 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 ElastiCache Overview 2 Alternatives t o ElastiCache 2 Memcached vs Redis 3 ElastiCache for Memcached 5 Architecture with ElastiCache for Memcached 5 Selecting the Right Cache Node Size 9 Security Groups and VPC 10 Caching Design Patterns 12 How to Apply Caching 12 Consistent Hashing (Sharding) 13 Client Libraries 15 Be Lazy 16 Write On Through 18 Expiration Date 19 The Thundering Herd 20 Cache (Almost) Everything 21 ElastiCache for Redis 22 Architecture with ElastiCache for Redis 22 Distributing Reads and Writes 24 Multi AZ with Auto Failover 25 Sharding with Redis 26 Advanced Datasets with Redis 29 Game Leaderboards 29 Recommendation Engines 30 Chat and Messaging 31 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Queues 31 Client Libraries and Consistent Hashing 32 Monitoring and Tuning 33 Monitoring Cache Efficiency 33 Watching for Hot Spots 34 Memcached Memory Optimization 35 Redi s Memory Optimization 36 Redis Backup and Restore 36 Cluster Scaling and Auto Discovery 37 Auto Scaling Cluster Nodes 37 Auto Discovery of Memcached Nodes 38 Cluster Reconfiguration Events from Amazon SNS 39 Conclusion 40 Contributors 41 Document Revisions 41 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract Inmemory caching improves application performance by storing frequently accessed data items in memory so that they can be retrieved without acc ess to the primary data store Properly leveraging caching can result in an application that not only performs better but also costs less at scale Amazon ElastiCache is a managed service that reduces the administrative burden of deploying an in memory ca che in the cloud Beyond caching an in memory data layer also enables advanced use cases such as analytics and recommendation engines This whitepaper lays out common ElastiCache design patterns performance tuning tips and important operational conside rations to get the most out of an in memory layer This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides 
Introduction

An effective caching strategy is perhaps the single biggest factor in creating an app that performs well at scale. A brief look at the largest web, gaming, and mobile apps reveals that all apps at significant scale have a considerable investment in caching. Despite this, many developers fail to exploit caching to its full potential. This oversight can result in running larger database and application instances than needed. Not only does this approach decrease performance and add cost, but it also limits your ability to scale.

The in-memory caching provided by Amazon ElastiCache improves application performance by storing critical pieces of data in memory for fast access. You can use this caching to significantly improve latency and throughput for many read-heavy application workloads, such as social networking, gaming, media sharing, and Q&A portals. Cached information can include the results of database queries, computationally intensive calculations, or even remote API calls. In addition, compute-intensive workloads that manipulate datasets, such as recommendation engines and high-performance computing simulations, also benefit from an in-memory data layer. In these applications, very large datasets must be accessed in real time across clusters of machines that can span hundreds of nodes. Manipulating this data in a disk-based store would be a significant bottleneck for these applications.

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. Amazon ElastiCache manages the work involved in setting up an in-memory service, from provisioning the AWS resources you request to installing the software. Using Amazon ElastiCache, you can add an in-memory caching layer to your application in a matter of minutes, with a few API calls. Amazon ElastiCache integrates with other AWS services such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS), as well as deployment management solutions such as AWS CloudFormation, AWS Elastic Beanstalk, and AWS OpsWorks.

In this whitepaper, we'll walk through best practices for working with ElastiCache. We'll demonstrate common in-memory data design patterns, compare the two open-source engines that ElastiCache supports, and show how ElastiCache fits into real-world application architectures such as web apps and online games. By the end of this paper, you should have a clear grasp of which caching strategies apply to your use case, and how you can use ElastiCache to deploy an in-memory caching layer for your app.

ElastiCache Overview

The Amazon ElastiCache architecture is based on the concept of deploying one or more cache clusters for your application. After your cache cluster is up and running, the service automates common administrative tasks such as resource provisioning, failure detection and recovery, and software patching. Amazon ElastiCache provides detailed monitoring metrics associated with your cache nodes, enabling you to diagnose and react to issues very quickly. For example, you can set up thresholds and receive alarms if one of your cache nodes is overloaded with requests.

ElastiCache works with both the Redis and Memcached engines. You can launch an ElastiCache cluster by following the steps in the appropriate User Guide:
• Getting Started with Amazon ElastiCache for Redis
• Getting Started with Amazon ElastiCache for Memcached

It's important to understand that Amazon ElastiCache is not coupled to your database tier. As far as Amazon ElastiCache nodes are concerned, your application is just setting and getting keys in a slab of memory. That being the case, you can use Amazon ElastiCache with relational databases such as MySQL or Microsoft SQL Server; with NoSQL databases such as Amazon DynamoDB or MongoDB; or with no database tier at all, which is common for distributed computing applications. Amazon ElastiCache gives you the flexibility to deploy one, two, or more different cache clusters with your application, which you can use for differing types of datasets.

Alternatives to ElastiCache

In addition to using ElastiCache, you can cache data in AWS in other ways, each of which has its own pros and cons. To review some of the alternatives:

• Amazon CloudFront content delivery network (CDN): This approach is used to cache webpages, image assets, videos, and other static data at the edge, as close to end users as possible. In addition to using CloudFront with static assets, you can also place CloudFront in front of dynamic content, such as web apps. The important caveat here is that CloudFront only caches rendered page output. In web apps, games, and mobile apps, it's very common to have thousands of fragments of data, which are reused in multiple sections of the app. CloudFront is a valuable component of scaling a website, but it does not obviate the need for application caching.

• Amazon RDS Read Replicas: Some database engines, such as MySQL, support the ability to attach asynchronous read replicas. Although useful, this ability is limited to providing data in a duplicate format of the primary database. You cannot cache calculations, aggregates, or arbitrary custom keys in a replica. Also, read replicas are not as fast as in-memory caches. Read replicas are more interesting for distributing data to remote sites or apps.

• On-host caching: A simplistic approach to caching is to store data on each Amazon EC2 application instance, so that it's local to the server for fast lookup. Don't do this. First, you get no efficiency from your cache in this case. As application instances scale up, they start with an empty cache, meaning they end up hammering the data tier. Second, cache invalidation becomes a nightmare. How are you going to reliably signal 10 or 100 separate EC2 instances to delete a given cache key?
Finally, you rule out interesting use cases for in-memory caches, such as sharing data at high speed across a fleet of instances.

Let's turn our attention back to ElastiCache and how it fits into your application.

Memcached vs Redis

Amazon ElastiCache currently supports two different in-memory key-value engines. You can choose the engine you prefer when launching an ElastiCache cache cluster:

• Memcached: A widely adopted in-memory key store, and historically the gold standard of web caching. ElastiCache is protocol-compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service. Memcached is also multithreaded, meaning it makes good use of larger Amazon EC2 instance sizes with multiple cores.

• Redis: An increasingly popular open-source key-value store that supports more advanced data structures such as sorted sets, hashes, and lists. Unlike Memcached, Redis has disk persistence built in, meaning that you can use it for long-lived data. Redis also supports replication, which can be used to achieve Multi-AZ redundancy, similar to Amazon RDS.

Although both Memcached and Redis appear similar on the surface, in that they are both in-memory key stores, they are quite different in practice. Because of the replication and persistence features of Redis, ElastiCache manages Redis more as a relational database. Redis ElastiCache clusters are managed as stateful entities that include failover, similar to how Amazon RDS manages database failover.

Conversely, because Memcached is designed as a pure caching solution with no persistence, ElastiCache manages Memcached nodes as a pool that can grow and shrink, similar to an Amazon EC2 Auto Scaling group. Individual nodes are expendable, and ElastiCache provides additional capabilities here, such as automatic node replacement and Auto Discovery.

When deciding between Memcached and Redis, here are a few questions to consider:

• Is object caching your primary goal, for example to offload your database? If so, use Memcached.
• Are you interested in as simple a caching model as possible? If so, use Memcached.
• Are you planning on running large cache nodes, and require multithreaded performance with utilization of multiple cores? If so, use Memcached.
• Do you want the ability to scale your cache horizontally as you grow? If so, use Memcached.
• Does your app need to atomically increment or decrement counters? If so, use either Redis or Memcached.
• Are you looking for more advanced data types, such as lists, hashes, bit arrays, HyperLogLogs, and sets? If so, use Redis.
• Does sorting and ranking datasets in memory help you, such as with leaderboards? If so, use Redis.
• Are publish and subscribe (pub/sub) capabilities of use to your application? If so, use Redis.
• Is persistence of your key store important? If so, use Redis.
• Do you want to run in multiple AWS Availability Zones (Multi-AZ) with failover? If so, use Redis.
• Is geospatial support important to your applications? If so, use Redis.
• Is encryption and compliance to standards, such as PCI DSS, HIPAA, and FedRAMP, required for your business? If so, use Redis.
Although it's tempting to look at Redis as a more evolved Memcached due to its advanced data types and atomic operations, Memcached has a longer track record and the ability to leverage multiple CPU cores.

Because Memcached and Redis are so different in practice, we're going to address them separately in most of this paper. We will focus on using Memcached as an in-memory cache pool, and using Redis for advanced datasets such as game leaderboards and activity streams.

ElastiCache for Memcached

The primary goal of caching is typically to offload reads from your database or other primary data source. In most apps, you have hot spots of data that are regularly queried, but only updated periodically. Think of the front page of a blog or news site, or the top 100 leaderboard in an online game. In this type of case, your app can receive dozens, hundreds, or even thousands of requests for the same data before it's updated again.

Having your caching layer handle these queries has several advantages. First, it's considerably cheaper to add an in-memory cache than to scale up to a larger database cluster. Second, an in-memory cache is also easier to scale out, because it's easier to distribute an in-memory cache horizontally than a relational database. Last, a caching layer provides a request buffer in the event of a sudden spike in usage. If your app or game ends up on the front page of Reddit or the App Store, it's not unheard of to see a spike that is 10–100 times your normal application load. Even if you autoscale your application instances, a 10x request spike will likely make your database very unhappy.

Let's focus on ElastiCache for Memcached first, because it is the best fit for a caching-focused solution. We'll revisit Redis later in the paper, and weigh its advantages and disadvantages.

Architecture with ElastiCache for Memcached

When you deploy an ElastiCache Memcached cluster, it sits in your application as a separate tier alongside your database. As mentioned previously, Amazon ElastiCache does not directly communicate with your database tier, or indeed have any particular knowledge of your database. A simplified deployment for a web application looks similar to the following diagram.

[Figure: A simplified deployment for a web application]

In this architecture diagram, the Amazon EC2 application instances are in an Auto Scaling group, located behind a load balancer using Elastic Load Balancing, which distributes requests among the instances. As requests come into a given EC2 instance, that EC2 instance is responsible for communicating with ElastiCache and the database tier.

For development purposes, you can begin with a single ElastiCache node to test your application, and then scale to additional cluster nodes by modifying the ElastiCache cluster. As you add additional cache nodes, the EC2 application instances are able to distribute cache keys across multiple ElastiCache nodes. The most common practice is to use client-side sharding to distribute keys across cache nodes, which we will discuss later in this paper.
[Figure: EC2 application instances in an Auto Scaling group]

When you launch an ElastiCache cluster, you can choose the Availability Zones where the cluster lives. For best performance, you should configure your cluster to use the same Availability Zones as your application servers. To launch an ElastiCache cluster in a specific Availability Zone, make sure to specify the Preferred Zone(s) option during cache cluster creation. The Availability Zones that you specify will be where ElastiCache will launch your cache nodes. AWS recommends that you select Spread Nodes Across Zones, which tells ElastiCache to distribute cache nodes across these zones as evenly as possible. This distribution will mitigate the impact of an Availability Zone disruption on your ElastiCache nodes. The trade-off is that some of the requests from your application to ElastiCache will go to a node in a different Availability Zone, meaning latency will be slightly higher. For more details, see Creating a Cluster in the Amazon ElastiCache for Memcached User Guide.

As mentioned at the outset, ElastiCache can be coupled with a wide variety of databases. Here is an example architecture that uses Amazon DynamoDB instead of Amazon RDS and MySQL:

[Figure: Example architecture using Amazon DynamoDB instead of Amazon RDS and MySQL]

This combination of DynamoDB and ElastiCache is very popular with mobile and game companies, because DynamoDB allows for higher write throughput at lower cost than traditional relational databases. In addition, DynamoDB uses a key-value access pattern similar to ElastiCache, which also simplifies the programming model. Instead of using relational SQL for the primary database but then key-value patterns for the cache, both the primary database and cache can be programmed similarly. In this architecture pattern, DynamoDB remains the source of truth for data, but application reads are offloaded to ElastiCache for a speed boost.

Selecting the Right Cache Node Size

ElastiCache supports a variety of cache node types. We recommend choosing a cache node from the M5 or R5 families, because the newest node types support the latest-generation CPUs and networking capabilities. These instance families can deliver up to 25 Gbps of aggregate network bandwidth with enhanced networking based on the Elastic Network Adapter (ENA), and over 600 GiB of memory. The R5 node types provide 5% more memory per vCPU and a 10% price-per-GiB improvement over R4 node types. In addition, R5 node types deliver a roughly 20% CPU performance improvement over R4 node types.

If you don't know how much capacity you need, AWS recommends starting with one cache.m5.large node. Use the ElastiCache metrics published to CloudWatch to monitor memory usage, CPU utilization, and the cache hit rate. If your cluster does not have the desired hit rate, or you notice that keys are being evicted too often, choose another node type with more CPU and memory capacity. For production and large workloads, the R5 nodes typically provide the best performance and memory cost value.
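If you manage your infrastructure programmatically, the following sketch uses boto3 (the AWS SDK for Python) to launch a small Memcached cluster spread across Availability Zones on cache.m5.large nodes. The cluster ID, subnet group, and security group ID are hypothetical placeholders; the console or an infrastructure-as-code tool works just as well.

import boto3

elasticache = boto3.client("elasticache", region_name="us-west-2")

# Launch a three-node Memcached cluster spread across Availability Zones.
response = elasticache.create_cache_cluster(
    CacheClusterId="my-cache",                    # placeholder cluster name
    Engine="memcached",
    CacheNodeType="cache.m5.large",
    NumCacheNodes=3,
    AZMode="cross-az",                            # spread nodes across zones
    CacheSubnetGroupName="my-cache-subnets",      # placeholder subnet group
    SecurityGroupIds=["sg-0123456789abcdef0"],    # placeholder security group
    Port=11211,
)
print(response["CacheCluster"]["CacheClusterStatus"])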
You can get an approximate estimate of the amount of cache memory you'll need by multiplying the size of items you want to cache by the number of items you want to keep cached at once. Unfortunately, calculating the size of your cached items can be trickier than it sounds. You can arrive at a slight overestimate by serializing your cached items and then counting characters. Here's an example that flattens a Ruby object to JSON, counts the number of characters, and then multiplies by 2, because there are typically 2 bytes per character:

irb(main):010:0> user = User.find(4)
irb(main):011:0> user.to_json.size * 2
=> 580

In addition to the size of your data, Memcached adds approximately 50–60 bytes of internal bookkeeping data to each element. The cache key also consumes space, up to 250 characters at 2 bytes each. In this example, it's probably safest to overestimate a little and guess 1–2 KB per cached object. Keep in mind that this approach is just for illustration purposes. Your cached objects can be much larger if you are caching rendered page fragments, or if you use a serialization library that expands strings.

Because Amazon ElastiCache is a pay-as-you-go service, make your best guess at the node instance size and then adjust after getting some real-world data. Make sure that your application is set up for consistent hashing, which will enable you to add additional Memcached nodes to scale your in-memory layer horizontally. For additional tips, see Choosing Your Node Size in the Amazon ElastiCache for Memcached User Guide.
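The same back-of-the-envelope estimate is easy to script in Python. The sample record, per-item overhead, and item count below are made-up numbers for illustration only; substitute a representative object from your own application.

import json

# Serialize a representative item, add rough per-item overhead (bookkeeping
# plus a worst-case key), and multiply by the expected number of cached items.
sample_item = {"id": 4, "name": "Nate", "email": "nate@example.com"}

item_bytes = len(json.dumps(sample_item).encode("utf-8"))
overhead_bytes = 60 + 250          # ~50-60 bytes of bookkeeping plus key space
items_cached = 1_000_000

estimated_mib = (item_bytes + overhead_bytes) * items_cached / (1024 * 1024)
print(f"~{estimated_mib:.0f} MiB of cache memory")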
Security Groups and VPC

Like other AWS services, ElastiCache supports security groups. You can use security groups to define rules that limit access to your instances based on IP address and port. ElastiCache supports both subnet security groups in Amazon Virtual Private Cloud (Amazon VPC) and classic Amazon EC2 security groups. We strongly recommend that you deploy ElastiCache and your application in Amazon VPC, unless you have a specific need otherwise (such as for an existing application). Amazon VPC offers several advantages, including fine-grained access rules and control over private IP addressing. For an overview of how ElastiCache integrates with Amazon VPC, see Understanding ElastiCache and Amazon VPCs in the Amazon ElastiCache for Memcached User Guide.

When launching your ElastiCache cluster in a VPC, launch it in a private subnet with no public connectivity for best security. Memcached does not have any serious authentication or encryption capabilities, but Redis does support encryption. Following is a simplified version of the previous architecture diagram that includes an example VPC subnet design.

[Figure: Example VPC subnet design]

To keep your cache nodes as secure as possible, only allow access to your cache cluster from your application tier, as shown preceding. ElastiCache does not need connectivity to or from your database tier, because your database does not directly interact with ElastiCache. Only application instances that are making calls to your cache cluster need connectivity to it.

The way ElastiCache manages connectivity in Amazon VPC is through standard VPC subnets and security groups. To securely launch an ElastiCache cluster in Amazon VPC, follow these steps:

1. Create VPC private subnet(s) that will house your ElastiCache cluster, in the same VPC as the rest of your application. A given VPC subnet maps to a single Availability Zone. Given this mapping, create a private VPC subnet for each Availability Zone where you have application instances. Alternatively, you can reuse another private VPC subnet that you already have. For more information, refer to VPCs and Subnets in the Amazon Virtual Private Cloud User Guide.

2. Create a VPC security group for your new cache cluster. Make sure it is also in the same VPC as the preceding subnet. For more details, see Security Groups for Your VPC in the Amazon Virtual Private Cloud User Guide.

3. Create a single access rule for this security group, allowing inbound access on port 11211 for Memcached or on port 6379 for Redis.

4. Create an ElastiCache subnet group that contains the VPC private subnets that you created in step 1. This subnet group is how ElastiCache knows which VPC subnets to use when launching the cluster. For instructions, see Creating a Cache Subnet Group in the Amazon ElastiCache for Memcached User Guide.

5. When you launch your ElastiCache cluster, make sure to place it in the correct VPC and choose the correct ElastiCache subnet group. For instructions, see Creating a Cluster in the Amazon ElastiCache for Memcached User Guide.

A correct VPC security group for your cache cluster should look like the following. Notice the single inbound rule allowing access to the cluster from the application tier.

[Figure: VPC security group for your cache cluster]

To test connectivity from an application instance to your cache cluster in VPC, you can use netcat, a Linux command-line utility. Choose one of your cache cluster nodes and attempt to connect to the node on either port 11211 (Memcached) or port 6379 (Redis):

$ nc -z -w5 my-cache-2b.z2vq55.0001.usw2.cache.amazonaws.com 11211
$ echo $?
0

If the connection is successful, netcat will exit with status 0. If netcat appears to hang or exits with a nonzero status, check your VPC security group and subnet settings.
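If netcat isn't installed on the instance, a few lines of Python accomplish the same check. The endpoint below is a placeholder for your own node's DNS name.

import socket

# Returns True if a TCP connection to the cache node succeeds within the timeout.
def can_reach(host, port, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_reach("my-cache-2b.z2vq55.0001.usw2.cache.amazonaws.com", 11211))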
Caching Design Patterns

With an ElastiCache cluster deployed, let's now dive into how to best apply caching in your application.

How to Apply Caching

Caching can pay off for a wide range of data, but it requires a little planning. When deciding whether to cache a given piece of data, consider the following questions:

• Is it safe to use a cached value? The same piece of data can have different consistency requirements in different contexts. For example, during online checkout you need the authoritative price of an item, so caching might not be appropriate. On other pages, however, the price might be a few minutes out of date without a negative impact on users.

• Is caching effective for that data? Some applications generate access patterns that are not suitable for caching, for example sweeping through the key space of a large dataset that is changing frequently. In this case, keeping the cache up to date could offset any advantage caching could offer.

• Is the data structured well for caching? Simply caching a database record can often be enough to offer significant performance advantages. However, other times data is best cached in a format that combines multiple records together. Because caches are simple key-value stores, you might also need to cache a data record in multiple different formats, so you can access it by different attributes in the record.

You don't need to make all of these decisions up front. As you expand your usage of caching, keep these guidelines in mind when deciding whether to cache a given piece of data.

Consistent Hashing (Sharding)

In order to make use of multiple ElastiCache nodes, you need a way to efficiently spread your cache keys across your cache nodes. The naïve approach to distributing cache keys, often found in blogs, looks like this:

cache_node_list = [
  'my-cache-2a.z2vq55.0001.usw2.cache.amazonaws.com:11211',
  'my-cache-2a.z2vq55.0002.usw2.cache.amazonaws.com:11211'
]

This approach applies a hash function (such as CRC32) to the key to add some randomization, and then uses a math modulo of the number of cache nodes to distribute the key to a random node in the list. This approach is easy to understand, and most importantly for any key hashing scheme, it is deterministic, in that the same cache key always maps to the same cache node.

Unfortunately, this particular approach suffers from a fatal flaw, due to the way that modulo works. As the number of cache nodes scales up, most hash keys will get remapped to new nodes with empty caches, as a side effect of using modulo. You can calculate the number of keys that would be remapped to a new cache node by dividing the old node count by the new node count. For example, scaling from 1 to 2 nodes remaps half (½) of your cache keys; scaling from 3 to 4 nodes remaps three-quarters (¾) of your keys; and scaling from 9 to 10 nodes remaps 90 percent of your keys to empty caches.

This approach is bad for obvious reasons. Think of the scenario where you're scaling rapidly due to a spike in demand. Just at the point when your application is getting overwhelmed, you add an additional cache node to help alleviate the load. Instead, you effectively wipe 90 percent of your cache, causing a huge spike of requests to your database. Your dashboard goes red, and you start getting those alerts that nobody wants to get.

Luckily, there is a well-understood solution to this dilemma, known as consistent hashing. The theory behind consistent hashing is to create an internal hash ring with a preallocated number of partitions that can hold hash keys. As cache nodes are added and removed, they are slotted into positions on that ring. The following illustration, taken from Benjamin Erb's thesis on Concurrent Programming for Scalable Web Architectures, illustrates consistent hashing graphically.

[Figure: Consistent hashing]

The downside to consistent hashing is that there's quite a bit of math involved, at least more than a simple modulo. Basically, you preallocate a set of random integers and assign cache nodes to those random integers. Then, rather than using modulo, you find the closest integer in the ring for a given cache key, and use the cache node associated with that integer. A concise yet complete explanation can be found in the article Consistent Hashing by Tom White.
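To make the ring idea concrete, here is a small, illustrative Python sketch of a consistent hash ring. It is not how any particular client library implements it; real ketama-style clients place many more virtual points per node and use different hash functions, but the lookup mechanics are the same.

import bisect
import hashlib

class HashRing:
    """Toy consistent hash ring: each node gets many points on the ring,
    and a key maps to the first node point at or after the key's hash."""

    def __init__(self, nodes, points_per_node=100):
        self.ring = sorted(
            (self._hash(f"{node}-{i}"), node)
            for node in nodes
            for i in range(points_per_node)
        )
        self.hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def get_node(self, key):
        # Walk clockwise to the first point at or after the key's hash.
        index = bisect.bisect(self.hashes, self._hash(key)) % len(self.ring)
        return self.ring[index][1]

ring = HashRing(["cache-node-1", "cache-node-2", "cache-node-3"])
print(ring.get_node("user:17"))   # adding a node later remaps only ~1/N of keys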
Luckily, many modern client libraries include consistent hashing. Although you shouldn't need to write your own consistent hashing solution from scratch, it's important that you are aware of consistent hashing, so that you can ensure it's enabled in your client. For many libraries, it's still not the default behavior, even when supported by the library.

Client Libraries

Mature Memcached client libraries exist for all popular programming languages. Any of the following Memcached libraries will work with Amazon ElastiCache:

Language | Memcached Library
Ruby | Dalli, Dalli::ElastiCache
Python | Memcache Ring, django-elasticache, python-memcached, pylibmc
Node.js | node-memcached
PHP | ElastiCache Cluster Client, memcached
Java | ElastiCache Cluster Client, spymemcached
C#/.NET | ElastiCache Cluster Client, Enyim Memcached

For Memcached with Java, .NET, or PHP, AWS recommends using ElastiCache Clients with Auto Discovery, because they support Auto Discovery of new ElastiCache nodes as they are added to the cache cluster. For Java, this library is a simple wrapper around the popular spymemcached library that adds Auto Discovery support. For PHP, it is a wrapper around the built-in Memcached PHP library. For .NET, it is a wrapper around Enyim Memcached. Auto Discovery only works for Memcached, not Redis.

When ElastiCache repairs or replaces a cache node, the Domain Name Service (DNS) name of the cache node will remain the same, meaning your application doesn't need to use Auto Discovery to deal with common failures. You only need Auto Discovery support if you dynamically scale the size of your cache cluster on the fly, while your application is running. Dynamic scaling is only required if your application load fluctuates significantly. For more details, see Automatically Identify Nodes in your Memcached Cluster in the Amazon ElastiCache for Memcached User Guide.

As mentioned, you should choose a client library that includes native support for consistent hashing. Many of the libraries in the preceding table support consistent hashing, but we recommend that you check the documentation, because this support can change over time. Also, you might need to enable consistent hashing by setting an option in the client library. In PHP, for example, you need to explicitly set Memcached::OPT_LIBKETAMA_COMPATIBLE to true to enable consistent hashing:

$cache_nodes = array(
  array('my-cache-2a.z2vq55.0001.usw2.cache.amazonaws.com', 11211),
  array('my-cache-2a.z2vq55.0002.usw2.cache.amazonaws.com', 11211)
);
$memcached = new Memcached();
$memcached->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
$memcached->addServers($cache_nodes);

This code snippet tells PHP to use consistent hashing by using libketama. Otherwise, the default in PHP is to use modulo, which suffers from the drawbacks outlined preceding.
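As a rough Python parallel, pylibmc (one of the libraries in the table) also requires turning consistent hashing on explicitly, through its ketama behavior. The endpoints below are placeholders, and you should confirm the exact option names against the library's own documentation.

import pylibmc

client = pylibmc.Client(
    [
        "my-cache-2a.z2vq55.0001.usw2.cache.amazonaws.com:11211",
        "my-cache-2a.z2vq55.0002.usw2.cache.amazonaws.com:11211",
    ],
    binary=True,
    behaviors={"ketama": True, "tcp_nodelay": True},   # enable consistent hashing
)

client.set("greeting", "hello", time=300)
print(client.get("greeting"))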
Following are some common and effective caching strategies. If you've done a good amount of caching before, some of this might be familiar.

Be Lazy

Lazy caching, also called lazy population or cache-aside, is the most prevalent form of caching. Laziness should serve as the foundation of any good caching strategy. The basic idea is to populate the cache only when an object is actually requested by the application. The overall application flow goes like this:

1. Your app receives a query for data, for example the top 10 most recent news stories.
2. Your app checks the cache to see if the object is in cache.
3. If so (a cache hit), the cached object is returned, and the call flow ends.
4. If not (a cache miss), then the database is queried for the object. The cache is populated, and the object is returned.

This approach has several advantages over other methods:

• The cache only contains objects that the application actually requests, which helps keep the cache size manageable. New objects are only added to the cache as needed. You can then manage your cache memory passively, by simply letting Memcached automatically evict (delete) the least-accessed keys as your cache fills up, which it does by default.

• As new cache nodes come online, for example as your application scales up, the lazy population method will automatically add objects to the new cache nodes when the application first requests them.

• Cache expiration, which we will cover in depth later, is easily handled by simply deleting the cached object. A new object will be fetched from the database the next time it is requested.

• Lazy caching is widely understood, and many web and app frameworks include support out of the box.

Here is an example of lazy caching in Python pseudocode:

# Python
def get_user(user_id):
    # Check the cache
    record = cache.get(user_id)
    if record is None:
        # Run a DB query
        record = db.query("select * from users where id = ?", user_id)
        # Populate the cache
        cache.set(user_id, record)
    return record

# App code
user = get_user(17)

You can find libraries in many popular programming frameworks that encapsulate this pattern. But regardless of programming language, the overall approach is the same.

Apply a lazy caching strategy anywhere in your application where you have data that is going to be read often, but written infrequently. In a typical web or mobile app, for example, a user's profile rarely changes, but is accessed throughout the app. A person might only update his or her profile a few times a year, but the profile might be accessed dozens or hundreds of times a day, depending on the user. Because Memcached will automatically evict the less frequently used cache keys to free up memory, you can apply lazy caching liberally with little downside.
Write On Through

In a write-through cache, the cache is updated in real time when the database is updated. So, if a user updates his or her profile, the updated profile is also pushed into the cache. You can think of this as being proactive, to avoid unnecessary cache misses in the case that you have data that you absolutely know is going to be accessed. A good example is any type of aggregate, such as a top 100 game leaderboard, or the top 10 most popular news stories, or even recommendations. Because this data is typically updated by a specific piece of application or background job code, it's straightforward to update the cache as well.

The write-through pattern is also easy to demonstrate in pseudocode:

# Python
def save_user(user_id, values):
    # Save to DB
    record = db.query("update users where id = ?", user_id, values)
    # Push into cache
    cache.set(user_id, record)
    return record

# App code
user = save_user(17, {"name": "Nate Dogg"})

This approach has certain advantages over lazy population:

• It avoids cache misses, which can help the application perform better and feel snappier.

• It shifts any application delay to the user updating data, which maps better to user expectations. By contrast, a series of cache misses can give a random user the impression that your app is just slow.

• It simplifies cache expiration. The cache is always up to date.

However, write-through caching also has some disadvantages:

• The cache can be filled with unnecessary objects that aren't actually being accessed. Not only could this consume extra memory, but unused items can evict more useful items out of the cache.

• It can result in a lot of cache churn if certain records are updated repeatedly.

• When (not if) cache nodes fail, those objects will no longer be in the cache. You need some way to repopulate the cache of missing objects, for example by lazy population.

As might be obvious, you can combine lazy caching with write-through caching to help address these issues, because they are associated with opposite sides of the data flow. Lazy caching catches cache misses on reads, and write-through caching populates data on writes, so the two approaches complement each other. For this reason, it's often best to think of lazy caching as a foundation that you can use throughout your app, and write-through caching as a targeted optimization that you apply to specific situations.

Expiration Date

Cache expiration can become complex quickly. In our previous examples, we were only operating on a single user record. In a real app, a given page or screen often caches a whole bunch of different stuff at once: profile data, top news stories, recommendations, comments, and so forth, all of which are being updated by different methods. Unfortunately, there is no silver bullet for this problem, and cache expiration is a whole arm of computer science. But there are a few simple strategies that you can use:

• Always apply a time to live (TTL) to all of your cache keys, except those you are updating by write-through caching. You can use a long time, say hours or even days. This approach catches application bugs, where you forget to update or delete a given cache key when updating the underlying record. Eventually, the cache key will auto-expire and get refreshed.

• For rapidly changing data such as comments, leaderboards, or activity streams, rather than adding write-through caching or complex expiration logic, just set a short TTL of a few seconds. If you have a database query that is getting hammered in production, it's just a few lines of code to add a cache key with a 5-second TTL around the query. This code can keep your application up and running while you evaluate more elegant solutions.
• A newer pattern, Russian doll caching, has come out of work done by the Ruby on Rails team. In this pattern, nested records are managed with their own cache keys, and then the top-level resource is a collection of those cache keys. Say that you have a news webpage that contains users, stories, and comments. In this approach, each of those is its own cache key, and the page queries each of those keys respectively.

• When in doubt, just delete a cache key if you're not sure whether it's affected by a given database update or not. Your lazy caching foundation will refresh the key when needed. In the meantime, your database will be no worse off than it was without Memcached.

For a good overview of cache expiration and Russian doll caching, see the blog post "The performance impact of 'Russian doll' caching" in the Basecamp Signal vs Noise blog.

The Thundering Herd

Also known as dog piling, the thundering herd effect is what happens when many different application processes simultaneously request a cache key, get a cache miss, and then each hits the same database query in parallel. The more expensive this query is, the bigger impact it has on the database. If the query involved is a top 10 query that requires ranking a large dataset, the impact can be a significant hit.

One problem with adding TTLs to all of your cache keys is that it can exacerbate this problem. For example, let's say millions of people are following a popular user on your site. That user hasn't updated his profile or published any new messages, yet his profile cache still expires due to a TTL. Your database might suddenly be swamped with a series of identical queries.

TTLs aside, this effect is also common when adding a new cache node, because the new cache node's memory is empty. In both cases, the solution is to prewarm the cache by following these steps:

1. Write a script that performs the same requests that your application will. If it's a web app, this script can be a shell script that hits a set of URLs.

2. If your app is set up for lazy caching, cache misses will result in cache keys being populated, and the new cache node will fill up.

3. When you add new cache nodes, run your script before you attach the new node to your application. Because your application needs to be reconfigured to add a new node to the consistent hashing ring, insert this script as a step before triggering the app reconfiguration.

4. If you anticipate adding and removing cache nodes on a regular basis, prewarming can be automated by triggering the script to run whenever your app receives a cluster reconfiguration event through Amazon Simple Notification Service (Amazon SNS).

Finally, there is one last subtle side effect of using TTLs everywhere. If you use the same TTL length (say 60 minutes) consistently, then many of your cache keys might expire within the same time window, even after prewarming your cache. One strategy that's easy to implement is to add some randomness to your TTL:

ttl = 3600 + (rand() * 120)  /* +/- 2 minutes */

The good news is that only sites at large scale typically have to worry about this level of scaling problem. It's good to be aware of, but it's also a good problem to have.
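Putting the last two ideas together, here is a small Python sketch, in the same pseudocode spirit as the earlier examples, of lazy population with a jittered TTL. The cache argument stands in for whichever Memcached or Redis client your application uses.

import random

BASE_TTL = 3600  # one hour

def get_with_jittered_ttl(cache, key, load_from_source):
    # `cache` is any client exposing get(key) and set(key, value, ttl).
    record = cache.get(key)
    if record is None:
        record = load_from_source(key)
        # Spread expirations over a two-minute window so many keys
        # don't expire, and miss, at the same moment.
        cache.set(key, record, BASE_TTL + random.randint(0, 120))
    return record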
Cache (Almost) Everything

Finally, it might seem as if you should only cache your heavily hit database queries and expensive calculations, but that other parts of your app might not benefit from caching. In practice, in-memory caching is widely useful, because it is much faster to retrieve a flat cache key from memory than to perform even the most highly optimized database query or remote API call. Just keep in mind that cached data is stale data by definition, meaning there may be cases where it's not appropriate, such as accessing an item's price during online checkout. You can monitor statistics, like cache misses, to determine whether your cache is effective, which we will cover in Monitoring and Tuning later in the paper.

ElastiCache for Redis

So far, we've been talking about ElastiCache for Memcached as a passive component in our application: a big slab of memory in the cloud. Choosing Redis as our engine can unlock more interesting possibilities for our application, due to its higher-level data structures such as lists, hashes, sets, and sorted sets.

Deploying Redis makes use of familiar concepts such as clusters and nodes. However, Redis has a few important differences compared with Memcached:

• Redis data structures cannot be horizontally sharded. As a result, Redis ElastiCache clusters are always a single node, rather than the multiple nodes we saw with Memcached.

• Redis supports replication, both for high availability and to separate read workloads from write workloads. A given ElastiCache for Redis primary node can have one or more replica nodes. A Redis primary node can handle both reads and writes from the app. Redis replica nodes can only handle reads, similar to Amazon RDS Read Replicas.

• Because Redis supports replication, you can also fail over from the primary node to a replica in the event of failure. You can configure ElastiCache for Redis to automatically fail over by using the Multi-AZ feature.

• Redis supports persistence, including backup and recovery. However, because Redis replication is asynchronous, you cannot completely guard against data loss in the event of a failure. We will go into detail on this topic in our discussion of Multi-AZ.

Architecture with ElastiCache for Redis

As with Memcached, when you deploy an ElastiCache for Redis cluster, it is an additional tier in your app. Unlike Memcached, ElastiCache clusters for Redis only contain a single primary node. After you create the primary node, you can configure one or more replica nodes and attach them to the primary Redis node. An ElastiCache for Redis replication group consists of a primary and up to five read replicas. Redis asynchronously replicates the data from the primary to the read replicas.

Because Redis supports persistence, it is technically possible to use Redis as your only data store. In practice, customers find that a managed database such as Amazon DynamoDB or Amazon RDS is a better fit for most use cases of long-term data storage.

[Figure: Amazon ElastiCache for Redis]

ElastiCache for Redis has the concept of a primary endpoint, which is a DNS name that always points to the current Redis primary node. If a failover event occurs, the DNS entry will be updated to point to the new Redis primary node. To take advantage of this functionality, make sure to configure your Redis client so that it uses the primary endpoint DNS name to access your Redis cluster.

Keep in mind that the number of Redis replicas you attach will affect the performance of the primary node. Resist the urge to spin up lots of replicas just for durability. One or two replicas in a different Availability Zone are sufficient for availability. When scaling read throughput, monitor your application's performance and add replicas as needed. Be sure to monitor your ElastiCache cluster's performance as you add replica nodes. For more details, see Monitoring and Tuning later in this paper.
Distributing Reads and Writes

Using read replicas with Redis, you can separate your read and write workloads. This separation lets you scale reads by adding additional replicas as your application grows. In this pattern, you configure your application to send writes to the primary endpoint. Then you read from one of the replicas, as shown in the following diagram. With this approach, you can scale your read and write loads independently, so your primary node only has to deal with writes.

[Figure: Distributing reads and writes]

The main caveat to this approach is that reads can return data that is slightly out of date compared to the primary node, because Redis replication is asynchronous. For example, if you have a global counter of "total games played" that is being continuously incremented (a good fit for Redis), your master might show 51782. However, a read from a replica might only return 51775. In many cases, this is just fine. But if the counter is a basis for a crucial application state, such as the number of seconds remaining to vote on the most popular pop singer, this approach won't work.

When deciding whether data can be read from a replica, here are a few questions to consider:

• Is the value being used only for display purposes? If so, being slightly out of date is probably okay.

• Is the value a cached value, for example a page fragment? If so, again being slightly out of date is likely fine.

• Is the value being used on a screen where the user might have just edited it? In this case, showing an old value might look like an application bug.

• Is the value being used for application logic? If so, using an old value can be risky.

• Are multiple processes using the value simultaneously, such as a lock or queue? If so, the value needs to be up to date and needs to be read from the primary node.

In order to split reads and writes, you will need to create two separate Redis connection handles in your application: one pointing to the primary node, and one pointing to the read replica(s). Configure your application to write to the DNS primary endpoint, and then read from the other Redis nodes.
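As a minimal sketch of this split, the following Python example (assuming the redis-py client and placeholder endpoint names) keeps one connection for writes against the primary endpoint and another for reads against a replica or reader endpoint.

import redis

# Placeholder endpoints: substitute your cluster's primary and reader DNS names.
writer = redis.Redis(host="my-redis.abc123.ng.0001.usw2.cache.amazonaws.com", port=6379)
reader = redis.Redis(host="my-redis-ro.abc123.ng.0001.usw2.cache.amazonaws.com", port=6379)

writer.incr("games:played")          # writes always go to the primary
total = reader.get("games:played")   # reads may lag slightly behind the primary
print(total)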
Multi-AZ with Auto Failover

During certain types of planned maintenance, or in the unlikely event of ElastiCache node failure or Availability Zone failure, Amazon ElastiCache can be configured to automatically detect the failure of the primary node, select a read replica, and promote it to become the new primary. ElastiCache auto failover will then update the DNS primary endpoint with the IP address of the promoted read replica. If your application is writing to the primary node endpoint as recommended earlier, no application change will be needed.

Depending on how in sync the promoted read replica is with the primary node, the failover process can take several minutes. First, ElastiCache needs to detect the failover, then suspend writes to the primary node, and finally complete the failover to the replica. During this time, your application cannot write to the Redis ElastiCache cluster. Architecting your application to limit the impact of these types of failover events will ensure greater overall availability.

Unless you have a specific need otherwise, all production deployments should use Multi-AZ with auto failover. Keep in mind that Redis replication is asynchronous, meaning if a failover occurs, the read replica that is selected might be slightly behind the master. Bottom line: some data loss might occur if you have rapidly changing data. This effect is currently a limitation of Redis replication itself. If you have crucial data that cannot be lost (for example, transactional or purchase data), we recommend that you also store that in a durable database such as Amazon DynamoDB or Amazon RDS.

Sharding with Redis

Redis has two categories of data structures: simple keys and counters, and multidimensional sets, lists, and hashes. The bad news is the second category cannot be sharded horizontally. But the good news is that simple keys and counters can.

In the simplest case, you can treat a single Redis node just like a single Memcached node. Just like you might spin up multiple Memcached nodes, you can spin up multiple Redis clusters, and each Redis cluster is responsible for part of the sharded dataset.

[Figure: Sharding with Redis]

In your application, you'll then need to configure the Redis client to shard between those two clusters. Here is an example from the Jedis Sharded Java Client:

List<JedisShardInfo> shards = new ArrayList<JedisShardInfo>();
shards.add(new JedisShardInfo("redis-cluster1", 6379));
shards.add(new JedisShardInfo("redis-cluster2", 6379));

ShardedJedisPool pool = new ShardedJedisPool(shards);
ShardedJedis jedis = pool.getResource();
You can also combine horizontal sharding with split reads and writes. In this setup, you have two or more Redis clusters, each of which stores part of the key space. You configure your application with two separate sets of Redis handles: a write handle that points to the sharded masters, and a read handle that points to the sharded replicas. Following is an example architecture, this time with Amazon DynamoDB rather than MySQL, just to illustrate that you can use either one.

[Figure: Example architecture with DynamoDB]

For the purpose of simplification, the preceding diagram shows replicas in the same Availability Zone as the primary node. In practice, you should place the replicas in a different Availability Zone. From an application perspective, continuing with our Java example, you configure two Redis connection pools as follows:

List<JedisShardInfo> masters = new ArrayList<JedisShardInfo>();
masters.add(new JedisShardInfo("redis-masterA", 6379));
masters.add(new JedisShardInfo("redis-masterB", 6379));
ShardedJedisPool write_pool = new ShardedJedisPool(masters);
ShardedJedis write_jedis = write_pool.getResource();

List<JedisShardInfo> replicas = new ArrayList<JedisShardInfo>();
replicas.add(new JedisShardInfo("redis-replicaA", 6379));
replicas.add(new JedisShardInfo("redis-replicaB", 6379));
ShardedJedisPool read_pool = new ShardedJedisPool(replicas);
ShardedJedis read_jedis = read_pool.getResource();

In designing your application, you need to make decisions as to whether a given value can be read from the replica pool, which might be slightly out of date, or from the primary write node. Be aware that reading from the primary node will ultimately limit the throughput of your entire Redis layer, because it takes I/O away from writes.

Using multiple clusters in this fashion is the most advanced configuration of Redis possible. In practice, it is overkill for most applications. However, if you design your application so that it can leverage a split read/write Redis layer, you can apply this design in the future, if your application grows to the scale where it is needed.

Advanced Datasets with Redis

Let's briefly look at some use cases that ElastiCache for Redis can support.

Game Leaderboards

If you've played online games, you're probably familiar with top 10 leaderboards. What might not be obvious is that calculating a top n leaderboard in near real time is actually quite complex. An online game can easily have thousands of people playing concurrently, each with stats that are changing continuously. Re-sorting these users and reassigning a numeric position is computationally expensive.

Sorted sets are particularly interesting here, because they simultaneously guarantee both the uniqueness and ordering of elements. Redis sorted set commands all start with Z. When an element is inserted in a Redis sorted set, it is reranked in real time and assigned a numeric position. Here is a complete game leaderboard example in Redis:

ZADD "leaderboard" 556 "Andy"
ZADD "leaderboard" 819 "Barry"
ZADD "leaderboard" 105 "Carl"
ZADD "leaderboard" 1312 "Derek"

ZREVRANGE "leaderboard" 0 -1
1) "Derek"
2) "Barry"
3) "Andy"
4) "Carl"

ZREVRANK "leaderboard" "Barry"
1
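From application code, the same operations map directly onto a Redis client. Here is a short sketch using redis-py against a placeholder endpoint.

import redis

r = redis.Redis(host="my-redis.abc123.usw2.cache.amazonaws.com", port=6379)

# Add or update scores; ZADD overwrites an existing member's score.
r.zadd("leaderboard", {"Andy": 556, "Barry": 819, "Carl": 105, "Derek": 1312})

top_three = r.zrevrange("leaderboard", 0, 2, withscores=True)   # highest scores first
barrys_rank = r.zrevrank("leaderboard", "Barry")                # 0-based rank
print(top_three, barrys_rank)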
When a player's score is updated, the Redis command ZADD overwrites the existing value with the new score. The list is instantly re-sorted, and the player receives a new rank. For more information, refer to the Redis documentation on ZADD, ZRANGE, and ZRANK.

Recommendation Engines

Similarly, calculating recommendations for users based on other items they've liked requires very fast access to a large dataset. Some algorithms, such as Slope One, are simple and effective, but require in-memory access to every item ever rated by anyone in the system. Even if this data is kept in a relational database, it has to be loaded in memory somewhere to run the algorithm.

Redis data structures are a great fit for recommendation data. You can use Redis counters to increment or decrement the number of likes or dislikes for a given item. You can use Redis hashes to maintain a list of everyone who has liked or disliked that item, which is the type of data that Slope One requires. Here is a brief example of storing item likes and dislikes:

INCR "item:38923:likes"
HSET "item:38923:ratings" "Susan" 1
INCR "item:38923:dislikes"
HSET "item:38923:ratings" "Tommy" -1

From this simple data, not only can we use Slope One or Jaccardian similarity to recommend similar items, but we can use the same counters to display likes and dislikes in the app itself. In fact, a number of open-source projects use Redis in exactly this manner, such as Recommendify and Recommendable. In addition, because Redis supports persistence, this data can live solely in Redis. This placement eliminates the need for any data loading process, and also offloads an intensive process from your main database.

Chat and Messaging

Redis provides a lightweight pub/sub mechanism that is well suited to simple chat and messaging needs. Use cases include in-app messaging, web chat windows, online game invites and chat, and real-time comment streams (such as you might see during a live streaming event). Two basic Redis commands are involved, PUBLISH and SUBSCRIBE:

SUBSCRIBE "chat:114"
PUBLISH "chat:114" "Hello all"
["message", "chat:114", "Hello all"]
UNSUBSCRIBE "chat:114"

Unlike other Redis data structures, pub/sub messaging doesn't get persisted to disk. Redis pub/sub messages are not written as part of the RDB or AOF backup files that Redis creates. If you want to save these pub/sub messages, you will need to add them to a Redis data structure, such as a list. For more details, see Using Pub/Sub for Asynchronous Communication in the Redis Cookbook.

Also, because Redis pub/sub is not persistent, you can lose data if a cache node fails. If you're looking for a reliable topic-based messaging system, consider evaluating Amazon SNS.
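The equivalent calls from Python, again assuming redis-py and a placeholder endpoint, look like the following; a real chat app would run the subscriber loop in its own worker or thread.

import redis

r = redis.Redis(host="my-redis.abc123.usw2.cache.amazonaws.com", port=6379)

# Subscribe on a dedicated pub/sub object, then publish on the main connection.
pubsub = r.pubsub()
pubsub.subscribe("chat:114")

r.publish("chat:114", "Hello all")

for message in pubsub.listen():
    if message["type"] == "message":
        print(message["channel"], message["data"])
        break

pubsub.unsubscribe("chat:114")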
Redis does have certain advantages over other queue options, such as very fast speed, once-and-only-once delivery, and guaranteed message ordering. However, pay careful attention to ElastiCache for Redis backup and recovery options (which we will cover shortly) if you intend to use Redis as a queue. If a Redis node terminates and you have not properly configured its persistence options, you can lose the data for the items in your queue. Essentially, you need to view your queue as a type of database and treat it appropriately, rather than as a disposable cache.

Client Libraries and Consistent Hashing

As with Memcached, you can find Redis client libraries for the currently popular programming languages. Any of these will work with ElastiCache for Redis:

Language – Redis Library
Ruby – redis-rb, Redis::Objects
Python – redis-py
Node.js – node_redis, ioredis
PHP – phpredis, Predis
Java – Jedis, Lettuce, Redisson
C#/.NET – ServiceStack.Redis, StackExchange.Redis
Go – go-redis/redis, Radix, Redigo

Unlike with Memcached, it is uncommon for Redis libraries to support consistent hashing. Redis libraries rarely support consistent hashing because the advanced data types that we discussed preceding cannot simply be horizontally sharded across multiple Redis nodes. This point leads to another very important one: Redis as a technology cannot be horizontally scaled easily. Redis can only scale up to a larger node size, because its data structures must reside in a single memory image in order to perform properly. Note that Redis Cluster was first made available in Redis version 3.0. It aims to provide scale-out capability with certain data types. Redis Cluster currently only supports a subset of Redis functionality and has some important caveats about possible data loss. For more details, see the Redis Cluster Specification.

Monitoring and Tuning

Before we wrap up, let's spend some time talking about monitoring and performance tuning.

Monitoring Cache Efficiency

To begin, see the Monitoring Use with CloudWatch topic for Redis and Memcached, as well as the Which Metrics Should I Monitor? topic for Redis and Memcached, in the Amazon ElastiCache User Guide.
Both topics are excellent resources for understanding how to measure the health of your ElastiCache cluster using the metrics that ElastiCache publishes to Amazon CloudWatch. Most importantly, watch CPU usage. A consistently high CPU usage indicates that a node is overtaxed, either by too many concurrent requests or by performing dataset operations in the case of Redis.

For Redis, ElastiCache provides two different types of metrics for monitoring CPU usage: CPUUtilization and EngineCPUUtilization. Because Redis is single-threaded, you need to multiply the CPU percentage by the number of cores to get an accurate measure of CPUUtilization. For smaller node types with one or two vCPUs, use the CPUUtilization metric to monitor your workload. For larger node types with four or more vCPUs, we recommend monitoring the EngineCPUUtilization metric, which reports the percentage of usage on the Redis engine core. After Redis maxes out a single CPU core, that node is fully utilized and further scaling is needed. If your main workload is from read requests, add more replicas to distribute the read workload across the replicas and reader endpoints. If your main workload is from write requests, add more shards to distribute the write workload across more primary nodes.

In addition to CPU, here is some additional guidance for monitoring cache memory utilization. Each of these metrics is available in CloudWatch for your ElastiCache cluster:

• Evictions – Both Memcached and Redis manage cache memory internally, and when memory starts to fill up they evict (delete) unused cache keys to free space. A small number of evictions shouldn't alarm you, but a large number means that your cache is running out of space.
• CacheMisses – The number of times a key was requested but not found in the cache. This number can be fairly large if you're using lazy population as your main strategy. If this number is remaining steady, it's likely nothing to worry about. However, a large number of cache misses combined with a large eviction number can indicate that your cache is thrashing due to lack of memory.
• BytesUsedForCacheItems – This value is the actual amount of cache memory that Memcached or Redis is using. Both Memcached and Redis attempt to allocate as much system memory as possible, even if it's not used by actual cache keys. Thus, monitoring the system memory usage on a cache node doesn't tell you how full your cache actually is.
• SwapUsage – In normal usage, neither Memcached nor Redis should be performing swaps.
• CurrConnections – This is a cache engine metric representing the number of clients connected to the engine. We recommend that you determine your own alarm threshold for this metric based on your application needs. An increasing number of CurrConnections might indicate a problem with your application; you'll need to investigate the application's behavior to address this issue.

A well-tuned cache node will show the number of cache bytes used to be almost equal to the maxmemory parameter in Redis or the max_cache_memory parameter in Memcached. In steady state, most cache counters will increase, with cache hits increasing faster than misses. You also will probably see a low number of evictions. However, a rising number of evictions indicates that cache keys are getting pushed out of memory, which means you can benefit from larger cache nodes with more memory.
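These metrics can also be pulled programmatically. The following is a small sketch, assuming the boto3 SDK; the cache cluster ID, time window, and period are placeholders you would adjust for your own cluster.

# Read ElastiCache metrics from CloudWatch with boto3.
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

def last_hour_average(metric_name, cache_cluster_id="my-cache-001"):
    # Five-minute averages for the past hour from the AWS/ElastiCache namespace.
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/ElastiCache",
        MetricName=metric_name,
        Dimensions=[{"Name": "CacheClusterId", "Value": cache_cluster_id}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    return [point["Average"] for point in response["Datapoints"]]

print(last_hour_average("EngineCPUUtilization"))
print(last_hour_average("Evictions"))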
The one exception to the evictions rule is if you follow a strict definition of Russian doll caching, which says that you should never cause cache items to expire, but instead let Memcached and Redis evict unused keys as needed. If you follow this approach, keep a close watch on cache misses and bytes used to detect potential problems.

Watching for Hot Spots

In general, if you are using consistent hashing to distribute cache keys across your cache nodes, your access patterns should be fairly even across nodes. However, you still need to watch out for hot spots, which are nodes in your cache that receive higher load than other nodes. This pattern is caused by hot keys, which are cache keys that are accessed more frequently than others. Think of a social website where you have some users that might be 10,000 times more popular than an average user. That user's cache keys will be accessed much more often, which can put an uneven load onto the cache nodes that house that user's keys. If you see uneven CPU usage among your cache nodes, you might have a hot spot. This pattern often appears as one cache node having a significantly higher operation count than other nodes.

One way to confirm this is by keeping a counter in your application of your cache key gets and puts. You can push these as custom metrics into CloudWatch or another monitoring service. Don't do this unless you suspect a hot spot, however, because logging every key access will decrease the overall performance of your application.

In the most common case, a few hot keys will not necessarily create any significant hot spot issues. If you have a few hot keys on each of your cache nodes, then those hot keys are themselves evenly distributed and are producing an even load on your cache nodes. If you have three cache nodes and each of them has a few hot keys, then you can continue sizing your cache cluster as if those hot keys did not exist. In practice, even a well-designed application will have some degree of unevenness in cache key access.

In extreme cases, a single hot cache key can create a hot spot that overwhelms a single cache node. In this case, having good metrics about your cache, especially your most popular cache keys, is crucial to designing a solution. One solution is to create a mapping table that remaps very hot keys to a separate set of cache nodes. Although this approach provides a quick fix, you will still face the challenge of scaling those new cache nodes. Another solution is to add a secondary layer of smaller caches in front of your main nodes to act as a buffer. This approach gives you more flexibility, but introduces additional latency into your caching tier.

The good news is that these concerns only hit applications of a significant scale. We recommend being aware of this potential issue and monitoring for it, but not spending time trying to engineer around it up front. Hot spots are a fast-moving area of computer science research, and there is no one-size-fits-all solution. As always, our team of Solutions Architects is available to work with you to address these issues if you encounter them. For more research on this topic, refer to papers such as Relieving Hot Spots on the World Wide Web and Characterizing Load Imbalance in Real-World Networked Caches.
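If you do decide to instrument key access as described earlier in this section, the sketch below shows one way to do it with sampled custom CloudWatch metrics, assuming the boto3 SDK. The namespace, metric name, and sample rate are illustrative, and per-key dimensions should be used sparingly to keep metric cardinality and cost low.

# Sampled custom metric for cache key access counts.
import random
import boto3

cloudwatch = boto3.client("cloudwatch")

def record_key_access(key, sample_rate=0.01):
    # Sample roughly 1% of accesses so instrumentation stays cheap.
    if random.random() > sample_rate:
        return
    cloudwatch.put_metric_data(
        Namespace="MyApp/CacheKeys",
        MetricData=[{
            "MetricName": "KeyAccess",
            "Dimensions": [{"Name": "CacheKey", "Value": key}],
            "Value": 1.0,
            "Unit": "Count",
        }],
    )

# Call this next to your cache get/put wrappers, for example:
record_key_access("user:12345:profile")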
Memcached Memory Optimization

Memcached uses a slab allocator, which means that it allocates memory in fixed chunks and then manages those chunks internally. Using this approach, Memcached can be more efficient and predictable in its memory access patterns than if it used the system malloc(). The downside of the Memcached slab allocator is that memory chunks are rigidly allocated once and cannot be changed later. This approach means that if you choose the wrong number of the wrong size slabs, you might run out of Memcached chunks while still having plenty of system memory available.

When you launch an ElastiCache cluster, the max_cache_memory parameter is set for you automatically, along with several other parameters. For a list of default values, see Memcached Specific Parameters in the Amazon ElastiCache for Memcached User Guide. The key parameters to keep in mind are chunk_size and chunk_size_growth_factor, which work together to control how memory chunks are allocated.

Redis Memory Optimization

Redis has a good write-up on memory optimization that can come in handy for advanced use cases. Redis exposes a number of Redis configuration variables that will affect how Redis balances CPU and memory for a given dataset. These directives can be used with ElastiCache for Redis as well.

Redis Backup and Restore

Redis clusters support persistence by using backup and restore. When Redis backup and restore is enabled, ElastiCache can automatically take snapshots of your Redis cluster and save them to Amazon Simple Storage Service (Amazon S3). The Amazon ElastiCache User Guide includes excellent coverage of this function in the topic ElastiCache for Redis Backup and Restore.

Because of the way Redis backups are implemented in the Redis engine itself, you need to have more memory available than your dataset consumes. This requirement is because Redis forks a background process that writes the backup data. To do so, it makes a copy of your data using Linux copy-on-write semantics. If your data is changing rapidly, this approach means that those data segments will be copied, consuming additional memory. For more details, refer to Amazon ElastiCache Backup Best Practices.

For production use, we strongly recommend that you always enable Redis backups and retain them for a minimum of 7 days. In practice, retaining them for 14 or 30 days will provide better safety in the event of an application bug that ends up corrupting data.

Even if you plan to use Redis primarily as a performance optimization or caching layer, persisting the data means you can prewarm a new Redis node, which avoids the thundering herd issue that we discussed earlier. To create a new Redis cluster from a backup snapshot, see Seeding a New Cluster with an Externally Created Backup in the Amazon ElastiCache for Redis User Guide.

You can also use a Redis snapshot to scale up to a larger Amazon EC2 instance type. To do so, follow this process (a scripted sketch follows the list):

1. Suspend writes to your existing ElastiCache cluster. Your application can continue to do reads.
2. Take a snapshot by following the procedure in the Creating a Manual Snapshot section in the Amazon ElastiCache for Redis User Guide. Give it a distinctive name that you will remember.
3. Create a new ElastiCache Redis cluster, and specify the snapshot you took preceding to seed it.
4. Once the new ElastiCache cluster is online, reconfigure your application to start writing to the new cluster.
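The snapshot and seeding steps can be scripted. The following is a rough sketch, assuming the boto3 SDK; the cluster IDs, snapshot name, and node type are placeholders, waiting and error handling are omitted, and you should confirm the exact parameters against the ElastiCache API reference for your cluster type.

# Sketch: snapshot an existing Redis cluster and seed a larger one from it.
import boto3

elasticache = boto3.client("elasticache")

# Step 2: take a manual snapshot of the existing cluster.
elasticache.create_snapshot(
    CacheClusterId="my-redis-001",
    SnapshotName="my-redis-scaleup",
)

# Poll describe_snapshots until the snapshot status is "available"
# before moving on (loop omitted for brevity).

# Step 3: create a new, larger cluster seeded from that snapshot.
elasticache.create_cache_cluster(
    CacheClusterId="my-redis-002",
    CacheNodeType="cache.r6g.xlarge",
    Engine="redis",
    NumCacheNodes=1,
    SnapshotName="my-redis-scaleup",
)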
Currently, this process will interrupt your application's ability to write data into Redis. If you have writes that are only going into Redis and that cannot be suspended, you can put those into Amazon SQS while you are resizing your ElastiCache cluster. Then, once your new ElastiCache Redis cluster is ready, you can run a script that pulls those records off Amazon SQS and writes them to your new Redis cluster.

Cluster Scaling and Auto Discovery

Scaling your application in response to changes in demand is one of the key benefits of working with AWS. Many customers find that configuring their client with a list of node DNS endpoints for ElastiCache works perfectly fine. But let's look at how to scale your ElastiCache Memcached cluster while your application is running, and how to set up your application to detect changes to your cache layer dynamically.

Auto Scaling Cluster Nodes

Amazon ElastiCache does not currently support using Auto Scaling to scale the number of cache nodes in a cluster. To change the number of cache nodes, you can use either the AWS Management Console or the AWS API to modify the cluster. For more information, refer to Modifying an ElastiCache Cache Cluster in the Amazon ElastiCache for Memcached User Guide.

In practice, you usually don't want to regularly change the number of cache nodes in your Memcached cluster. Any change to your cache nodes will result in some percentage of cache keys being remapped to new (empty) nodes, which means a performance impact to your application. Even with consistent hashing, you will see an impact on your application when adding or removing nodes.

Auto Discovery of Memcached Nodes

The ElastiCache Clients with Auto Discovery for Java, .NET, and PHP support Auto Discovery of new ElastiCache Memcached nodes. For Ruby, the open source library dalli-elasticache provides auto-discovery support, and django-elasticache is available for Python Django. In other languages, you'll need to implement auto-discovery yourself. Luckily, this implementation is very easy.

The overall Auto Discovery mechanism is outlined in the How Auto Discovery Works topic in the Amazon ElastiCache for Memcached User Guide. Basically, ElastiCache adds a special Memcached configuration variable called cluster that contains the DNS names of the current cache nodes. To access this list, your application connects to your cache cluster configuration endpoint, which is a hostname ending in cfg.<region>.cache.amazonaws.com. After you retrieve the list of cache node host names, your application configures its Memcached client to connect to the list of cache nodes, using consistent hashing to balance across them. Here is a complete working example in Ruby:

require 'socket'
require 'dalli'

socket = TCPSocket.new('mycache-2a.z2vq55.cfg.usw2.cache.amazonaws.com', 11211)
socket.puts("config get cluster")
header = socket.gets
version = socket.gets
nodelist = socket.gets.chomp.split(/\s+/).map { |l| l.split('|').first }
socket.close

# Configure Memcached client
cache = Dalli::Client.new(nodelist)

Using Linux utilities, you can even do this from the command line using netcat, which can be useful in a script:

ec2-host$ echo "config get cluster" | \
  nc mycache-2a.z2vq55.cfg.usw2.cache.amazonaws.com 11211 | \
  grep 'cache.amazonaws.com' | tr ' ' '\n' | cut -d'|' -f1
mycache-2a.z2vq55.0001.usw2.cache.amazonaws.com
mycache-2a.z2vq55.0002.usw2.cache.amazonaws.com

Using Auto Discovery, your Amazon EC2 application servers can locate Memcached nodes as they are added to a cache cluster. However, once your application has an open socket to a Memcached instance, it won't necessarily detect any changes to the cache node list that might happen later. To make this a complete solution, two more things are needed:

• The ability to scale cache nodes as needed
• The ability to trigger an application reconfiguration on the fly

Cluster Reconfiguration Events from Amazon SNS

Amazon ElastiCache publishes a number of notifications to Amazon SNS when a cluster change happens, such as a configuration change or replacement of a node. Because these notifications are sent through Amazon SNS, you can route them to multiple endpoints, including email, Amazon SQS, or other Amazon EC2 instances. For a complete list of Amazon SNS events that ElastiCache publishes, see the Event Notifications and Amazon SNS topic for Redis or Memcached in the Amazon ElastiCache User Guide.

If you want your application to dynamically detect nodes that are being added or removed, you can use these notifications as follows. Note that the following process is not required to deal with cache node failures. If a cache node fails and is replaced by ElastiCache, the DNS name will remain the same. Most client libraries should automatically reconnect once the cache node becomes available again.

The two most interesting events that ElastiCache publishes, at least for the purposes of scaling our cache, are ElastiCache:AddCacheNodeComplete and ElastiCache:RemoveCacheNodeComplete. These events are published when cache nodes are added or removed from the cluster. By listening for these events, your application can dynamically reconfigure itself to detect the new cache nodes. The basic process for using Amazon SNS with your application is as follows:

1. Create an Amazon SNS topic for your ElastiCache alerts, as described in Managing ElastiCache Amazon SNS Notifications in the Amazon ElastiCache User Guide for Redis or Memcached.
2. Modify your application code to subscribe to this Amazon SNS topic. All of your application instances will listen to the same topic. See the blog post Receiving Amazon SNS Messages in PHP for details and code examples.
3. When a cache node is added or removed, you will receive a corresponding Amazon SNS message. At that point, your application needs to be able to rerun the Auto Discovery code we discussed preceding to get the updated cache node list.
4. After your application has the new list of cache nodes, it also reconfigures its Memcached client accordingly.

Again, this workflow is not needed for cache node recovery; it is only needed if nodes are added or removed dynamically and you want your application to dynamically detect them. Otherwise, you can simply add the new cache nodes to your application's configuration and restart your application servers. To accomplish this with zero downtime to your app, you can leverage solutions such as zero-downtime deploys with Elastic Beanstalk.
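For languages without a prebuilt Auto Discovery client, steps 3 and 4 can be sketched in a few lines. The following example uses only Python's standard library and re-runs the same config get cluster exchange shown in the Ruby example when an interesting event arrives. The configuration endpoint is a placeholder, and the assumed SNS message format (a small JSON document keyed by event type) should be verified against your actual ElastiCache notifications.

import json
import socket

CONFIG_ENDPOINT = ("mycache-2a.z2vq55.cfg.usw2.cache.amazonaws.com", 11211)
INTERESTING_EVENTS = ("ElastiCache:AddCacheNodeComplete",
                      "ElastiCache:RemoveCacheNodeComplete")

def discover_nodes():
    # Same "config get cluster" exchange as the Ruby example above.
    sock = socket.create_connection(CONFIG_ENDPOINT)
    sock.sendall(b"config get cluster\r\n")
    reader = sock.makefile("r")
    reader.readline()                      # CONFIG header line
    reader.readline()                      # configuration version number
    nodes = [entry.split("|")[0] for entry in reader.readline().split()]
    sock.close()
    return nodes

def handle_sns_message(raw_message):
    # Assumed message shape; confirm against the notifications you receive.
    event = json.loads(raw_message)
    if any(key in INTERESTING_EVENTS for key in event):
        nodes = discover_nodes()
        print("Cache nodes changed; rebuild the Memcached client with:", nodes)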
Conclusion

Proper use of in-memory caching can result in an application that performs better and costs less at scale. Amazon ElastiCache greatly simplifies the process of deploying an in-memory cache in the cloud. By following the steps outlined in this paper, you can easily deploy an ElastiCache cluster running either Memcached or Redis on AWS, and then use the caching strategies we discussed to increase the performance and resiliency of your application. You can change the configuration of ElastiCache to add, remove, or resize nodes as your application's needs change over time, in order to get the most out of your in-memory data tier.

Contributors

Contributors to this document include:

• Marcelo França, Sr. Partner Solutions Architect, Amazon Web Services
• Nate Wiger, Amazon Web Services
• Rajan Timalsina, Cloud Support Engineer, Amazon Web Services

Document Revisions

March 30, 2021 – Reviewed for technical accuracy
July 2019 – Corrected broken links, added links to libraries, and incorporated minor text updates throughout
May 2015 – First publication
Practicing_Continuous_Integration_and_Continuous_Delivery_on_AWS
This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/practicing-continuous-integration-continuous-delivery/welcome.html

Practicing Continuous Integration and Continuous Delivery on AWS
Accelerating Software Delivery with DevOps

First Published June 1, 2017
Updated October 27, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

The challenge of software delivery 1
What is continuous integration and continuous delivery/deployment? 2
Continuous integration 2
Continuous delivery and deployment 2
Continuous delivery is not continuous deployment 3
Benefits of continuous delivery 3
Implementing continuous integration and continuous delivery 4
A pathway to continuous integration/continuous delivery 5
Teams 9
Testing stages in continuous integration and continuous delivery 10
Building the pipeline 13
Pipeline integration with AWS CodeBuild 22
Pipeline integration with Jenkins 23
Deployment methods 24
All at once (in place deployment) 26
Rolling deployment 26
Immutable and blue/green deployments 26
Database schema changes 27
Summary of best practices 28
Conclusion 29
Further reading 29
Contributors 30
Document revisions 30

Abstract

This paper explains the features and benefits of using continuous integration and continuous delivery (CI/CD) along with Amazon Web Services (AWS) tooling in your software development environment. Continuous integration and continuous delivery are best practices and a vital part of a DevOps initiative.

The challenge of software delivery

Enterprises today face the challenges of rapidly changing competitive landscapes, evolving security requirements, and performance scalability. Enterprises must bridge the gap between operations stability and rapid feature development. Continuous integration and continuous delivery (CI/CD) are practices that enable rapid software changes while maintaining system stability and security.
Amazon realized early on that the business needs of delivering features for Amazon.com retail customers, Amazon subsidiaries, and Amazon Web Services (AWS) would require new and innovative ways of delivering software. At the scale of a company like Amazon, thousands of independent software teams must be able to work in parallel to deliver software quickly, securely, reliably, and with zero tolerance for outages. By learning how to deliver software at high velocity, Amazon and other forward-thinking organizations pioneered DevOps.

DevOps is a combination of cultural philosophies, practices, and tools that increase an organization's ability to deliver applications and services at high velocity. Using DevOps principles, organizations can evolve and improve products at a faster pace than organizations that use traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market. Some of these principles, such as two-pizza teams and microservices/service-oriented architecture (SOA), are out of the scope of this whitepaper.

This whitepaper discusses the CI/CD capability that Amazon has built and continuously improved. CI/CD is key to delivering software features rapidly and reliably. AWS now offers these CI/CD capabilities as a set of developer services: AWS CodeStar, AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, AWS CodeDeploy, and AWS CodeArtifact. Developers and IT operations professionals practicing DevOps can use these services to rapidly, safely, and securely deliver software. Together, they help you securely store and apply version control to your application's source code. You can use AWS CodeStar to rapidly orchestrate an end-to-end software release workflow using these services. For an existing environment, CodePipeline has the flexibility to integrate each service independently with your existing tools. These are highly available, easily integrated services that can be accessed through the AWS Management Console, AWS application programming interfaces (APIs), and AWS software development kits (SDKs) like any other AWS service.

What is continuous integration and continuous delivery/deployment?
This section discusses the practices of continuous integration and continuous delivery, and explains the difference between continuous delivery and continuous deployment.

Continuous integration

Continuous integration (CI) is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. CI most often refers to the build or integration stage of the software release process, and requires both an automation component (for example, a CI or build service) and a cultural component (for example, learning to integrate frequently). The key goals of CI are to find and address bugs more quickly, improve software quality, and reduce the time it takes to validate and release new software updates.

Continuous integration focuses on smaller commits and smaller code changes to integrate. A developer commits code at regular intervals, at minimum once a day. The developer pulls code from the code repository to ensure the code on the local host is merged before pushing to the build server. At this stage, the build server runs the various tests and either accepts or rejects the code commit.

The basic challenges of implementing CI include more frequent commits to the common codebase, maintaining a single source code repository, automating builds, and automating testing. Additional challenges include testing in similar environments to production, providing visibility of the process to the team, and allowing developers to easily obtain any version of the application.

Continuous delivery and deployment

Continuous delivery (CD) is a software development practice where code changes are automatically built, tested, and prepared for production release. It expands on continuous integration by deploying all code changes to a testing environment, a production environment, or both after the build stage has been completed. Continuous delivery can be fully automated with a workflow process, or partially automated with manual steps at critical points. When continuous delivery is properly implemented, developers always have a deployment-ready build artifact that has passed through a standardized test process.

With continuous deployment, revisions are deployed to a production environment automatically, without explicit approval from a developer, making the entire software release process automated. This, in turn, allows for a continuous customer feedback loop early in the product lifecycle.

Continuous delivery is not continuous deployment

One misconception about continuous delivery is that it means every change committed is applied to production immediately after passing automated tests. However, the point of continuous delivery is not to apply every change to production immediately, but to ensure that every change is ready to go to production. Before deploying a change to production, you can implement a decision process to ensure that the production deployment is authorized and audited. This decision can be made by a person and then executed by the tooling.

Using continuous delivery, the decision to go live becomes a business decision, not a technical one. The technical validation happens on every commit. Rolling out a change to production is not a disruptive event.
Deployment doesn't require the technical team to stop working on the next set of changes, and it doesn't need a project plan, handover documentation, or a maintenance window. Deployment becomes a repeatable process that has been carried out and proven multiple times in testing environments.

Benefits of continuous delivery

CD provides numerous benefits for your software development team, including automating the process, improving developer productivity, improving code quality, and delivering updates to your customers faster.

Automate the software release process

CD provides a method for your team to check in code that is automatically built, tested, and prepared for release to production, so that your software delivery is efficient, resilient, rapid, and secure.

Improve developer productivity

CD practices help your team's productivity by freeing developers from manual tasks, untangling complex dependencies, and returning focus to delivering new features in software.
refer to the following figure ) where new code is submitted on one end tested over a series of stages (source build staging and production) and then published as production ready code If your organization is new to CI/CD it can approach this pipeline in an iterative fashion This means that you should start small and iterate at each stage so that you can understand and develop your code in a way that will help your organization grow CI/CD pipeline Each stage of the CI/CD pipeline is structured as a logical unit in the delivery process In addition each stage acts as a gate that vets a certain aspe ct of the code As the code progresses through the pipeline the assumption is that the quality of the code is higher in the later stages because more aspects of it continue to be verified Problems uncovered in an early stage stop the code from progressin g through the pipeline Results from the tests are immediately sent to the team and all further builds and releases are stopped if software does not pass the stage These stages are suggestions You can adapt the stages based on your business need Some s tages can be repeated for multiple types of testing security and performance Depending on the complexity of your project and the structure of your teams some stages can be repeated several times at different levels For example the end product of one team can become a dependency in the project of the next team This means that the first team’s end product is subsequently staged as an artifact in the next team’s project The presence of a CI/CD pipeline will have a large impact on maturing the capabilit ies of your organization The organization should start with small steps and not try to build a fully mature pipeline with multiple environments many testing phases and automation in all stages at the start Keep in mind that even organizations that hav e highly mature CI/CD environments still need to continuously improve their pipelines This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Integration and Continuous Delivery on AWS 6 Building a CI/CD enabled organization is a journey and there are many destinations along the way The next section discuss es a possible pathway that your organization could take starting with continuous integration through the levels of continuous delivery Continuous integration Continuous integration —source and build The first phase in the CI/CD journey is to develop maturity in continuous integration You should ma ke sure that all of the developers regularly commit their code to a central repository (such as one hosted in CodeCommit or GitHub) and merge all changes to a release branch for the application No developer should be holding code in isolation If a featur e branch is needed for a certain period of time it should be kept up to date by merging from upstream as often as possible Frequent commits and merges with complete units of work are recommended for the team to develop discipline and are encouraged by th e process A developer who merges code early and often will likely have fewer integration issues down the road You should also encourage developers to create unit tests as early as possible for their applications and to run these tests before pushing the code to the central repository Errors caught early in the software development process are the cheapest and easiest to fix When the code is 
pushed to a branch in a source code repository a workflow engine monitoring that branch will send a command to a builder tool to build the code and run the unit tests in a controlled environment The build process should be sized appropriately to handle all activities including pushes and tests that might happen during the commit stage for fast feedback Other qua lity checks such as unit test coverage style check and static analysis can happen at this stage as well Finally the builder tool creates one or more binary builds and other artifacts like images stylesheets and documents for the application This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Int egration and Continuous Delivery on AWS 7 Conti nuous delivery : creating a staging environment Continuous delivery —staging Continuous delivery (CD) is the next phase and entails deploying the application code in a staging environment which is a replica of the production stack and running more functional tests The staging environment could be a static environment premade for testing or you could provision and configure a dynamic environment with committed infrastructure and configuration code for testing and deploying the application code Continuous delivery : creating a production environment Continuous delivery —producti on In the deployment/delivery pipeline sequence after the staging environment is the production environment which is also built using infrastructure as code (IaC) This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Integration and Continuous Delivery on AWS 8 Continuous deployment Continuous deployment The final phase in the CI/CD deployment pip eline is continuous deployment which may include full automation of the entire software release process including deployment to the production environment In a fully mature CI/CD environment the path to the production environment is fully automated whi ch allows code to be deployed with high confidence Maturity and beyond As your organization matures it will continue to develop the CI/CD model to include more of the following improvements: • More staging environments for specific performance compliance security and user interface (UI) tests • Unit tests of infrastructure and configuration code along with the application code • Integration with other systems and processes such as code review issue tracking and event notification • Integration with database schema migration (if applicable) • Additional steps for auditing and business approval Even the most mature organizations that have complex multi environment CI/CD pipelines continue to look for improvements DevOps is a journey not a destination Feedback about the pipeline is continuously collected and improvements in speed scale security and reliability are achieved as a collaboration between the different parts of the development teams This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Int egration and Continuous Delivery on AWS 9 Teams AWS recommends organizing three developer teams for impleme nting a CI/CD 
environment: an application team an infrastructure team and a tools team ( refer to the following figure ) This organization represents a set of best practices that have been developed and applied in fast moving startups large enterprise or ganizations and in Amazon itself The teams should be no larger than groups that two pizzas can feed or about 10 12 people This follows the communication rule that meaningful conversations hit limits as group sizes increase and lines of communication mu ltiply Application infrastructure and tools teams Application team The application team creates the application Application developers own the backlog stories and unit tests and they develop features based on a specified application target This team’s organizational goal is to minimize the time these developers spend on non core application tasks In addition to having functional programming skills in the application language the application team should have platform skills and an u nderstanding of system configuration This will enable them to focus solely on developing features and hardening the application This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Integration and Continuous Delivery on AWS 10 Infrastructure team The infrastructure team writes the code that both creates and configures the infrastructure needed to run the application This team might use native AWS tools such as AWS CloudFormation or generic tools such as Chef Puppet or Ansible The infrastructure team is responsible for specifying what resources are needed and it works closely with the applic ation team The infrastructure team might consist of only one or two people for a small application The team should have skills in infrastructure provisioning methods such as AWS CloudFormation or HashiCorp Terraform The team should also develop configu ration automation skills with tools such as Chef Ansible Puppet or Salt Tools team The tools team builds and manages the CI/CD pipeline They are responsible for the infrastructure and tools that make up the pipeline They are not part of the two pizza team; however they create a tool that is used by the application and infrastructure teams in the organization The organization needs to continuously mature its tools team so that the tools team stays one step ahead of the maturing application and infrastructure teams The tools team must be skilled in building and integrating all parts of the CI/CD pipeline This includes building source control repositories workflow engines build environments testing frameworks and artifact repositories This team may choose to implement software such as AWS CodeStar AWS CodePipeline AWS CodeCommit AWS CodeDeploy AWS CodeBuild and AWS CodeArtifact along with Jenkins GitHub Artifactory TeamCity and other similar tools Some organizations might call this a DevOps team but AWS discourage s this and instead encourage s thinking of DevOps as the sum of the people processes and tools in software delivery Testing stages in continuous integration and continuous delivery The three CI/CD teams should incorporate te sting into the software development lifecycle at the different stages of the CI/CD pipeline Overall testing should start as early as possible The following testing pyrami d is a concept provided by Mike Cohn in the book Succeeding with Agile It shows the various software tes ts in relation to 
their cost and the speed at which they run This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Int egration and Continuous Delivery on AWS 11 CI/CD testing pyramid Unit tests are on the bottom of the pyramid They are both the fastest to run and the least expensive Therefore unit tests should make up the bulk of your testing strategy A good rule of thumb is about 70 percent Unit tests should have near complete code coverage because bugs caught in this phase can be fixed quickly and cheaply Service component and integration tests are above unit tests on the pyramid These tests require detailed environments and therefore are more costly in infrastructure requirements and slower to run Performance and compliance tests are the next level They require production quality environments and are more expensive yet UI an d user acceptance tests are at the top of the pyramid and require production quality environments as well All of these tests are part of a complete strategy to assure high quality software However for speed of development emphasis is on the number of t ests and the coverage in the bottom half of the pyramid The following sections discuss the CI/CD stage s Setting up the source At the beginning of the project it’s essential to set up a source where you can store your raw code and configuration and sch ema changes In the source stage choose a source code repository such as one hosted in GitHub or AWS CodeCommit This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Integration and Continuous Delivery on AWS 12 Setting up and running builds Build automation is essential to the CI process When set ting up build automation the first task is to choose t he right build tool There are many build tools such as: • Ant Maven and Gradle for Java • Make for C/C++ • Grunt for JavaScript • Rake for Ruby The build tool that will work best for you depend s on the programming language of your project and the skill set of your team After you choose the build tool all the dependencies need to be clearly defined in the build scripts along with the build steps It’s also a best practice to version the final build artifacts which makes it e asier to deploy and to keep track of issues Building In the build stage t he build tools will take as input any change to the source code repository build the software and run the following types of tests : Unit testing – Tests a specific section of code to ensure the code does what it is expected to do The unit testing is performed by software developers during the development phase At this stage a static code analysis data flow analysis code coverage and other software verification pro cesses can be applied Static code a nalysis – This test is performed without actually executing the application after the build and unit test ing This analysis can help to find coding errors and security holes and it also can ensure conformance to coding guidelines Staging In the staging phase full environments are created that mirror the eventual production environment T he following tests are performed: Integ ration testing – Verifies the interfaces between components against software design Integration testing is an iterative process and 
facilitates building robust interfaces and system integrity This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Int egration and Continuous Delivery on AWS 13 Component testing – Tests message passing between various components and their outcomes A key goal of this testing could be idempotency in component testing Tests can include extremely large data volumes or edge situations and abnormal inputs System testing – Tests the system end toend and verifies i f the software satisfies the business requirement This might include testing the user interface ( UI) API backend logic and end state Performance testing – Determines the responsiveness and stability of a system as it performs under a particular worklo ad Performance testing also is used to investigate measure validate or verify other quality attributes of the system such as scalability reliability and resource usage Types of performance tests might include load tests stress tests and spike tes ts Performance tests are used for benchmarking against predefined criteria Compliance testing – Checks whether the code change complies with the requirements of a nonfunctional specification and/or regulations It determines if you are implementing and m eeting the defined standards User acceptance testing – Validate s the end toend business flow This testing is executed by an end user in a staging environment and confirm s whether the system meets the requirements of the requirement specification Typically customers employ alpha and beta testing methodologies at this stage Production Finally after passing the previous tests the staging phase is repeated in a production environment In this phase a final Canary test can be completed by deploying the new code only on a small subset of servers or even one server or one AWS Region before deploying code to the entire production environment Specifics on how to safely deploy to production are covered in the Deployment Methods section The next section discusses building the pipeline to incorporate these stages and tests Building the pipeline This section discusses building the pipeline Start by establishing a pipeline with just the components needed for CI and then transition later to a continuous delivery pipeline with more components and stages This section also discusses how you can consider using AWS Lambda functions and manual approvals for large projects plan for multiple teams branches and AWS Regions This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Integration and Continuous Delivery on AWS 14 Starting with a minimum viable pipeline for continuous integration Your organization’s journey toward continuous delivery begins with a minimum viable pipeline (MVP) As discussed in Implementing continuous integration and continuous delivery teams can start with a very simple process such as implementing a pipeline that performs a code style check or a single unit test without deployment A key component is a continuou s delivery orchestration tool To help you build this pipeline Amazon developed AWS CodeStar AWS CodeStar uses AWS CodePipeline AWS CodeBuild AWS CodeCommit and AWS CodeDeploy with an integrated setup process tools templates and dashboard AWS 
CodeStar provides everything you need to quickly develop build and deploy applications on AWS This allows you to start releasing code faster Customers who are already fam iliar with the AWS Management Console and seek a higher level of control can manually configure their developer tools of choice and can provision individual AWS services as needed AWS CodeStar setup page This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Int egration and Continuous Delivery on AWS 15 AWS CodePipeline is a CI/CD service that can be used through AWS CodeStar or through the AWS Management Console for fast and reliable application and infrastructure updates AWS CodePipeline builds tests and deploys your code every time there is a code change based on the release process models you define This enables you to rapidly and reliably deliver features and updates You can easily build out an end toend solution by using our pre built plugins for popular third party services like GitHub or by integrating your own custom plugins into any stage of your release process With AWS CodePipeline you only pay for what you use There are no upfront fees or long term commitments The steps of AWS CodeStar and AWS CodePipeline map directly to the source build staging and production CI/CD stages While continuous delivery is desirable you could start out with a simple two step pipeline that checks the source repository and performs a build action: AWS CodeStar dashboard This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Integration and Continuous Delivery on AWS 16 AWS CodePipeline source and build stages For AWS CodePipeline the source stage can accept inputs from GitHub AWS CodeCommit and Amazon Simple Storage Service ( Amazon S3) Automating the build process is a critical first step for implementing continuous delivery and m oving toward continuous deployment Eliminating human involvement in producing build artifacts removes the burden from your team minimizes errors introduced by manual packaging and allows you to start packaging consumable artifacts more often AWS CodePi peline works seamlessly with AWS CodeBuild a fully managed build service to make it easier to set up a build step within your pipeline that packages your code and runs unit tests With AWS CodeBuild you don’t need to provision manage or scale your own build servers AWS CodeBuild scales continuously and processes multiple builds concurrently so your builds are not left waiting in a queue AWS CodePipeline also integrates with build servers such as Jenkins Solano CI and TeamCity For example in the following build stage three actions (unit testing code style checks and code metrics collection) run in parallel Using AWS CodeBuild these steps can be added as new projects without any further effort in building or installing build servers to handle t he load This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Int egration and Continuous Delivery on AWS 17 CodePipeline — build functionality The source and build stages shown in the figure AWS CodePipeline 
—source and build stages along with supporting processes and automation support your team’s transition toward a continuous integration At this level of maturity developers need to regularly pay attention to build and test results They need to grow and maintain a healthy unit test base as well This in turn bolster s the entire team’s confidence in the CI/CD pipeline and further s its adoption This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Integration and Continuous Delivery on AWS 18 AWS CodePipeline stages This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Int egration and Continuous Delivery on AWS 19 Continuous delivery pipeline After the continuous integration pipeline has been implemented and supporting processes have been established your teams can start transitioning toward the continuous delivery pipeline This trans ition requires teams to automate both building and deploying applications A continuous delivery pipeline is characterized by the presence of staging and production steps where the production step is performed after a manual approval In the same manner the continuous integration pipeline was built your teams can gradually start building a continuous delivery pipeline by writing their deployment scripts Depending on the needs of an application some of the deployment steps can be abstracted by existing AWS services For example AWS CodePipeline directly integrates with AWS CodeDeploy a service that automates code deployments to Amazon EC2 instances and instances running on premises AWS OpsWorks a configuration management service th at helps you operate applications using Che f and to AWS Elastic Beanstalk a service for deploying and scaling web applications and services AWS has detailed documentation on how to implement and integrate AWS CodeDeploy with your infrastructure and pipeline After your team successfully automates the deployment of the application deployment stages can be expanded with various tests For example you can add other out ofthe box integrations with services like Ghost Inspector Runscope and others as shown in the following figure This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Integration and Continuous Delivery on AWS 20 AWS CodePipeline —code tests in deployment stages Adding Lambda actions AWS CodeStar and AWS CodePipeline support integration with AWS Lambda This integration enables implementation of a broad set of tasks such as creating custom resources in your environment integrating with third party systems (such as Slack) and performing checks on your newly deployed environment Lambda functions can be used in CI/CD pipelines to do the following tasks: • Roll out changes to your environment by applying or updating an AWS CloudFormation template • Create resources on demand in one stage of a pipeline using AWS CloudFormation and delete them in another stage • Deploy application version s with zero downtime in AWS Elastic Beanstalk with a Lambda function that swaps Canonical Name record (CNAME ) values • Deploy to Amazon EC2 
Manual approvals

Add an approval action to a stage in a pipeline at the point where you want the pipeline processing to stop, so that someone with the required AWS Identity and Access Management (IAM) permissions can approve or reject the action. If the action is approved, the pipeline processing resumes. If the action is rejected, or if no one approves or rejects the action within seven days of the pipeline reaching the action and stopping, the result is the same as an action failing, and the pipeline processing does not continue.

AWS CodeDeploy—manual approvals

Deploying infrastructure code changes in a CI/CD pipeline

AWS CodePipeline lets you select AWS CloudFormation as a deployment action in any stage of your pipeline. You can then choose the specific action you would like AWS CloudFormation to perform, such as creating or deleting stacks and creating or executing change sets. A stack is an AWS CloudFormation concept that represents a group of related AWS resources. While there are many ways of provisioning Infrastructure as Code, AWS CloudFormation is a comprehensive tool recommended by AWS as a scalable, complete solution that can describe the most comprehensive set of AWS resources as code. AWS recommends using AWS CloudFormation in an AWS CodePipeline project to track infrastructure changes and tests.

CI/CD for serverless applications

You can also use AWS CodeStar, AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation to build CI/CD pipelines for serverless applications. Serverless applications integrate managed services such as Amazon Cognito, Amazon S3, and Amazon DynamoDB with event-driven services such as AWS Lambda to deploy applications in a manner that doesn't require managing servers. If you are a serverless application developer, you can use the combination of AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation to automate the building, testing, and deployment of serverless applications that are expressed in templates built with the AWS Serverless Application Model (SAM). For more information, refer to the AWS Lambda documentation for Automating Deployment of Lambda-based Applications.

You can also create secure CI/CD pipelines that follow your organization's best practices with AWS Serverless Application Model Pipelines (AWS SAM Pipelines). AWS SAM Pipelines are a feature of the AWS SAM CLI that gives you access to the benefits of CI/CD in minutes, such as accelerating deployment frequency, shortening lead time for changes, and reducing deployment errors. AWS SAM Pipelines come with a set of default pipeline templates for AWS CodeBuild/CodePipeline that follow AWS deployment best practices. For more information and to view the tutorial, refer to the blog Introducing AWS SAM Pipelines.
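As a hedged illustration of the create and execute change set actions described above, the following boto3 sketch drives the same two CloudFormation steps directly. The stack name and template URL are placeholders, and a real pipeline would also handle the case where the change set contains no changes (the waiter treats an empty change set as a failure).

import boto3

cfn = boto3.client("cloudformation")

def deploy_with_change_set(stack_name, template_url):
    # Create a change set for the packaged template, wait for it, then execute it.
    # This mirrors the "create change set" / "execute change set" actions that
    # AWS CodePipeline can run for you as CloudFormation deployment actions.
    change_set = cfn.create_change_set(
        StackName=stack_name,
        TemplateURL=template_url,          # e.g. a packaged SAM template stored in S3
        ChangeSetName="pipeline-change-set",
        ChangeSetType="UPDATE",            # use "CREATE" for a brand-new stack
        Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
    )
    cfn.get_waiter("change_set_create_complete").wait(
        StackName=stack_name, ChangeSetName=change_set["Id"]
    )
    cfn.execute_change_set(ChangeSetName=change_set["Id"], StackName=stack_name)
    cfn.get_waiter("stack_update_complete").wait(StackName=stack_name)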
Pipelines for multiple teams, branches, and AWS Regions

For a large project, it's not uncommon for multiple project teams to work on different components. If multiple teams use a single code repository, it can be mapped so that each team has its own branch. There should also be an integration or release branch for the final merge of the project. If a service-oriented or microservice architecture is used, each team could have its own code repository. In the first scenario, if a single pipeline is used, it's possible that one team could affect the other teams' progress by blocking the pipeline. AWS recommends that you create specific pipelines for team branches and another release pipeline for the final product delivery.

Pipeline integration with AWS CodeBuild

AWS CodeBuild is designed to enable your organization to build a highly available build process with almost unlimited scale. AWS CodeBuild provides quick-start environments for a number of popular languages, plus the ability to run any Docker container that you specify. With the advantages of tight integration with AWS CodeCommit, AWS CodePipeline, and AWS CodeDeploy, as well as Git and CodePipeline Lambda actions, the CodeBuild tool is highly flexible. Software can be built through the inclusion of a buildspec.yml file that identifies each of the build steps, including pre- and post-build actions, or through actions specified in the CodeBuild tool. You can view the detailed history of each build using the CodeBuild dashboard. Events are stored as Amazon CloudWatch Logs log files.

CloudWatch Logs log files in AWS CodeBuild

Pipeline integration with Jenkins

You can use the Jenkins build tool to create delivery pipelines. These pipelines use standard jobs that define steps for implementing continuous delivery stages. However, this approach might not be optimal for larger projects, because the current state of the pipeline doesn't persist between Jenkins restarts, implementing manual approval is not straightforward, and tracking the state of a complex pipeline can be complicated. Instead, AWS recommends that you implement continuous delivery with Jenkins by using the AWS CodePipeline plugin. This plugin allows complex workflows to be described using a Groovy-like domain-specific language and can be used to orchestrate complex pipelines. The plugin's functionality can be enhanced by the use of satellite plugins, such as the Pipeline Stage View Plugin, which visualizes the current progress of stages defined in a pipeline, or the Pipeline Multibranch Plugin, which groups builds from different branches.

AWS recommends that you store your pipeline configuration in a Jenkinsfile and have it checked into a source code repository. This allows for tracking changes to pipeline code and becomes even more important when working with the Pipeline Multibranch Plugin. AWS also recommends that you divide your pipeline into stages. This logically groups the pipeline steps and also enables the Pipeline Stage View Plugin to visualize the current state of the pipeline. The following figure
shows a sample Jenkins pipeline, with four defined stages, visualized by the Pipeline Stage View Plugin.

Defined stages of a Jenkins pipeline visualized by the Pipeline Stage View Plugin

Deployment methods

You can consider multiple deployment strategies and variations for rolling out new versions of software in a continuous delivery process. This section discusses the most common deployment methods: all at once (deploy in place), rolling, immutable, and blue/green, and indicates which of these methods are supported by AWS CodeDeploy and AWS Elastic Beanstalk. The following table summarizes the characteristics of each deployment method.

Table 1: Characteristics of deployment methods

Method | Impact of failed deployment | Deploy time | Zero downtime | No DNS change | Rollback process | Code deployed to
Deploy in place | Downtime | – | ☓ | ✓ | Redeploy | Existing instances
Rolling | Single batch out of service; any successful batches prior to failure run the new application version | † | ✓ | ✓ | Redeploy | Existing instances
Rolling with additional batch (Beanstalk) | Minimal if first batch fails; otherwise similar to rolling | † | ✓ | ✓ | Redeploy | New and existing instances
Immutable | Minimal | – | ✓ | ✓ | Redeploy | New instances
Traffic splitting | Minimal | – | ✓ | ✓ | Reroute traffic and terminate new instances | New instances
Blue/green | Minimal | – | ✓ | ☓ | Switch back to old environment | New instances

† Varies depending on batch size.

All at once (in-place deployment)

All at once (in-place deployment) is a method you can use to roll out new application code to an existing fleet of servers. This method replaces all the code in one deployment action. It requires downtime, because all servers in the fleet are updated at once. There is no need to update existing DNS records. In case of a failed deployment, the only way to restore operations is to redeploy the code on all servers again. In AWS Elastic Beanstalk this deployment is called all at once, and it is available for single and load-balanced applications. In AWS CodeDeploy this deployment method is called in-place deployment, with a deployment configuration of AllAtOnce.
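As an illustration only, the following boto3 snippet starts such an in-place deployment with the predefined CodeDeployDefault.AllAtOnce configuration; the application, deployment group, and S3 revision names are placeholders for resources assumed to exist already.

import boto3

codedeploy = boto3.client("codedeploy")

# Start an in-place deployment that updates every instance in the group at once.
# The application, deployment group, bucket, and key below are placeholder names.
response = codedeploy.create_deployment(
    applicationName="my-application",
    deploymentGroupName="my-deployment-group",
    deploymentConfigName="CodeDeployDefault.AllAtOnce",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-artifact-bucket",
            "key": "releases/app-1.2.3.zip",
            "bundleType": "zip",
        },
    },
    description="All-at-once (in-place) release",
)
print("Started deployment:", response["deploymentId"])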
Rolling deployment

With rolling deployment, the fleet is divided into portions so that all of the fleet isn't upgraded at once. During the deployment process, two software versions, new and old, are running on the same fleet. This method allows a zero-downtime update. If the deployment fails, only the updated portion of the fleet will be affected.

A variation of the rolling deployment method, called canary release, involves deployment of the new software version on a very small percentage of servers at first. This way, you can observe how the software behaves in production on a few servers while minimizing the impact of breaking changes. If there is an elevated rate of errors from a canary deployment, the software is rolled back. Otherwise, the percentage of servers with the new version is gradually increased.

AWS Elastic Beanstalk has followed the rolling deployment pattern with two deployment options, rolling and rolling with additional batch. These options allow the application to first scale up before taking servers out of service, preserving full capacity during the deployment. AWS CodeDeploy accomplishes this pattern as a variation of an in-place deployment, with deployment configurations like OneAtATime and HalfAtATime.

Immutable and blue/green deployments

The immutable pattern specifies a deployment of application code by starting an entirely new set of servers with a new configuration or version of the application code. This pattern leverages the cloud capability that new server resources are created with simple API calls. The blue/green deployment strategy is a type of immutable deployment that also requires creation of another environment. Once the new environment is up and has passed all tests, traffic is shifted to this new deployment. Crucially, the old environment, that is, the "blue" environment, is kept idle in case a rollback is needed.

AWS Elastic Beanstalk supports immutable and blue/green deployment patterns. AWS CodeDeploy also supports the blue/green pattern. For more information on how AWS services accomplish these immutable patterns, refer to the Blue/Green Deployments on AWS whitepaper.

Database schema changes

It's common for modern software to have a database layer. Typically, a relational database is used, which stores both data and the structure of the data. It's often necessary to modify the database in the continuous delivery process. Handling changes in a relational database requires special consideration, and it offers different challenges than the ones present when deploying application binaries. Usually, when you upgrade an application binary, you stop the application, upgrade it, and then start it again. You don't really bother about the application state, which is handled outside of the application. When upgrading databases, you do need to consider state, because a database contains much state but comparatively little logic and structure.

The database schema before and after a change is applied should be considered different versions of the database. You could use tools such as Liquibase and Flyway to manage the versions. In general, those tools employ some variant of the following methods (a simplified sketch follows the list):

• Add a table to the database where a database version is stored.
• Keep track of database change commands and bunch them together in versioned change sets. In the case of Liquibase, these changes are stored in XML files. Flyway employs a slightly different method, where the change sets are handled as separate SQL files or occasionally as separate Java classes for more complex transitions.
• When Liquibase is asked to upgrade a database, it looks at the metadata table and determines which change sets to run in order to bring the database up to date with the latest version.
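The following Python sketch illustrates that general approach with a deliberately simplified, self-contained example (SQLite plus plain SQL files). It is not how Liquibase or Flyway are implemented; it only shows the idea of recording applied versions in a metadata table and running pending change sets in order. The file-naming convention is an assumption.

import pathlib
import sqlite3

def applied_versions(conn):
    # The metadata table records every change set that has already been applied.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version ("
        " version TEXT PRIMARY KEY,"
        " applied_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    return {row[0] for row in conn.execute("SELECT version FROM schema_version")}

def migrate(db_path="app.db", changes_dir="changes"):
    # Change sets are plain SQL files named V1__create_users.sql, V2__add_index.sql, ...
    conn = sqlite3.connect(db_path)
    done = applied_versions(conn)
    pending = sorted(
        pathlib.Path(changes_dir).glob("V*__*.sql"),
        key=lambda p: int(p.name.split("__")[0][1:]),   # numeric order: V2 before V10
    )
    for path in pending:
        version = path.name.split("__")[0]
        if version in done:
            continue                                    # already applied
        conn.executescript(path.read_text())
        conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
        conn.commit()

if __name__ == "__main__":
    migrate()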
Summary of best practices

The following are some best practice dos and don'ts for CI/CD.

Do:
• Treat your infrastructure as code:
  o Use version control for your infrastructure code.
  o Make use of bug tracking/ticketing systems.
  o Have peers review changes before applying them.
  o Establish infrastructure code patterns/designs.
  o Test infrastructure changes like code changes.
• Put developers into integrated teams of no more than 12 self-sustaining members.
• Have all developers commit code to the main trunk frequently, with no long-running feature branches.
• Consistently adopt a build system such as Maven or Gradle across your organization, and standardize builds.
• Have developers build unit tests toward 100% coverage of the code base.
• Ensure that unit tests are 70% of the overall testing in duration, number, and scope.
• Ensure that unit tests are up to date and not neglected. Unit test failures should be fixed, not bypassed.
• Treat your continuous delivery configuration as code.
• Establish role-based security controls (that is, who can do what and when):
  o Monitor/track every resource possible.
  o Alert on services, availability, and response times.
  o Capture, learn, and improve.
  o Share access with everyone on the team.
  o Plan metrics and monitoring into the lifecycle.
• Keep and track standard metrics:
  o Number of builds
  o Number of deployments
  o Average time for changes to reach production
  o Average time from first pipeline stage to each stage
  o Number of changes reaching production
  o Average build time
• Use multiple distinct pipelines for each branch and team.

Don't:
• Have long-running branches with large, complicated merges.
• Have manual tests.
• Have manual approval processes, gates, code reviews, and security reviews.

Conclusion

Continuous integration and continuous delivery provide an ideal scenario for your organization's application teams. Your developers simply push code to a repository. This code will be integrated, tested, deployed, tested again, merged with infrastructure, go through security and quality reviews, and be ready to deploy with extremely high confidence. When CI/CD is used, code quality is improved and software updates are delivered quickly and with high confidence that there will be no breaking changes. The impact of any release can be correlated with data from production and operations, and it can be used for planning the next cycle as well, a vital DevOps practice in your organization's cloud transformation.

Further reading

For more information on the topics discussed in this whitepaper, refer to the following AWS whitepapers:
• Overview of Deployment Options on AWS
• Blue/Green Deployments on AWS
• Setting up a CI/CD pipeline by integrating Jenkins with AWS CodeBuild and AWS CodeDeploy
• Implementing Microservices on AWS
• Docker on AWS: Running Containers in the Cloud

Contributors

The following individuals and organizations contributed to this document:
• Amrish Thakkar, Principal Solutions Architect, AWS
• David Stacy, Senior Consultant DevOps, AWS Professional Services
• Asif Khan, Solutions Architect, AWS
• Xiang Shen, Senior Solutions Architect, AWS

Document revisions

Date | Description
October 27, 2021 | Updated content
June 1, 2017 | First publication
|
General
|
consultant
|
Best Practices
|
Provisioning_Oracle_Wallets_and_Accessing_SSLTLSBased_Endpoints_on_Amazon_RDS_for_Oracle
|
Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle February 2018 Copyright 2018 Amazoncom Inc or its affiliates All Rights Reserved Notices Licensed under the Apache License Version 20 (the "License") You may not use this file except in compliance with the License A copy of the License is located at http://awsamazoncom/apache20/ or in the "license" file accompanying this file This file is distributed on an "AS IS" BASIS WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND either express or implied See the License for the specific language governing permissions and limitations under the License This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own in dependent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations c ontractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agre ement between AWS and its customers Contents Introduction 1 Creating and Uploading Custom Oracle Wallets 2 Creating and Uploading a Wallet with an Amazon S3 Certificate 3 Uploading a Customized Wallet Bundle 5 Examples of Using Oracle Wallets to Establish SSL/TLS Outbound Connections 6 Using UTL_HTTP over an SSL/TLS Endpoint 7 Establishing Database Links between RDS Oracle DB Instances over an SSL/TLS Endpoint 7 Sending Emails Using UTL_SMTP and Amazon Simple Email Service (Amazon SES) 7 Downloading a File fr om Amazon S3 to an RDS Oracle DB Instance 8 Uploading a File from RDS Oracle DB Instance to Amazon S3 8 Conclusion 9 Appendi x 9 Sample PL/SQL Procedure to Download Artifacts from Amazon S3 9 Sample PL/SQL Procedure to Send an Email Through Amazon SES 12 Abstract This paper explain s how to extend outbound network access on your Amazon Relational Database Service (Amazon RDS) for Oracle database instances to connect securely to remote SSL/TLS based endpoints SSL/TLS endpoints require one or more valid Certificate Authority (CA) certificates that can be bundled within an Oracle wallet By uploading Oracle wallets to your Amazon RDS for Oracle DB instances certain ou tbound network calls can be made aware of the uploaded Oracle wallets This enables outbound network traffic to access any SSL/TLS based endpoint that can be validated using the CA certificate bundle within the Oracle wallets Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 1 Introduction Amazon Relational Database Service (Amazon RDS ) is a managed relational database service that provides you with six familiar database engines to choose from including Amazon Aurora MySQL MariaDB Oracle Microsof t SQL Server and PostgreSQL1 You can use your existing database code applications and tools with Amazon RDS and RDS will handle routine database tasks such as provisioning patching backup recovery failure detection and repair With Amazon RDS you can use replication to enhance availability and reliability for production workloads Using the Multi AZ deployment option you can run mission critical workloads 
with high availability and built-in automated failover from your primary database to a synchronously replicated secondary database.

Amazon RDS for Oracle provides scalability, performance monitoring, and backup and restore support. Multi-AZ deployment for Oracle DB instances simplifies creating a highly available architecture, because a Multi-AZ deployment contains built-in support for automated failover from your primary database to a synchronously replicated secondary database in a different Availability Zone. Amazon RDS for Oracle provides the latest version of Oracle Database with the latest Patch Set Updates (PSUs). Amazon RDS manages the database upgrade process on your schedule, eliminating manual database upgrade and patching tasks.

Amazon Virtual Private Cloud (Amazon VPC) is a virtual network dedicated to your AWS account.2 It is logically isolated from other virtual networks in the AWS Cloud. You can launch AWS resources, such as Amazon RDS DB instances or Amazon Elastic Compute Cloud (Amazon EC2) instances, into your VPC.3 When you create a VPC, you specify IP address ranges, subnets, routing tables, and network gateways to your own data center and to the internet. You can move RDS DB instances that are not already in a VPC into an existing VPC.4

Outbound network access is only supported for Oracle DB instances in a VPC.5 Using outbound network access, you can use PL/SQL code inside the database to initiate connections to servers elsewhere on the network. This lets you use utilities such as UTL_HTTP, UTL_TCP, and UTL_SMTP to connect your DB instance to remote endpoints. For example, you can use UTL_MAIL or UTL_SMTP to send emails, or UTL_HTTP to communicate with external web servers. By default, an Amazon DNS server provides name resolution for outbound traffic from the instances in your VPC. Should you choose to resolve private domain names for outbound traffic, you can configure a custom DNS server.6

Always take care when enabling outbound networking, as attackers can use it as a vector to remove data from your systems. In addition to other security best practices, keep the following in mind:

• Carefully configure VPC security groups to only allow ingress from, and egress to, known networks.
• Use in-database network access control lists (ACLs) to allow only trusted users to initiate connections out of the database.
• Always upgrade to the latest release of Amazon RDS for Oracle to ensure you have the latest Oracle PSU and security fixes.

To protect the integrity and content of your data, you should use Transport Layer Security (TLS, also referred to as Secure Sockets Layer or SSL) to provide encryption and server verification. By default, outbound network access supports only external traffic over and to non-TLS/SSL mediums. For TLS/SSL-based traffic, you can use Oracle wallets to store Certificate Authority (CA) certificates, which enable the verification of remote entities. You can make utilities that use outbound network access traffic (such as UTL_HTTP and UTL_SMTP) aware of these wallets. This enables outbound communication from your DB instance to remote endpoints over SSL.

In this paper, we discuss how to create Oracle wallets and copy them to an Amazon RDS for Oracle DB instance using Amazon S3. We also demonstrate how to use a wallet to protect calls made using the UTL_HTTP and UTL_SMTP utilities.
Creating and Uploading Custom Oracle Wallets

To enable SSL/TLS connections from PL/SQL, you can upload custom Oracle wallets to your Amazon RDS for Oracle DB instances. These wallets can contain public and private certificates used to access SSL/TLS-based endpoints from your RDS Oracle DB instances. First, you create an initial Oracle wallet containing an Amazon S3 certificate as a one-time setup. Then you can securely upload any number of wallets to Amazon RDS for Oracle DB instances through Amazon S3.

Creating and Uploading a Wallet with an Amazon S3 Certificate

1. Download the Baltimore CyberTrust Root certificate.7

2. Convert the certificate to the x509 PEM format:

openssl x509 -inform der -in BaltimoreCyberTrustRoot.crt -outform pem -out BaltimoreCyberTrustRoot.pem

3. Using the orapki utility,8 create a wallet and add the certificate. This exports the wallet to a file named cwallet.sso. Alternatively, if you don't specify an auto-login wallet, you can use ewallet.p12; in this case, PL/SQL applications must provide a password when opening the wallet.

orapki wallet create -wallet . -auto_login_only
orapki wallet add -wallet . -trusted_cert -cert BaltimoreCyberTrustRoot.pem -auto_login_only
orapki wallet display -wallet .

4. Using high-level aws s3 commands with the AWS Command Line Interface (CLI),9 create an S3 bucket (or use an existing bucket) and upload the wallet artifact:

aws s3 mb s3://<bucket-name>
aws s3 cp cwallet.sso s3://<bucket-name>/

5. Generate a presigned URL for the wallet artifact. By default, presigned URLs are valid for an hour; however, you can set the expiration explicitly.10

aws s3 presign s3://<bucket-name>/cwallet.sso

6. Import the procedure provided in the Appendix into your RDS for Oracle DB instance.

7. Using this procedure, download the wallet from the S3 bucket.

a. Create a directory for this initial wallet. (Be sure to always store each wallet in its own directory.)

exec rdsadmin.rdsadmin_util.create_directory('S3_SSL_WALLET');

b. Whitelist outbound traffic on Oracle's ACL (using the 'user' defined earlier):

BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 's3.xml',
    description => 'AWS S3 ACL',
    principal   => UPPER('&user'),
    is_grant    => TRUE,
    privilege   => 'connect');
  COMMIT;
END;
/

BEGIN
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl  => 's3.xml',
    host => '*.amazonaws.com');
  COMMIT;
END;
/

c. Using the procedure above, fetch the wallet artifact uploaded earlier to the S3 bucket. Replace the p_s3_url value with the presigned URL generated in step 5 (after stripping it to be HTTP instead of HTTPS). Although access to this S3 wallet artifact is presigned, it must be over HTTP.
described in the previous procedure you can also download customized Oracle wallets (containing customized selections of publ ic or private CA certificates) For example you can create a new Oracle wallet containing a wallet bundle of your choice upload it to an S3 bucket and use one of the previo us procedures to securely download this wallet to a n Amazon RDS for Oracle DB instance 1 Create a new directory (named MY_WALLET for example) for this new wallet bundle Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 6 exec rdsadminrdsadmin_utilcreate_directory(' MY_WALLET '); 2 Download the new wallet artifacts from the S3 bucket to the new directory Notice that we’ve passed on the S3_SSL_WALLET directory from the initial setup above to validate against the S3 bucket certific ate The download is requested over HTTPS BEGIN s3_download_ presigned_url ( '<S3 URL>' p_local_filename => 'cwalletsso' p_local_directory => 'MY_WALLET' p_wallet_directory => ' S3_SSL_WALLET ' ); END; / 3 Run this procedure to use this newly uploaded wallet ( for example with UTL_ HTTP ) DECLARE l_wallet_path all_directoriesdirectory_path%type; BEGIN select directory_path into l_wallet_path from all_directories where upper(directory_name)='MY _WALLET' ; utl_httpset_wallet('file:/' || l_wallet_path ); END; / Similarly you can upload and use any generic wallet where it’s need ed Examples of Using Oracle Wallets to Establish SSL/TLS Outbound Connections Oracle wallets containing CA certificate bundles allow SSL/TLS based outbound traffic to access any endpoint that can validate itself against o ne of the CA Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 7 certificate s in the bundle Here are a few examples of how you can use wallets to establish SSL/TLS outbound connections Using UTL_HTTP over a n SSL/TLS Endpoint Once you create a wallet accessing an endpoint over SSL/TLS requires setting the wallet path In this example robotstxt from statusawsamazoncom is accessed with an Oracle wallet containing Amazon’s CA certificate (obtained from https://wwwamazontrustcom/repository ) BEGIN utl_httpset_wallet('file:/rdsdbdata/userdirs/02'); END; / select utl_httprequest('https://statusawsamazoncom/robotstxt') as ROBOTS_TXT from dual; ROBOTS_TXT Useragent: * Allow: / Establishing Database Links between RDS Oracle DB Instances over an SSL/TLS Endpoint Database links can be established between RDS Oracle DB instances over an SSL/TLS endpoint as long as the SSL option is configured for each instance 11 No further setup is required Sending Emails Using UTL_SMTP and Amazon Simple Email Service (Amazon SES) You can use Amazon SES to send emails on UTL_SMTP over SSL/TLS 1 Obtain the relevant AWS Region endpoint and credentials from Amazon SES 12 Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 8 2 Obtain a Verisign Symantec based CA certificates13 3 Create or update an existing wallet containing the relevant certificate For this example assume that the wallet has been uploaded to a directory called SES_SSL_WALLET created through the RDSADMIN utility Using your Amazon SES SMTP credentials send an email through UTL_SMTP u sing this sample code snippet Downloading a File from Amazon S3 to an RDS Oracle DB Instance Using a utility similar to the s3_download_presigned_url procedure you can download files from Amazon S3 For e xample: BEGIN 
s3_download_presigned_url ( 'https:// <bucketname>s3amazonawscom/ <sub directory> /<file>?AWSAccessKeyId=' p_local_filename => ' <localfilename> ' p_local_directory => ' <targetlocaldirectory> ' p_wallet_directory => 'S3_SSL_ WALLET' ); END; / Uploading a File from RDS Oracle DB Instance to Amazon S3 Uploading an artifact from your database instance to Amazon S3 is possible through HTTP PUT multipart requests using AWS Signature Version 4 signing 14 Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 9 Conclusion In this paper we explained how to create Oracle wallets containing CA certificate bundles and copy them to Amazon RDS for Oracle DB instances We also provided a few examples that show ed how you can use wallets to establish SSL/TLS based outbound connections You can ex tend t he steps highlighted in this paper to access any secure endpoint fro m your Amazon RDS Oracle DB instances Appendix Sample PL/SQL Procedure to Download Artifacts from Amazon S3 Define your user here define user='admin'; Directgrant required privs BEGIN rdsadminrdsadmin_utilgrant_sys_object('DBA_DIRECTORIES' UPPER('&user')); END; / BEGIN rdsadminrdsadmin_utilgrant_sys_object('UTL_HTTP' UPPER('&user')); END; / BEGIN rdsadminrdsadmin_utilgrant_sys_object('UTL_FILE' UPPER('&user')); END; Example download procedure CREATE OR REPLACE PROCEDURE s3_download_presigned_url ( p_s3_url IN VARCHAR2 Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 10 p_local_filename IN VARCHAR2 p_local_directory IN VARCHAR2 p_wallet_directory IN VARCHAR2 DEFAULT NULL ) AS Local variables l_req utl_httpreq; l_wallet_path VARCHAR2(4000); l_fh utl_filefile_type; l_resp utl_httpresp; l_data raw(32767); l_file_size NUMBER; l_file_exists BOOLEAN; l_block_s ize BINARY_INTEGER; l_http_status NUMBER; Userdefined exceptions e_https_requires_wallet EXCEPTION; e_wallet_dir_invalid EXCEPTION; e_http_exception EXCEPTION; BEGIN Validate input IF (regexp_like(p_s3_url '^https:' 'i') AND p_wallet_directory IS NULL) THEN raise e_https_requires_wallet; END IF; Use wallet if specified IF (p_wallet_directory IS NOT NULL) THEN BEGIN SELECT directory_path INTO l_wallet_path FROM dba_directories WHERE upper(directory_name)=upper(p_wallet_directory); utl_httpset_wallet('file:' || l_wallet_path); EXCEPTION WHEN NO_DATA_FOUND THEN raise e_wallet_dir_invalid; END; END IF; Do HTTP request BEGIN Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 11 l_req := utl_httpbegin_request(p_s3_url 'GET' 'HTTP/11'); l_fh := utl_filefopen(p_local_directory p_local_filename 'wb' 32767); l_resp := utl_httpget_response(l_req); If we get HTTP error code write that instead l_http_s tatus := l_respstatus_code; IF (l_http_status != 200) THEN dbms_outputput_line('WARNING: HTTP response ' || l_http_status || ' ' || l_respreason_phrase || ' Details in ' || p_local_filename ); END IF; Loop over response and write to file BEGIN LOOP utl_httpread_raw(l_resp l_data 32766); utl_fileput_raw(l_fh l_data true); END LOOP; EXCEPTION WHEN utl_httpend_of_body THEN utl_httpend_respon se(l_resp); END; Get file attributes to see what we did utl_filefgetattr( location => p_local_directory filename => p_local_filename fexists => l_file_exists file_length => l_file_size block_size => l_block_size ); utl_filefclose(l_fh); dbms_outputput_line('wrote ' || l_file_size || ' bytes'); EXCEPTION WHEN OTHERS THEN 
utl_httpend_response(l_resp); utl_filefclose(l_fh); dbms_outputput_line(dbms_utilityform at_error_stack()); Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 12 dbms_outputput_line(dbms_utilityformat_error_backtrace()); raise; END; EXCEPTION WHEN e_https_requires_wallet THEN dbms_outputput_line('ERROR: HTTPS requires a valid wallet location'); WHEN e_wallet_dir_invalid THEN dbms_outputput_line('ERROR: wallet directory not found'); WHEN others THEN raise; END s3_download_presigned_url; / Sample PL/SQL Procedure to Send an Email Through Amazon SES declare l_smtp_server va rchar2(1024) := 'email smtpuswest 2amazonawscom'; l_smtp_port number := 587; l_wallet_dir varchar2(128) := 'SES_SSL_WALLET'; l_from varchar2(128) := 'user@lorem ipsumdolar'; l_to varchar2(128) := 'user@lorem ipsumdolar'; l_user varchar2(12 8) := '<USERNAME>'; l_password varchar2(128) := '<PASSWORD>'; l_subject varchar2(128) := 'Test subject'; l_wallet_path varchar2(4000); l_conn utl_smtpconnection; l_reply utl_smtpreply; l_replies utl_smtpreplies; begin select 'file:/' || directory_path into l_wallet_path from dba_directories where directory_name=l_wallet_dir; Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 13 open a connection l_reply := utl_smtpopen_connection( host => l_smtp_server port => l_smtp_port c => l_conn wallet_path => l_wallet_path secure_connection_before_smtp => false ); dbms_outputput_line('opened connection received reply ' || l_replycode || '/' || l_replytext); get supported configs from server l_replies := utl_smtpehlo(l_conn 'localhost'); for r in 1l_repliescount loop dbms_outputput_line('ehlo (server config) : ' || l_replies(r)code || '/' || l_replies(r)text); end loop; STARTTLS l_reply := utl_smtpstarttls(l_conn); dbms_outputput_line('starttls received reply ' || l_replycode || '/' || l_replytext); l_replies := utl_smtpehlo(l_conn 'localhost'); for r in 1l_repliescount loop dbms_outputput_line('ehlo (server config) : ' || l_replies(r)c ode || '/' || l_replies(r)text); end loop; utl_smtpauth(l_conn l_user l_password utl_smtpall_schemes); utl_smtpmail(l_conn l_from); utl_smtprcpt(l_conn l_to); utl_smtpopen_data l_conn); utl_smtpwrite_data(l_conn 'Date: ' || to_char(SYSDATE 'DD MONYYYY HH24:MI:SS') || utl_tcpcrlf); utl_smtpwrite_data(l_conn 'From: ' || l_from || utl_tcpcrlf); utl_smtpwrite_data(l_conn 'To: ' || l_to || utl_tcpcrlf); utl_smtpwrite_data(l _conn 'Subject: ' || l_subject || utl_tcpcrlf); Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 14 utl_smtpwrite_data(l_conn '' || utl_tcpcrlf); utl_smtpwrite_data(l_conn ' Test message ' || utl_tcpcrlf); utl_smtpclose_data(l_conn); l_reply := utl_smtpquit(l_conn); exception when oth ers then utl_smtpquit(l_conn); raise; end; / 1 https://awsamazoncom/rds/ 2 https://awsamazoncom/vpc/ 3 https://awsamazon com/ec2/ 4 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/USER_VPCWo rkingWithRDSInstanceinaVPChtml#USER_VP CNon VPC2VPC 5 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/CHAP_Oracleh tml#OracleConceptsONA 6 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/AppendixOracl eCommonDBATasksSystemhtml#Ap pendixOracleCommonDBATasksCust omDNS 7 https://wwwdigicertcom/digicert root certificateshtm 8 https://docsoraclecom/database/121/DBSEG/asoappfhtm#DBSEG610 9 http://docsawsamazoncom/cli/latest/userguide/using s3commandshtml 10 
http://docsawsamazoncom/cli/latest/reference/s3/presignhtml Notes Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 15 11 https://docsawsamazoncom/Ama zonRDS/latest/UserGuide/AppendixOrac leOptionsSSLhtml 12 https://docsawsamazoncom/ses/latest/DeveloperGuide/send email smtphtml 13https://wwwsymanteccom/theme/roots 14 https://docsawsamazoncom/AmazonS3/latest/API/sigv4 authenticatio n HTTPPOSThtml
|
General
|
consultant
|
Best Practices
|
RealTime_Communication_on_AWS
|
RealTime Communication on AWS Best Practices for Designing Highly Available and Scalable Real Time Communication (RTC) Workloads on AWS February 2020 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Fundamental Components of RTC Architecture 2 Softswitch/PBX 2 Session Border Controller (SBC) 3 PSTN Connectivity 3 Media Gateway (Transcoder) 3 WebRTC and WebRTC gateway 4 High Availability and Scalability on AWS 5 Floating IP Pattern for HA Between Active –Standby Stateful Servers 6 Load Balancing for Scalabili ty and HA with WebRTC and SIP 8 Cross Region DNS Based Load Balancing and Failover 11 Data Durability and HA with Persistent Storage 13 Dynamic Scaling with AWS Lambda Amazon Route 53 and AWS Auto Scaling 14 Highly Available WebRTC with Kinesis Video Streams 14 Highly Available SIP Trunking with Amazo n Chime Voice Connector 15 Best Practices from the Field 15 Create a SIP Overlay 15 Perform Deta iled Monitoring 17 Use DNS for Load Balancing and Floating IPs for Failover 18 Use Multiple Availability Zones 19 Keep Traffic within One Availability Zone and use EC2 Placement Groups 20 Use Enhanced Networking EC2 Instance Types 21 Security Considerations 21 Conclusion 22 Contributors 22 Document Revisions 23 Abstract Today many organizations are looking to reduce cost and attain scalability for realtime voice messaging and multimedia workloads This paper outlines the best practices for managing real time communication workloads on AWS and includes reference architectures to meet these requirements This paper serves as a guide for individuals familiar with real time communication on how to achieve high availability and scalability for these workloads Amazon Web Services RealTime Commun ication on AWS Page 1 Introduction Telecommunication applications using voice video and messaging as channels are a key requirement for many organizations and their end users These realtime communication (RTC) workloads have specific latency and availability requirements that can be met by following relevant design best practices In the past RTC workloads have been deployed in traditional on premises data centers with dedicated resources However due to a mature and burgeoning set of features RTC workloads can be deployed on Amazon Web Services (AWS) despite stringent service level requirements while also benefiting from scalability elasticity and high availability Today several custom ers are using AWS its partners and open source solutions to run RTC workloads with reduced cost faster agility the ability to go global in minutes and rich features from AWS services Customers leverage AWS features such as enhanced networking with a n Elastic Network Adapter (ENA) and the latest generation of Amazon Elastic Compute Cloud (EC2) instance s to benefit from data plane development kit 
(DPDK) single root I/O virtualization (SR IOV) huge pages NVM Express (NVMe) nonuniform memory access (NUMA) support as well as bare metal insta nces to meet RTC workload requirements These Instances offer n etwork bandwidth of up to 100 Gbps and commensurate packets per second delivering increased performance for network intensive applications For scaling Elastic Load Balancing offers Application Load Balancer which offer s WebS ocket support and Network Load Balancer that can handle millions of requests per second For network acceleration AWS Global Accelerator provides static IP addresses that act as a fixed entry point to your application endpoints in AWS It has support for static IP addresses for the load balancer For reduced latency cost and increased bandwidth throughput AWS Direct Connect establishes dedica ted network connection from on premises to AWS Highly available managed SIP trunking is provided by Amazon Chime Voice Connector Amazon Kinesis Video Streams with WebRTC easily stream real time two way media with high availability This pa per includes reference architectures that show how to set up RTC workloads on AWS and best practices to optimize the solutions to meet end user requirements while optimizing for the cloud The evolved packet core (EPC) is out of scope for this white paper but the best practices detailed can be applied to virtual network functions (VNFs) Amazon Web Services RealTime Communication on AWS Page 2 Fundamental Components of RTC Architecture In the telecommunications industry real time communication (RTC) commonly refer s to live media sessions between two endpoints with minimum latency These sessions could be related to: • A voice session between two parties (eg telephone system mobile VoIP) • Instant messaging (eg chatting IRC) • Live video session (eg videoconfer encing telepresence) Each of the preceding solutions has some components in common (eg components that provide authentication authorization and access control transcoding buffering and relay and so on ) and some components unique to the type of medi a transmitted (eg broadcast service messaging server and queues and so on ) This section focuses on defining a voice and video based RTC system and all of the related components illustrated in Figure 1 Figure 1: Essential architectural components for RTC Softswitch /PBX A softswitch or PBX is the brain of a voice telephone system and provides intelligence for establishing maintaining and routing of a voice call within or outside the enterprise Amazon Web Services RealTime Communication on AWS Page 3 by using different components All of the subscribers of the enterprise are required to register with the softswitch to receive or make a call An important functionality of the softswitch is to keep track of each subscriber and how to reach them by using the other components within the voice network Session Border Controller (SBC) A session border controller (SBC) sits at the edge of a voice network and keeps track of all incoming and outgoing traffic (both control and data planes ) One of the key responsibilit ies of an SBC is to protect the voice system from malicious use The SBC can be used to interconnect with session initiation protocol ( SIP) trunks for external connectivity Some SBCs also provide transcoding capabilities for converting CODECS from one format to another Finally most SBCs also provide NAT Traversal capabilities which aids in ensuring calls are established even across firewalled networks PSTN Connectivity Voice o ver IP (VoIP) 
solutions use PSTN Gateways and SIP Trunks to connect with legacy PSTN network s PSTN Gateway The p ublic switched telephone network (PSTN ) Gateway convert s the signaling (between SIP and SS7) and media ( between RTP and time division multiplexing [TDM ] using CODEC transcoding) PSTN Gateways always sit at the edge close to the PSTN network SIP Trunk In a SIP Trunk the enterprise does not terminate its calls onto a TDM (SS7 based) network but rather the flows between enterprise and te lco remain over IP Most of the SIP Trunks are established by using SBCs The enterprise must agree on the predefined security rules from telco such as allowing a certain range of IP addresses ports and so on Media Gateway ( Transcoder) A typical voice solution allows various types of CODECs Some of the common CODECs are G711 µ law for North America G711 A law for outside of North America G729 and G 722 When two devices that are using two different CODECs communicate with each other a media server translates the CODEC flow between the Amazon Web Services RealTime Communication on AWS Page 4 devices In other words a media gateway processes media and ensures that the end devices are able to communicate with each other WebRTC and WebRTC g ateway Web realtime communication (WebRTC ) allows you to establish a call from a web browser or request resources from the backend server by using API The technology is designed with cloud technology in mind and therefore provide s various API s which could be used to establish a call Since not all of the voice solution s (including SIP) support these API s the WebRTC gateway is required to translate API call s into SIP messages and vice versa Figure 2 shows a design pattern for a highly available WebRTC architecture The incoming traffic from WebRTC clients is balanced by an Amazon application load balancer with WebRTC running on EC2 instances that are part of an Auto Scal ing Group Figure 2: A basic topology of an RTC system for voice Another design pattern for SIP and RTP traffic is to use pairs of SBCs on Amazon EC2 in active passive mode across Availability Zones (Figure 3) Here an Elastic IP address can be dynamically moved between instances upon failure where DNS can not be used Amazon Web Services RealTime Communication on AWS Page 5 Figure 3: RTC architecture using Amazon EC2 in a VPC High Availability and Scalability on AWS Most providers of real time communications align with service levels that provide availability from 999% to 99999% Depending on the degree of high availability (HA) that you want you must take increasingly sophisticated measures along the full lifecycle of the application We re commend following these guidelines to achieve a robust degree of high availability : • Design the system to have no single point of failure Use automated monitoring failure detection and f ailover mechanisms for both stateless and stateful components Amazon Web Services RealTime Communication on AWS Page 6 o Single points of failure (SPOF) are commonly eliminated with an N+1 or 2N redundancy configuration where N+1 is achieved via load balancing among active–active nodes and 2N is achieved by a p air of nodes in active– standby configuration o AWS has several methods for achieving HA through both approaches such as through a scalable load balanced cluster or assuming an active–standby pair • Correctly instrument and test system availability • Prep are operating procedures for manual mechanisms to respond to mitigate and recover from the failure This section focus es on how 
to achieve no single point of failure using capabilities available on AWS. Specifically, this section describes a subset of core AWS capabilities and design patterns that allow you to build highly available real-time communication applications on the platform.

Floating IP Pattern for HA Between Active–Standby Stateful Servers

The Floating IP design pattern is a well-known mechanism to achieve automatic failover between an active and standby pair of hardware nodes (media servers). A static secondary virtual IP address is assigned to the active node. Continuous monitoring between the active and standby nodes detects failure. If the active node fails, the monitoring script assigns the virtual IP to the ready standby node, and the standby node takes over the primary active function. In this way, the virtual IP floats between the active and standby node.

Applicability in RTC solutions

It is not always possible to have multiple active instances of the same component in service, such as an active–active cluster of N nodes. In that case, an active–standby configuration provides the best mechanism for HA. For example, the stateful components in an RTC solution, such as the media server, conferencing server, or even an SBC or database server, are well suited for an active–standby setup. An SBC or media server has several long-running sessions or channels active at a given time, and in the case of the active SBC instance failing, the endpoints can reconnect to the standby node without any client-side configuration, due to the floating IP.

Implementation on AWS

You can implement this pattern on AWS using core capabilities in Amazon Elastic Compute Cloud (Amazon EC2): the Amazon EC2 API, Elastic IP addresses, and support on Amazon EC2 for secondary private IP addresses.

1. Launch two EC2 instances to assume the roles of primary and secondary nodes, where the primary is assumed to be in the active state by default.
2. Assign an additional secondary private IP address to the primary EC2 instance.
3. An Elastic IP address, which is similar to a virtual IP (VIP), is associated with the secondary private address. This secondary private address is the address that is used by external endpoints to access the application.
4. Some OS configuration is required so that the secondary IP address is added as an alias to the primary network interface.
5. The application must bind to this Elastic IP address. In the case of Asterisk software, you can configure the binding through advanced Asterisk SIP settings.
6. Run a monitoring script (custom, KeepAlive on Linux, Corosync, and so on) on each node to monitor the state of the peer node. In the event that the current active node fails, the peer detects this failure and invokes the Amazon EC2 API to reassign the secondary private IP address to itself (a minimal sketch of this takeover call follows Figure 4).
7. Therefore, the application that was listening on the VIP associated with the secondary private IP address becomes available to endpoints via the standby node.

Figure 4: Failover between stateful EC2 instances using Elastic IP address
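As a minimal sketch of the takeover call in step 6, the following Python (boto3) snippet shows what the standby node could run when it detects that the active peer has failed. The network interface ID and floating private IP address are placeholders; error handling, retries, and split-brain protection are deliberately omitted.

import boto3

ec2 = boto3.client("ec2")

# Placeholders; a real monitoring script would read these from its configuration,
# instance tags, or instance metadata.
STANDBY_ENI_ID = "eni-0123456789abcdef0"
FLOATING_PRIVATE_IP = "10.0.1.100"   # the secondary private IP the Elastic IP is bound to

def take_over_floating_ip():
    # Called by the standby node when it decides the active peer is down.
    # AllowReassignment lets the address move even though it is still assigned
    # to the failed primary's network interface; the Elastic IP associated with
    # this private address follows it to the standby node.
    ec2.assign_private_ip_addresses(
        NetworkInterfaceId=STANDBY_ENI_ID,
        PrivateIpAddresses=[FLOATING_PRIVATE_IP],
        AllowReassignment=True,
    )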
Benefits

This approach is a reliable, low-budget solution that protects against failures at the EC2 instance, infrastructure, or application level.

Limitations and extensibility

This design pattern is typically limited to within a single Availability Zone. It can be implemented across two Availability Zones, but with a variation: in this case, the floating Elastic IP address is reassociated between the active and standby nodes in different Availability Zones via the reassociate Elastic IP address API. In the failover implementation shown in Figure 4, calls in progress are dropped and endpoints must reconnect. It is possible to extend this implementation with replication of the underlying session data to provide seamless failover of sessions or media continuity as well.

Load Balancing for Scalability and HA with WebRTC and SIP

Load balancing a cluster of active instances, based on predefined rules such as round robin, affinity, latency, and so on, is a design pattern widely popularized by the stateless nature of HTTP requests. In fact, load balancing is a viable option for many RTC application components. The load balancer acts as the reverse proxy or entry point for requests to the desired application, which itself is configured to run on multiple active nodes simultaneously. At any given point in time, the load balancer directs a user request to one of the active nodes in the defined cluster.

Load balancers perform a health check against the nodes in their target cluster and do not send an incoming request to a node that fails the health check. Therefore, a fundamental degree of high availability is achieved by load balancing. Also, because a load balancer performs active and passive health checks against all cluster nodes in sub-second intervals, the time for failover is near instantaneous. The decision on which node to direct to is based on system rules defined in the load balancer, including:

• Round robin
• Session or IP affinity, which ensures that multiple requests within a session or from the same IP are sent to the same node in the cluster
SIP based communications the connections are made over TCP or UDP with the majority of RTC applications using UDP If SIP/TCP is the s ignal protocol of choice then it is feasible to use the Network Load Balancer for fully managed highly available scalable and performan ce load balancing A Network Load Balancer operates at the connection level (Layer 4) routing connections to targets such as Amazon EC2 instances containers and IP addresses based on IP protoco l data Ideal for TCP or UDP traffic load balancing network load balanc ing is capable of handling millions of requests per second while maintaining ultra low latencies It is integrated with other popular AWS services such as AWS Auto Scaling Amazon Elastic Container Service ( Amazon ECS) Amazon Elastic Kubernetes Service (Amazon EKS) and A WS CloudFormation If SIP connections are initiated another option is to use AWS Marketplace commercial offtheshelf software (COTS) The AWS Marketplace offers many products that can handle UDP and other types of layer 4 connection load balancing These COTS typically include support for high availability and are commonly integrated with features Amazon Web Services RealTime Communication on AWS Page 11 such as AWS Auto Scaling to further enhance availability and scalabil ity Figure 6 shows the target topology: Figure 6: SIPbased RTC s calability with AWS Marketplace product Cross Region DNS Based Load Balancing and Failover Amazon Route 53 provi des a global DNS service that can be used as a public or private endpoint for RTC clients to register and connect with media applications With Amazon Route 53 DNS health checks can be configured to route traffic to healthy endpoints or to independently m onitor the health of your application The Amazon Route 53 Traffic Flow feature makes it easy for you to manage traffic globally through a variety of routing types including latency based routing geo DNS geoproximity and weighted round robin—all of whi ch can be combined with DNS Failover to enable a variety of low latency fault tolerant architectures The Amazon Route 53 Traffic Flow simple visual editor allows you to manage how your end users are routed to your application’s endpoints —whether in a sin gle AWS Region or distributed around the globe Amazon Web Services RealTime Communication on AWS Page 12 In the case of global deployments the latency based routing policy in Route 53 is especially useful to direct customers to the nearest point of presence for a media server to improve the quality of service associated with real time media exchanges Note that to enforce a failover to a new DNS address clien t caches must be flushed Also DNS changes may have a lag as they are propagated across global DNS servers You can manage the refresh interval for DNS lookups with t he Time to Live attribute This attribute is configurable at the time of setting up DNS p olicies To reach global users quickly or to meet the requirements of using a single public IP AWS Global Accelerator can also be used for cross region failover AWS Global Accelerator is a networking service that improves availability and performance for applications with both local and global reach AWS Global Accelerator provides static IP addresses that act as a fixed entry point to your application endpoints such as your Application Load Balancers Network Load Balancers or Amazon EC2 instances in single or multiple AWS Regions It uses the AWS global network to optimize the path from your users to your applications improving performance such as the latency of 
your TCP and UDP traffic AWS Global Accelerator continually monitors the health of your application endpoints and automatically redirects traffic to the nearest healthy endpoints in the event of current endpoints turn ing unhealthy For additional security requirements Accelerated Site toSite VPN uses AWS Global Accelerator to improve the performance of VPN connections by intelligently routing traffic through the AWS Global Network and AWS edge locations Amazon Web Services RealTime Communication on AWS Page 13 Figure 7: Interregion high availability design using AWS Global Accelerator or Amazon Route 53 Data Durability and HA with Persistent Storage Most RTC applications rely on persistent storage to store and access data for authentication authorization accounting (session data call detail records etc) operational monitoring and logging In a traditional data center ensuring high availability and durability for the persistent storage components (databases file systems and so on) typically requires heavy lifting via the setup of a SAN RAID design and processes for backup restore and failo ver processing The AWS Cloud greatly simplifies and enhances traditional data center practices around data durability and availability For object storage and file storage AWS services like Amazon Simple Storage Service (Amazon S3) and Amazon Elastic Fil e System (Amazon EFS) provide managed high availability and scalability Amazon S3 has a data durability of 11 nines For transactional data storage customers have the option to take advantage of the fully managed Amazon Relational Database Service (Amazo n RDS) that supports Amazon Aurora PostgreSQL MySQL MariaDB Oracle and Microsoft SQL Server with high availability deployments For the registrar function subscriber profile or accounting Amazon Web Services RealTime Communication on AWS Page 14 records storage (eg CDRs) the Amazon RDS provides a fault tolerant highly available and scalable option Dynamic Scaling with AWS Lambda Amazon Route 53 and AWS Auto Scaling AWS allows the chaining of features and the ability to incorporate custom serverless functions as a service based on infrastructure even ts One such design pattern that has many versatile uses in RTC applications is the combination of auto matic scaling lifecycle hooks with Amazon Cloud Watch Events Amazon Route 53 and AWS Lambda functions AWS Lambda functions can embed any action or logic Figure 8 demonstrate s how these features chained together can enhance system reliability and scalability with automation Figure 8: Auto matic scaling with dynamic u pdates to Amazon Route 53 Highly Available WebRTC with Kinesis Video Streams Amazon Kinesis Video Streams offers realtime media streaming via WebRTC allowing users to c apture process and store media streams for playback analytics and machine learning These streams are highly available scalable and compliant with WebRTC standards Amazon Kinesis Video Streams include a WebRTC signaling Amazon Web Services RealTime Communication on AWS Page 15 endpoint for fast peer discovery and secure connection establi shment It includes managed Session Traversal Utilities for NAT (STUN) and Traversal Using Relays around NAT (TURN) end points for real time exchange of media between peers It also includes a free open source SDK that directly integrates with camera firmw are to enable secure communication with Kinesis Video Streams end points allowing for peer discovery and media streaming Finally it provides client libraries for Android iOS and JavaScript that allow 
WebRTC compliant mobile and web players to securely discover and connect with a camera device for media streaming and two way communication Highly Available SIP Trunking with Amazon Chime Voice Connector Amazon Chime Voice Connector delivers a pay asyougo SIP trunking service that enables companies to m ake and/or receive secure and inexpensive phone calls with their phone systems Amazon Chime Voice Connector is a low cost alternative to service provider SIP trunks or Integrated Services Digital Network (ISDN) Primary Rate Interfaces (PRIs) Customers ha ve the option to enable inbound calling outbound calling or both The service leverages the AWS network to deliver a highly available calling experience across multiple AWS Regions You can stream audio from SIP trunking telephone calls or forwarded SIP based media recording (SIPREC) feeds to Amazon Kinesis Video Streams to gain insights from business calls in real time You can quickly build applications for audio analytics through integration with Amazon Transcribe and other common machine learning lib raries Best Practices from the Field This section aims to summarize the best practices that have been implemented by some of largest and most successful AWS customers that run large real time Session Initiation Protocol (SIP) workloads AWS customers want ing to run their own SIP infrastructure in the public cloud would find these best practices valuable as they can help increase the reliability and resiliency of the system in case of different kinds of failures Although some of these best practices are SI P specific most of them are applicable to any real time communication application running on AWS Create a SIP Overlay AWS has a robust scalable and redundant network backbone that provides connectivity between different Regions When a network event such as a fiber cut degrades an Amazon Web Services RealTime Communication on AWS Page 16 AWS backbone link traffic is quickly failed over to redundant paths using network level routing protocols such as BGP This network level traffic engineering is a black box to AWS customers and most do not even notice these failover events However customers that run real time workloads such as voice high quality video and low latency messaging do sometimes notice these events So how can an AWS customer implement their own traffic engineering on top of what is provide d by AWS at the network level? 
The solution is to deploy SIP infrastructure in many different AWS Regions. As part of its call control features, SIP also provides the ability to route calls through specific SIP proxies.

Figure 9: Using SIP routing to override network routing

In Figure 9, SIP infrastructure (represented by green dots) is running in all four US Regions. The blue lines are a fictional depiction of the AWS backbone. If no SIP routing is implemented, a call originating on the US west coast and destined for the US east coast goes over the backbone link that directly connects the Oregon and Virginia Regions. The diagram shows how a customer might override the network-level routing and route the same call between Oregon and Virginia through California using SIP routing. This type of SIP traffic engineering can be implemented using SIP proxies and media gateways, based on network metrics such as SIP retransmissions and customer-specific business preferences.

Perform Detailed Monitoring

End users of real-time voice and video applications expect the same level of performance as they get from traditional telephony services, so when they experience issues with an application, it hurts the provider's reputation. To be proactive rather than reactive, it is imperative that detailed monitoring be deployed at every part of the system that serves end users.

Figure 10: Using SIPp to monitor VoIP infrastructure

Many open-source tools, such as iPerf, SIPp, and VoIPmonitor, can be used to monitor SIP/RTP traffic. In the preceding example, nodes running SIPp in client and server modes measure SIP metrics, such as successful calls and SIP retransmits, between all four US AWS Regions. These metrics can then be exported to Amazon CloudWatch using a custom script. Using CloudWatch, customers can create alarms on these custom metrics based on a threshold value, and automatic or manual remediation actions can then be taken based on the state of those alarms. An example of a remediation action is changing the SIP routing when SIP retransmits increase. Customers that do not want to allocate the engineering resources needed to develop and maintain a custom monitoring system can choose from many good VoIP monitoring solutions available on the market, such as ThousandEyes.
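To make the custom-metric step concrete, the following sketch shows one way a monitoring script could publish SIPp results to CloudWatch and alarm on them, using the AWS SDK for Python (boto3). The namespace, metric name, dimension, threshold, and SNS topic ARN are illustrative placeholders rather than values defined by this whitepaper; adapt them to your own monitoring conventions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

# Publish one interval's worth of SIPp results as a custom metric.
# "Custom/SIP", "SipRetransmits", and the Route dimension are hypothetical names.
cloudwatch.put_metric_data(
    Namespace="Custom/SIP",
    MetricData=[{
        "MetricName": "SipRetransmits",
        "Dimensions": [{"Name": "Route", "Value": "us-west-2_to_us-east-1"}],
        "Value": 42,  # count reported by SIPp for this measurement interval
        "Unit": "Count",
    }],
)

# Alarm when retransmits on this route stay above the threshold for three
# consecutive 5-minute periods; the SNS topic ARN is a placeholder.
cloudwatch.put_metric_alarm(
    AlarmName="sip-retransmits-usw2-to-use1",
    Namespace="Custom/SIP",
    MetricName="SipRetransmits",
    Dimensions=[{"Name": "Route", "Value": "us-west-2_to_us-east-1"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=3,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-west-2:111122223333:sip-ops-alerts"],
)
```

The alarm action could simply notify an operations topic, or it could trigger an automated change to SIP routing as described above.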
Use DNS for Load Balancing and Floating IPs for Failover

IP telephony clients that support DNS SRV records can efficiently use the redundancy built into the infrastructure by load balancing clients across different SBCs/PBXs.

Figure 11: Using DNS SRV records to load balance SIP clients

Figure 11 shows how customers can use SRV records to load balance SIP traffic. Any IP telephony client that supports the SRV standard looks for the _sip._<transport protocol> prefix in an SRV-type DNS record. In the example, the answer section from DNS contains both of the PBXs running in different AWS Availability Zones. In addition to the endpoint URIs, each SRV record contains three additional pieces of information:
• The first number is the priority (1 in the example). A lower priority is preferred over a higher one.
• The second number is the weight (10 in the example).
• The third number is the port to be used (5060).

Because the priority is the same (1) for both PBX servers, the clients use the weight to load balance between the two PBXs. In this case, since the weights are also equal, SIP traffic should be load balanced equally between the two PBXs.

DNS can be a good solution for client load balancing, but what about implementing failover by changing or updating DNS 'A' records? This method is discouraged because of inconsistent DNS caching behavior in clients and intermediate nodes. A better approach for intra-AZ failover between a cluster of SIP nodes is EC2 IP reassignment, where an impaired host's IP address is instantly reassigned to a healthy host by using the EC2 API. Paired with a detailed monitoring and health check solution, IP reassignment of a failed node ensures that traffic is moved to a healthy host quickly, minimizing end-user disruption.

Use Multiple Availability Zones

Each AWS Region is subdivided into separate Availability Zones. Each Availability Zone has its own power, cooling, and network connectivity, and thus forms an isolated failure domain. Within the constructs of AWS, customers are always encouraged to run their workloads in more than one Availability Zone. This ensures that customer applications can withstand even a complete Availability Zone failure, a very rare event in itself. This recommendation stands for real-time SIP infrastructure as well.

Figure 12: Handling Availability Zone failure

Assume that a catastrophic event (such as a Category 5 hurricane) causes a complete Availability Zone outage in the us-east-1 Region. With the infrastructure running as shown in the diagram, all SIP clients that were originally registered with the nodes in the failed Availability Zone should re-register with the SIP nodes running in Availability Zone #2. (Test this behavior with your SIP clients/phones to make sure it is supported.) Although the active SIP calls at the time of the Availability Zone outage are lost, any new calls are routed through Availability Zone #2.

To summarize, DNS SRV records should point the client to multiple 'A' records, one in each Availability Zone. Each of those 'A' records should in turn point to multiple IP addresses of SBCs/PBXs in that Availability Zone, providing both intra- and inter-AZ resiliency. Both intra- and inter-AZ failover can be implemented by using IP reassignment if the IPs are public. Private IPs, however, cannot be reassigned across Availability Zones. If a customer is using private IP addressing, they have to rely on the SIP clients re-registering with the backup SBC/PBX for inter-AZ failover.
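As an illustration of the floating-IP pattern described above, the sketch below reassociates an Elastic IP with a healthy SIP node using boto3. The allocation and instance IDs are placeholders; in practice this call would be driven by your health-check or monitoring system rather than run by hand.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def fail_over_elastic_ip(allocation_id: str, healthy_instance_id: str) -> str:
    """Re-point an Elastic IP at a healthy SIP node.

    AllowReassociation lets the address move even though it is still
    associated with the impaired host. Returns the new association ID.
    """
    response = ec2.associate_address(
        AllocationId=allocation_id,
        InstanceId=healthy_instance_id,
        AllowReassociation=True,
    )
    return response["AssociationId"]

# Placeholder IDs; typically invoked automatically once a node fails its
# SIP/RTP health checks.
fail_over_elastic_ip("eipalloc-0123456789abcdef0", "i-0abc1234def567890")
```

Because Elastic IPs are public addresses, this approach supports both intra- and inter-AZ failover; with private addressing you would rely on client re-registration, as noted above.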
Keep Traffic within One Availability Zone and Use EC2 Placement Groups

Also known as Availability Zone affinity, this best practice also applies to the rare event of a complete Availability Zone failure. It is recommended that you eliminate cross-AZ traffic, so that any SIP or RTP traffic that enters one Availability Zone remains in that Availability Zone until it exits the Region.

Figure 13: Availability Zone affinity (at most 50% of active calls are lost)

Figure 13 shows a simplified architecture that uses Availability Zone affinity. The comparative advantage of this approach becomes clear if you account for the effects of a complete Availability Zone outage. As depicted in the diagram, if Availability Zone #2 is lost, at most 50% of active calls are affected (assuming equal load balancing between Availability Zones). Had Availability Zone affinity not been implemented, some calls would flow between Availability Zones in one Region, and a failure would most likely affect more than 50% of active calls.

Furthermore, to minimize latency, we also recommend that you consider using EC2 placement groups within each Availability Zone. Instances launched within the same EC2 placement group have higher bandwidth and reduced latency, because EC2 ensures network proximity of these instances relative to each other.

Use Enhanced Networking EC2 Instance Types

Choosing the right instance type on Amazon EC2 ensures system reliability as well as efficient use of infrastructure. EC2 provides a wide selection of instance types optimized for different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity, and give you the flexibility to choose the appropriate mix of resources for your applications. Enhanced networking instance types ensure that the SIP workloads running on them have access to consistent bandwidth and comparatively lower aggregate latency. A recent addition to Amazon EC2 is the Elastic Network Adapter (ENA), which provides up to 100 Gbps of bandwidth. The latest catalog of EC2 instance types and associated features can be found on the EC2 instance types page. For most customers, the latest generation of compute-optimized instances should provide the best value for the cost. For example, the C5n supports the Elastic Network Adapter with bandwidth up to 100 Gbps and millions of packets per second (PPS). Most real-time applications would also benefit from the Intel Data Plane Development Kit (DPDK), which can greatly boost network packet processing. However, it is always a best practice to benchmark the various EC2 instance types against your requirements to see which instance type works best for you. Benchmarking also enables you to find other configuration parameters, such as the maximum number of calls a certain instance type can process at a time.
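The sketch below shows one way to combine these two recommendations with boto3: create a cluster placement group in an Availability Zone and launch ENA-enabled C5n instances into it. The AMI ID, subnet, key pair, and instance size are placeholders, and the right instance type for your workload should come from your own benchmarking.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement groups pack instances for low latency and high per-flow
# bandwidth within a single Availability Zone.
ec2.create_placement_group(GroupName="sip-media-az1", Strategy="cluster")

# Launch ENA-enabled instances into the placement group. All IDs and the
# instance size below are placeholders for your own values.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.9xlarge",
    MinCount=2,
    MaxCount=2,
    KeyName="sip-media-key",
    SubnetId="subnet-0123456789abcdef0",  # subnet in the target Availability Zone
    Placement={"GroupName": "sip-media-az1"},
)
```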
Security Considerations

RTC application components typically run directly on internet-facing Amazon EC2 instances and, in addition to TCP flows, use protocols like UDP and SIP. In these cases, AWS Shield Standard protects Amazon EC2 instances from common infrastructure-layer (Layer 3 and 4) DDoS attacks, such as UDP reflection, DNS reflection, NTP reflection, and SSDP reflection attacks. AWS Shield Standard uses techniques such as priority-based traffic shaping that are automatically engaged when a well-defined DDoS attack signature is detected. AWS also provides advanced protection against large and sophisticated DDoS attacks for these applications through AWS Shield Advanced on Elastic IP addresses. AWS Shield Advanced provides enhanced DDoS detection that automatically detects the type of AWS resource and size of EC2 instance and applies appropriate predefined mitigations, with protections against SYN or UDP floods. With AWS Shield Advanced, customers can also create their own custom mitigation profiles by engaging the 24x7 AWS DDoS Response Team (DRT). AWS Shield Advanced also ensures that, during a DDoS attack, all of your Amazon VPC network access control lists (ACLs) are automatically enforced at the border of the AWS network, providing you with access to additional bandwidth and scrubbing capacity to mitigate large volumetric DDoS attacks.

Conclusion

Real-time communication (RTC) workloads can be deployed on Amazon Web Services (AWS) to attain scalability, elasticity, and high availability while meeting key requirements. Today, several customers are using AWS, its partners, and open-source solutions to run RTC workloads with reduced cost and faster agility, as well as a reduced global footprint. The reference architectures and best practices provided in this whitepaper can help customers successfully set up RTC workloads on AWS and fine-tune those solutions to meet end-user requirements while optimizing for the cloud.

Contributors

The following individuals and organizations contributed to this document:
• Ahmad Khan, Senior Solutions Architect, Amazon Web Services
• Tipu Qureshi, Principal Engineer, AWS Support, Amazon Web Services
• Hasan Khan, Senior Technical Account Manager, Amazon Web Services
• Shoma Chakravarty, WW Technical Leader, Telecom, Amazon Web Services

Document Revisions
• February 2020: Updated for latest services and features
• October 2018: First publication
|
General
|
consultant
|
Best Practices
|
Regulation_Systems_Compliance_and_Integrity_Considerations_for_the_AWS_Cloud
|
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Regulation Systems Compliance and Integrity Considerations for the AWS Cloud November 2017 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers © 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current p roduct offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 Security and Shared Responsibility 1 Governance and Monitoring 2 AWS Regions 2 Business Continuity and Disaster Recovery 3 Conclusion 3 Reg SCI Workbook 4 Document Revisions 16 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract This document provides information to assist SCI entities with running applications and services on the AWS cloud This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 1 Introduction The US Securities and Exchange Commission adopted Regulation Systems Compliance and Integrity (Reg SCI) to strengthen the technology infrastructure of the US securities markets Reg SCI applies to entities that operate the core components of the securities markets including national securities exchanges clearing agencies securities information processors and alternative trading systems These SCI entities a re required to adopt an IT governance framework and system controls that ensure an adequate level of integrity availability resiliency capacity and security for systems that are necessary to maintain a fair and orderly securities market SCI entities m ust monitor systems for disruptions intrusions and compliance events and report these instances to the SEC and impacted market participants You should review the full text of Reg SCI here available here: https://wwwsecgov/rules/final/2014/34 73639pdf This document is not legal advice Security and Shared Responsibility Cloud security is a shared responsibility While AWS manages security of the cloud by ensuring that its infrastructure complies with global and regional regulatory requirements and best practices security in the cloud is the responsibility of the customer What this means is that customers retain control of the security program they choose to implement to protect their own content platform applications systems and networks no differently than they would for applications in an on site 
datacenter In order to help customers establish operate and leverage the AWS security control environment AWS has devel oped a security assurance program that uses global privacy and data protection best practices These security protections and control processes are independently validated by multiple third party independent assessments Customers can review and download reports and details about more than 2500 security controls by using AWS Artifact the automated compliance reporting tool available in the AWS Management Console The AWS Artifact portal provides on demand access to AWS’ security and compliance documents including Service Organization Control (SOC) reports Payment Card Industry (PCI) reports AWS MAS TRM Workbook and certifications from accreditation bodies across geographies and compliance verticals This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 2 Governance and Monitoring While SCI entities are ultimately responsible for establishing a governance framework and monitoring their own environments AWS provides many tools to help customers efficiently achieve compliance For example AWS Config helps customers continuously monitor and record their AWS r esource configurations and automate the evaluation of recorded configurations against desired configurations Amazon CloudWatch allows customers to collect and track metrics collect and monitor log files set alarms and automatically react to changes in their AWS resources Customers use Amazon CloudWatch to gain system wide visibility into resource utilization application performance and operational health AWS provides up totheminute information on the AWS services that customers use to power thei r applications via the publicly available Service Health Dashboard Customers can configure a Personal Health Dashboard to receive a personalized view of the performance and availability of the AWS services underlying their resources and applications The dashboard displays relevant and timely information to help customers manage events in progress and it provides proactive notification to help customers plan for scheduled activities With Personal Health Dashboard changes in the health of AWS resources automatically trigger alerts providing event visibility and guidance to help quickly diagnose and resolve issues Customers can use these insights to react and keep their applications running smoothly AWS Regions The AWS Cloud infrastructure is built around Regions and Availability Zones (“AZs”) A Region is a physical location in the world where we have multiple Availability Zones Availability Zones consist of one or more discrete data centers each with redundant power networking and connectivity hous ed in separate facilities These Availability Zones offer customers the ability to operate production applications and databases which are more highly available fault tolerant and scalable than would be possible from a single data center The AWS Cloud operates 42 Availability Zones within 16 geographic Regions around the world For current information on AWS Regions and AZs see https://awsamazoncom/about aws/global infrastructure/ This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 3 Business Continuity and Disaster Recovery SCI 
entities must implement policies and procedures to ensure that their applicable systems have high levels of resiliency and availability Customers utilize AWS to enable faster disaster recovery of their IT sy stems without incurring the infrastructure expense of a second physical site With data centers in regions all around the world AWS provides a set of cloud based disaster recovery services that enable rapid recovery of customers’ IT infrastructure and data The AWS cloud supports many popular disaster recovery architectures from “pilot light” environments that are ready to scale up at a moment’s notice to “hot standby” environments that enable rapid failover Conclusion Proper Reg SCI implementation depends on the customer’s ability to leverage the resilient secure and elastic solutions that AWS provides Customers can decrease their operational risk and increase the security availability and resiliency of their systems by running well architected applications on the AWS Cloud Customers can option ally enroll in an Enterprise Agreement with AWS which customers can use to tailor agreements that best suit their needs For additional information on Enterprise Agreements please contact a sales representative This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 4 Reg SCI Workbook The Reg SCI Workbook provides additional information to help customers map their alignment to Reg SCI This is not legal or compliance advice Customers should consult with their legal and compliance teams Requirement Reference Requirement Implementation Implementation Considerations Obligations related to policies and procedures of SCI entities § 2421001(a)(1) Each SCI entity shall establish maintain and enforce written policies and procedures reasonably designed to ensure that its SCI systems and for purposes of security standards indirect SCI systems have levels of capacity integrity resiliency availability and security adequate to maintain the SCI entity’s operational capability and promote the maintenance of fair and orderly markets Policies and procedures required by this section shall include at a minimum: Shared Responsibility AWS has established an information security management program with designated roles and responsibilities that are appropriately aligned within the organization AWS management reviews and evaluates the risks identified in the risk management program at least annually Detailed information is provided in the AWS Security Whitepaper https://d0awsstaticcom/whitepapers/Security/AWS_Securi ty_Whitepaperpdf Customers are responsible for properly implementing contingency planning training and testing for their systems hosted on AWS AWS provides customers with the capability to implement a robust continuity plan including the utilization of frequent server instance back ups data redundancy replication and the flexibility to place instances and store data within multiple geographic regions as well as across multiple Availability Zones within each region Each Availability Zone is designed as an independent failure zone In the case of failure au tomated processes move customer data traffic away from the affected area Each Availability Zone is designed as an independent failure zone This means that Availability Zones are typically physically separated within a metropolitan region and are in different flood plains Customers utilize AWS to enable 
faster disaster recovery of their critical IT systems without incurring the This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 5 Requirement Reference Requirement Implementation Implementation Considerations infrastructure expense of a second physical site The AWS cloud supports many popular disaster recovery (DR) architectures from “pilot light” environments that are ready to scale up at a moment’s notice to “hot standby” environments that enable rapid failover To learn more about AWS Disaster Recovery see http://mediaamazonwebservicescom/AWS_Disaster_ Recoverypdf § 2421001 (a)(2)(i) The establishment of reasonable current and future technological infrastructure capacity planning estimates Shared Responsibility AWS continuously monitors service usage to project infrastructure needs to support availability commitments and requirements AWS maintains a capacity planning model to assess infrastructure usage and demands at least monthly and usually more frequently (eg weekly) In addition the AWS capacity planning model supports the planning of future demands to acquire and implement additional resources based upon current resources and forecasted requirements Customers are responsible for capacity planning for their application In addition to ondem and capacity AWS offers Reserved Instances (RI); RIs can provide a capacity reservation offering additional confidence in your ability to launch the number of instances you have reserved when you need them § 2421001 (a)(2)(ii) Periodic capacity stress tests of such systems to determine their ability to process transactions in an accurate timely and efficient manner Shared Responsibility Customers should consider using Elastic Load Balancing (ELB) ELB automatically distributes incoming application traffic across multiple Amazon EC2 instances It enables you to achieve fault tolerance in your applications seamlessly providing the required amount of load balancing capacity needed to route application traffic This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 6 Requirement Reference Requirement Implementation Implementation Considerations § 2421001 (a)(2)(iii) A program to review and keep current systems development and testing methodology for such systems Shared Responsibility AWS employs a shared responsibility model for data ownership and security AWS operates manages and controls the infrastructure components from t he host operating system and virtualization layer down to the physical security of the facilities in which the services operate AWS Services in production operations are managed in a manner that preserves their confidentiality integrity and availability AWS has implemented secure software development procedures that are followed to ensure appropriate security controls are incorporated into the application design As part of the application design process new applications must participate in an AWS Secur ity review including registering the application initiating the application risk classification participating in the architecture review and threat modeling performing code review and performing a penetration test Customers assume responsibility and m anagement of the guest operating system (including updates and security 
patches) other associated application software as well as the configuration of the AWS provided security group firewalls and other security change management and logging features § 2421001 (a)(2)(iv) Regular reviews and testing as applicable of such systems including backup systems to identify vulnerabilities pertaining to internal and external threats physical hazards and natural or manmade disasters Shared Responsibility AWS tests the Business Continuity plan and its associated procedures at least annually to ensure effectiveness of the plan and the organization readiness to execute the plan Testing consists of engagement drills that execute on activities that would be performed in an actual outage AWS documents the results including lessons learned and any corrective actions that were completed As previously stated customers are responsible for properly implementing contingency planning training and testing for their systems hosted on AWS Customers can request permission to conduct penetration testing to or This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 7 Requirement Reference Requirement Implementation Implementation Considerations originating from any AWS resources as long as they are limited to the customer’s instances and do not violate the AWS Acceptable Use Policy Penetration tests s hould include customer IP addresses and not AWS endpoints AWS endpoints are tested as part of AWS compliance vulnerability scans Advance approval for these types of scans can be initiated by submitting a request using the AWS Vulnerability / Penetration Testing Request Form found here: https://awsamazoncom/security/penetration testing/ § 2421001 (a)(2)(v) Business continuity and disaster recovery plans that include maintaining backup and recovery capabilities sufficiently resilient and geographically diverse and that are reasonably designed to achieve next business day resumption of trading and twohour resumption of critical SCI systems following a widescale disruption; Shared Responsibility Learn how to architect DR in the AWS Cloud based on your specific requirements https://mediaamazonwebservicescom/AWS_Disaster_Re coverypdf Also consider the use of ELB health checks on their target EC2 instances and detect whether or not an instance and the app running on it are healthy combined with Auto Scaling groups to identify failing instances and cycle them out automatically with limited downtime" § 2421001 (a)(2)(vi) Standards that result in such systems being designed developed tested maintained operated and surveilled in a manner that facilitates the successful collection processing and dissemination of market data; and Customer Responsibility This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 8 Requirement Reference Requirement Implementation Implementation Considerations § 2421001 (a)(2)(vii) Monitoring of such systems to identify potential SCI events Shared Responsibility One way to monitor your systems includes the use of Amazon CloudWatch a monitoring service for AWS cloud resources and the applications you run on AWS You can use Amazon CloudWatch to collect and track metrics collect and monitor log files set alarms and automatically react to changes in your AWS resources Amazon 
CloudWatch can monitor AWS resources such as Amazon EC2 instances Amazon DynamoDB tables and Amazon RDS DB instances as well as custom metrics generated by your applications and services and any log files your applications generate You can use Amazon CloudWatch to gain system wide visibility into resource utilization application performance and operational health You can use these insights to react and keep your application running smoothly Visit here to learn more: https://awsamazoncom/cloudwatch/ § 2421001 (a)(3) Each SCI entity shall periodically review the effectiveness of the policies and procedures required by this paragraph (a) and take prompt action to remedy deficiencies in such policies and procedures Customer Responsibility This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 9 Requirement Reference Requirement Implementation Implementation Considerations § 2421001 (a)(4) For purposes of this paragraph (a) such policies and procedures shall be deemed to be reasonably designed if they are consistent with current SCI industry standards which shall be comprised of information technology practices that are widely available to information technology professionals in the financial sector and issued by an authoritative body that is a US governmental entity or agency association of US governmental entities or agencies or widely recognized organization Compliance with such current SCI industry standards however shall not be the exclusive means to comply with the requirements of this paragraph (a) Customer Responsibility § 2421001 (b) Each SCI entity shall establish maintain and enforce written policies and procedures reasonably designed to ensure that its SCI systems operate in a manner that complies with the Act and the rules and regulations thereunder and the entity’s rules and governing documents as applicable Customer Responsibility § 2421001 (c) Each SCI entity shall establish maintain and enforce reasonably designed written policies and procedures that include the criteria for identifying responsible SCI personnel the designation and documentation of responsible SCI personnel and escalation procedures to quickly inform responsible SCI personnel of potential SCI events Customer Responsibility This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 10 Requirement Reference Requirement Implementation Implementation Considerations Obligations related to SCI event § 2421002 (a) Upon any responsible SCI personnel having a reasonable basis to conclude that an SCI event has occurred each SCI entity shall begin to take appropriate corrective action which shall include at a minimum mitigating potential harm to investors and market integrity resulting from the SCI event and devoting adequate resources to remedy the SCI event as soon as reasonably practicable Shared Responsibility The AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you The Service Health Dashboard is publicly available and displays the general status of AWS services Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources The dashboard displays relevant and timely 
information to help you manage events in progress and provides proactive notification to help you plan for scheduled activities With Personal Health Dashboard alerts are automatically triggered by changes in the health of AWS resources giving you event visibility and guidance to help quickly diagnose and resolve issues § 2421002(b) Commission notification and recordkeeping of SCI events Each SCI entity shall (1) notify the Commission of such SCI event immediately (2) Within 24 hours of any responsible SCI personnel having a reasonable basis to conclude that the SCI event has occurred submit a written notification pertaining to such SCI event to the Commission which shall be made on a good faith (3) Until such time as the SCI event is resolved and the SCI entity’s investigation of the SCI event is closed provide updates pertaining to such SCI event to the Commission on a regular basis or at such frequency as reasonably requested by a representative of the Commission (4) Continue to communicate action with the Commission until a final report is issued (5) Make keep and preserve records relating to all such SCI events Customer Responsibility Amazon Glacier is a secure durable and extremely low cost cloud storage service for data archiving and long term backup Customers can reliably store large or small amounts of data for as little as $0004 per gigabyte per month a significant savings com pared to onpremises solutions To keep costs low yet suitable for varying retrieval needs Amazon Glacier provides three options for access to archives from a few minutes to several hours Learn more here https://awsamazoncom/glacier/details/#Vault_Lock This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 11 Requirement Reference Requirement Implementation Implementation Considerations § 2421002 (c) Promptly after any responsible SCI personnel has a reasonable basis to conclude that an SCI event that is a systems disruption or systems compliance issue has occurred disseminate follow the requirements setforth within for dissemination of SCI events Customer Responsibility Obligations r elated to systems changes; SCI review § 2421003 (a) Within 30 calendar days after the end of each calendar quarter each SCI entity submit to the Commission a report describing completed ongoing and planned material changes to its SCI systems and the security of indirect SCI systems during the prior current and subsequent calendar quarters including the dates or expected dates of commencement and completion An SCI entity shall establish reasonable written criteria for identifying a change to its SCI systems and the security of indirect SCI systems as material and report such changes in accordance with such criteria Custom er Responsibility Customers can use the AWS Service Health Dashboard for detailed information on service disruptions § 2421003 (b) Each SCI entity shall: conduct an SCI review of the SCI entity’s compliance with Regulation SCI not less than once each calendar year; provided however that: (i) Penetration test reviews of the network firewalls and production systems shall be conducted at a frequency of not less than once every three years; and (ii) Assessments of SCI systems directly supporting market regulation or market surveillance shall be conducted at a frequency based upon the risk assessment conducted as part of the SCI review but in no case less than once 
every three years; and (2) Submit a report of the SCI review required by paragraph (b)(1) of this section to senior management of the SCI entity for review no more Shared Responsibility AWS has established a formal audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment Internal and external audits are planned and performed according to the documented audit scheduled to review the continued performance of AWS against standards based criteria and to identify general improvement opportunit ies Compliance reports from these assessments are made available to customers to enable them to evaluate AWS The AWS Compliance reports identify the scope of AWS services and regions assessed as well the assessor’s attestation of compliance A vendor or supplier evaluation This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 12 Requirement Reference Requirement Implementation Implementation Considerations than 30 calendar days after completion of such SCI review; and (3) Submit to the Commission and to the board of directors of the SCI entity or the equivalent of such board a report of the SCI review required by paragraph (b)(1) of this section together with any response by senior management within 60 calendar days after its submission to senior management of the SCI entity can be performed by leveraging these reports and certifications Included in these audit reports is Vulnerability Management The AWS Security team notifies and coordinates with the appropriate Service Teams when conducting security related activities within the system boundary Activities include vulnerability scanning contingency testing and incident response exercises AWS performs external vulnerability assessments at least quarterly and identified issues are investigated and tr acked to resolution Additionally AWS performs unannounced penetration tests by engaging independent thirdparties to probe the defenses and device configuration settings within the system AWS Security teams also subscribe to newsfeeds for applicable vendor flaws and proactively monitor vendors’ websites and other relevant outlets for new patches AWS customers also have the ability to report issues to AWS via the AWS Vulnerability Reporting website at: http://awsamazoncom/security/vulnerability reporting/ SCI entity business continuity and disaster recovery plans testing requirements for members or participants § 2421004 With respect to an SCI entity’s business continuity and disaster recovery plans including its backup systems each SCI entity shall: (a) Establish standards for the designation of those members or participants that the SCI entity reasonably determines are taken as a whole the minimum necessary for the maintenance of fair and orderly markets in the event of the activation of such plans; (b) Designate members or participants pursuant to the standards established in paragraph (a) of this section and require participation by such designated members or participants in scheduled functional and performance testing of the operation of such Customer Responsibility This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 13 Requirement 
Reference Requirement Implementation Implementation Considerations plans in the manner and frequency specified by the SCI entity provided that such frequency shall not be less than once every 12 months; and (c) Coordinate the testing of such plans on an industry or sector wide basis with other SCI entities Recordkeeping requirements related to compliance with Regulation SCI § 2421005 (a) An SCI SRO shall make keep and preserve all documents relating to its compliance with Regulation SCI as prescribed in §24017a1 of this chapterAn SCI entity that is not an SCI SRO shall: (1) Make keep and preserve at least one copy of all documents including correspondenc e memoranda papers books notices accounts and other such records relating to its compliance with Regulation SCI including but not limited to records relating to any changes to its SCI systems and indirect SCI systems; (2) Keep all such documents for a period of not less than five years the first two years in a place that is readily accessible to the Commission or its representatives for inspection and examination; and Customer Responsibility Amazon Glacier is a secure durable and extremely low cost cloud storage service for data archiving and long term backup Customers can reliably store large or small amounts of data for as little as $0004 per gigabyte per month a significant savings com pared to onpremises solutions To keep costs low yet suitable for varying retrieval needs Amazon Glacier provides three options for access to archives from a few minutes to several hours Learn more here https://awsamazoncom/glacier/details/#Vault_Lock Electronic filing and submission This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 14 Requirement Reference Requirement Implementation Implementation Considerations § 2421006 (a) Except with respect to notifications to the Commission made pursuant to § 2421002(b)(1) or updates to the Commission made pursuant to paragraph § 2421002(b)(3) any notification review description analysis or report to the Commission required to be submitted under Regulation SCI shall be filed electronically on Form SCI (§2491900 of this chapter) include all information as prescribed in Form SCI and the instructions thereto and contain an electronic signature; and (b) The signatory to an electronically filed Form SCI shall manually sign a signature page or document in the manner prescribed by Form SCI authenticating acknowledging or otherwise adopting his or her signature that appears in typed form within the electronic filing Such document shall be executed before or at the time Form SCI is electronically filed and shall be retained by the SCI entity in accordance with § 2421005 Customer Responsibility Requirements for service bureaus § 2421007 If records required to be filed or kept by an SCI entity under Regulation SCI are prepared or maintained by a service bureau or other recordkeeping service on behalf of the SCI entity the SCI entity shall ensure that the records are available for review by the Commission and its representatives by submitting a written undertaking in a form acceptable to the Commission by such service bureau or other recordkeeping service signed by a duly authorized person at such service bureau or other recordkeeping service Such a written undertaking shall include an agreement by the service bureau to permit the Commission and its representatives to 
examine such records at any time or from time Customer Responsibility This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 15 Requirement Reference Requirement Implementation Implementation Considerations to time during business hours and to promptly furnish to the Commission and its representatives true correct and current electronic files in a form acceptable to the Commission or its representatives or hard copies of any or all or any part of such records upon request periodically or continuously and in any case within t he same time periods as would apply to the SCI entity for such records The preparation or maintenance of records by a service bureau or other recordkeeping service shall not relieve an SCI entity from its obligation to prepare maintain and provide the Commission and its representatives access to such records This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Reg SCI Considerations for the AWS Cloud Page 16 Document Revisions Date Description November 2017 First publication
|
General
|
consultant
|
Best Practices
|
Right_Sizing_Provisioning_Instances_to_Match_Workloads
|
Right Sizing: Provisioning Instances to Match Workloads

January 2020

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Right Size Before Migrating
Right Sizing Is an Ongoing Process
Overview of Amazon EC2 and Amazon RDS Instance Families
Identifying Opportunities to Right Size
Tools for Right Sizing
Tips for Developing Your Own Right Sizing Tools
Tips for Right Sizing
Right Size Using Performance Data
Right Size Based on Usage Needs
Right Size by Turning Off Idle Instances
Right Size by Selecting the Right Instance Family
Right Size Your Database Instances
Conclusion
Contributors
Document Revisions

Abstract

This is the seventh in a series of whitepapers designed to support your cloud journey. This paper seeks to empower you to maximize value from your investments, improve forecasting accuracy and cost predictability, create a culture of ownership and cost transparency, and continuously measure your optimization status. This paper discusses how to provision instances to match your workload performance and capacity requirements in order to optimize costs.

Introduction

Right sizing is the process of matching instance types and sizes to your workload performance and capacity requirements at the lowest possible cost. It is also the process of looking at deployed instances and identifying opportunities to eliminate or downsize them without compromising capacity or other requirements, which results in lower costs. Right sizing is a key mechanism for optimizing AWS costs, but it is often ignored by organizations when they first move to the AWS Cloud. They lift and shift their environments and expect to right size later. Speed and performance are often prioritized over cost, which results in oversized instances and a lot of wasted spend on unused resources.

Right Size Before Migrating

One reason for the waste is the mindset to overprovision that many IT professionals bring with them when they build their cloud infrastructure. Historically, IT departments have had to provision for peak demand. However, cloud environments minimize costs because capacity can be provisioned based on average usage rather than peak usage. When you learn how to right size, you can save up to 70 percent on your monthly bill. The key to right sizing is to understand precisely your organization's usage needs and patterns, and to know how to take advantage of the elasticity of the AWS Cloud to respond to those needs. By right sizing before a migration, you can significantly reduce your infrastructure costs. If you skip right sizing to save time, your migration might be faster, but you will end up with higher cloud infrastructure spend for a potentially long time.
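One lightweight way to compare average and peak usage before (or after) a migration is to pull CloudWatch statistics for a candidate instance, as in the boto3 sketch below. The instance ID is a placeholder, and a real right-sizing report would iterate over the whole fleet and include memory or custom metrics where available.

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Average vs. peak CPU for one instance over the last 14 days (placeholder ID).
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=3600,  # one data point per hour
    Statistics=["Average", "Maximum"],
)

datapoints = stats["Datapoints"]
if datapoints:
    average_cpu = sum(dp["Average"] for dp in datapoints) / len(datapoints)
    peak_cpu = max(dp["Maximum"] for dp in datapoints)
    print(f"14-day average CPU: {average_cpu:.1f}%, peak CPU: {peak_cpu:.1f}%")
```

A large gap between average and peak utilization is a signal that a smaller instance, combined with the elasticity mechanisms discussed in this series, may serve the workload at lower cost.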
cost optimization righ t sizing must become an ongoing process within your organization It’s important to right size when you first consider moving to the cloud and calculate total cost of ownership but it’s equally Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 2 important to right size periodically once you’re in the cloud to ensure ongoing costperformance optimization Why is it necessary to right size continually? Even if you right size workloads initially performance and capacity requirements can change over time which can result in underused or idle resources Additi onally new projects and workloads require additional cloud resources and overprovisioning is the likely outcome if there is no process in place to support right sizing and other cost optimization efforts You should r ight siz e your workloads at least once a month to control costs You can make ri ght sizing a smooth process by: • Having each team set up a right sizing schedule and then re port the savings to management • Monitoring costs closely using AWS cost and reporting tools such as Cost Explorer budgets and detailed billing reports in the Billi ng and Cost Management console • Enforcing tagging for all instances so that you can quickly identify attributes such as the instance own er application and environment (deve lopment/testing or production) • Understanding how to right size We first describe the types of instances that AWS offers and then discuss key considerations for right sizing your instances Overview of Amazon EC2 and Amazon RDS Instance Families Picking an Amazon Elastic Compute Cloud (Amazon EC2) instance for a given workload means finding the instance family that most closely matches the CPU and m emory needs of your workload Amazon EC2 provides a wide selection of instances which gives you lots of flexibility to right size your compute resources to match capacity needs at the lowest cost There are five families of EC2 instances with different op tions for CPU memory and network resources: Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 3 • General purpose (includes T2 M3 and M4 instance types) – T2 instances are a very low cost option that provide a small amount of CPU resources that can be increased in short bursts when additional cycles are available They are well suited for lower throughput applications such as administrative applic ations or low traffic websites M3 and M4 instances provide a balance of CPU memory and network resources and are ideal for running small and midsize database s more memory intensive data processing tasks caching fleets and backend servers • Compute optimized (includes the C3 and C4 instance types ) – Have a higher ratio of virtual CPUs to memory than the other families and the lowest cost per virtual CPU of all the EC2 instance types Consider compute optimized instances first if you are running CPU bound scale out applications such as frontend fleets for high traffic websites on demand batch processing distributed analytics web servers video encoding a nd high performance scienc e and engineering applications • Memory optimized (includes the X1 R3 and R4 instance types ) – Designed for memory intensive applications these instances have the lowest cost per GiB of RAM of all EC2 instance types Use these instances if your application is memory bound • Storage optimized (includes the I3 and D2 instance types ) – Optimized to deliver tens of thousands of low latency random input/output ( I/O) operations 
per second (IOPS) to applications Storage optimize d instances are best for large deployments of NoSQL databases I3 instances are designed for I/O intensive workloads and equipped with super efficient NVMe SSD storage These instances can deliver up to 33 million IOPS in 4 KB blocks and up to 16 GB/secon d of sequential disk throughput D2 or dense storage instances are designed for workloads that require high sequential read and write access to very large data sets such as Hadoop distributed computing massively parallel processing data warehousing and logprocessing applications Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 4 • Accelerated computing (includes the P2 G3 and F1 instance types ) – Provide access to hardware based compute accelerators such as graphics processing units (GPUs) or field programmable gate arrays (FPGAs) Accelerated computin g instances enable more parallelism for higher throughput on compute intensive workloads Amazon Relational Database Service (Amazon RDS) database instances are similar to Amazon EC2 instances in that there are d ifferent families to suit different workloads These database instance families are optimized for memory performance or I/O: • Standard performance (includes the M3 and M4 instance types ) – Designed for general purpose database workloads that don’t run man y inmemory functions This family has the most options for provisioning increased IOPS • Burstable performance (includes T2 instance types ) – For workloads that require burstable performance capacity • Memory optimized (includes the R3 and R4 instance types ) – Optimized for in memory functions and big data analysis Identifying Opportunities to Right Size The first step in right sizing is to monitor and analyze your current use of services to gain insight into instance performance and usage patterns To gather sufficient data observe performance over at least a two week period (ideally over a onemonth period ) to capture the workload and business peak The most common metrics that define instance performance are vCPU utilization memory utilization network utilization and ephemeral disk use In rare cases where instances are selected for reasons other than these metrics it is important for the technical owner to review the right sizing effort Tools for Right Sizing You can use t he following tools to evaluate costs and monitor and analyze instance usage for right sizing : Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 5 • Amazon CloudWatch – Lets you observe CPU utilization network throughput and disk I/O and match the observed peak metrics to a new and cheaper instance type You can also regularly monitor Amazon EC2 Usage Reports which are updated several times a day and provide in depth usag e data for all your EC2 instances Typically this is feasible only for small environments given the time and effort required • AWS Cost Explorer – This free tool lets you dive de eper into your cost and usage data to identify trends pinpoint cost drivers and detect anomalies It includes Amazon EC2 Usage Reports which let you analyze the cost and usage of your EC2 ins tances over the last 13 months • AWS Trusted Advisor – Lets you inspect your AWS environment to identify idle and underutilized resources and provide s real time insight into service usage to help you improve system performance and reliability increase security and look for opportunities to save money • Third party monitoring tools such as CloudHealth Cloudability 
and CloudCheckr are also an option to automatically identify opportunities and suggest alternate instances These tool s have years of development effort and customer feedback points built into them They also provide additional cost management and optimization functionality Tips for Developing Your Own Right Sizing Tools You can also develop your own tools for monitoring and analyzing performance The following guidelines can help if you are considering this option : • Focus on instances that have run for at least half the time you’re looking at • Focus on instances with lower reserved in stance coverage • Exclude resources that have been switched off (reducing search effort) • Avoid conversions to older generation instances where possible • Apply a savings threshold below which right sizing is not worth considering • Make sure the following conditions are met before you switch to a new instance: Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 6 o The vCPU of the new instance is equal to that of the old instance or the application’s observed vCPU is less than 80 % of the vCPU capacity of the new instance o The memory of th e new instance is equal to that of the old instance or the application’s obser ved memory peak is less than 80% of the memory capacity of the new instance Note: You can capture memory utilization metrics by using monitoring scripts that report these metric s to Amazon CloudWatch For more information see Monitoring Memory and Disk Metrics for Amazon EC2 Linux Instances o The network throughput of the new instance is equal to that of the old instance or the application ’s network peak is less than the network capacity of the new instance Note: Maximum NetworkIn and NetworkOut values are measured in bytes perminute Use the following formula to convert these metrics to megabit s per second: Maximum NetworkIn (or NetworkOut) x 8 (bytes to bits) /1024/1024 / 60 = Number of Mbps o If the ephemeral storage disk I/O is less than 3000 you can use Amazon Elastic Block Store (Amazon EBS) storage If not use instance families that have ephemeral storage For more information see Amazon EBS Volume Types Tips for Right Sizing This section offers tips t o help you right size your EC2 instances and RDS DB instances Right Siz e Using Performance Data Analyze performance data to right size your EC2 instances Identify idle instances and ones that are underutilized Key metrics to look for are CPU usage and m emory usage Identify instances with a maximum CPU usage and memory usage of less than 40 % over a four week period These are the instances that you will want to right size to reduce costs Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 7 For compute optimized instances keep the following in mind: • Focus on very recent instance data (old data may not be actionable) • Focus on instances that have run for at least half the time you’re looking at • Ignore burstable instance families (T2 instance types ) because these families are designed to typically run at lo w CPU percentages for significant periods of time For storage optimized instances (I2 and D2 instance types) where the key feature is high data IOPS focus on IOPS to see whether instances are overprovisioned Keep the following in mind for storage optim ized instances: • Different size instances have different IOPS ratings so tailor your reports to each instance type Start with your most commonly used storage optimized instance type • Peak NetworkIn and NetworkOut 
values are measured in bytes per minute U se the following formula to convert these metrics to megabits per second: Maximum NetworkIn (or NetworkOut) x 8 (bytes to bits) /1024 /1024/ 60 = Number of Mbps • Take note of how I/O and CPU percentage metrics change during the day and whether there are peaks that need to be accommodated Right size against memory if you find that maximum memory utilization over a fourweek period is less than 40 % AWS provides sample scripts for monitoring memory and disk space utilization on your EC2 instances running Linux You can configure the scripts to report the metrics to Amazon CloudWatch When analyzing performance data for Amazon RDS DB instances focus on the following metrics to determine whether actual usage is lower than instance capacity: • Average CPU utilization • Maximum CPU utilization • Minimum available RAM • Average number of bytes read from disk per second Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 8 • Average number of bytes written to disk per second Right Siz e Based on Usage N eeds As you monitor current performance identify the following usage needs and patterns so that you can take advantage of potential right sizing options: • Steady state – The load remains at a relatively constant level over time and you ca n accurately forecast the likely compute load For this usage pattern you might consider Reserved Instances which can provide significant savings • Variable but predictable – The load changes but on a predictable schedule Auto Scaling is well suited for applications that have stable demand patterns with hourly daily or weekly variability in usage You can use this feature to scale Amazon EC2 capacity up or down when you experience spiky traffic or pr edictable fluctuations in traffic • Dev/test/production – Development testing and production environments are typically used only during business hours and can be turned off during evenings weekends and holidays (You’ll need to rely on tagging to identify dev/test/production instances) • Temporary – For temporary workloads that have flexible start times and can be interrupted you can consider placing a bid for an Amazon EC2 Spot Instance instead of using an On Demand Instance Right Size by Turn ing Off Idle Instances The easiest way to reduce operational costs is to turn off instances that are no longer being used If you find instances that have been idle for more than two weeks it’s safe to stop or even terminate them Before terminating an insta nce that’s been idle for two weeks or less consider: • Who owns the instance? • What is the potential impact of terminating the instance? • How hard will it be to re create the instance if you need to restore it? 
Stopping an EC2 instance leaves any attached EBS volumes operational You will continue to be charged for these volumes until you delete them If you need the instance again you can easily turn it back on Terminating an instance Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 9 however automatically deletes attached EBS volumes and requires effort to re provision should the instance be needed again If you decide to delete an EBS volume consider storing a snapshot of the volume so that it can be restored later if needed Another simple way to reduce costs is to stop instances used in development and production during hours when these instances are not in use and then start them again when their capacity is needed Assuming a 50 hour work week you can save 70 % by automatically stopping dev/test/production instances during nonbusiness hours Many to ols are available to automate scheduling including Amazon EC2 Scheduler AWS Lambda and AWS Data Pipeline as well as thirdparty tools s uch as CloudHealth and Skeddly Right Siz e by Selecting the Right Instance Family You can right size an instance by migrating to a different model within the same instance family or by migrating to another instance family When migrating within the same instance family you only need to consider vCPU memory network throughput and ephemeral storage A good general rule for EC2 instances is that if your maximum CPU and memory usage is less than 40 % over a four week period you can safely cut the machine in half For example if you were using a c48xlarge EC2 you could move to a c44xlarge which would save $190 every 10 days When migrating to a different instance family make sure the current instance type and the new instance type are compatible in terms of virtualization type network and platform: • Virtualization type – The instances must have the same Linux AMI virtualization type (PV AMI versus HVM) and platform (EC2 Classic versus EC2 VPC) For more information see Linux AMI Virtualization Types • Network – Some instances are not supported in EC2 Classic and must be launched in a virtual private cloud (VPC) For more information see Instance Types A vailable Only in a VPC Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 10 • Platform – If your current instance type supports 32 bit AMIs make sure to select a new instance type that also supports 32 bit AMIs (not all EC2 instance types do) To check the platform of your instance go to the Instances scree n in the Amazon EC2 console and choose Show/Hide Columns Architecture When you resize an EC2 instance the resized instance usually has the same number of instance store volumes that you specified when you launched the original instance You cannot attac h instance store volumes to an instance after you’ve launched it so if you want to add instance store volumes you will need to migrate to a new instance type that contains the higher number of volumes Right Siz e Your Database Instances You can scale you r database instances by adjusting memory or compute power up or down as performance and capacity requirements change The following are some things to consider when scaling a database instance: • Storage and instance type are decoupled When you scale your database instance up or down your storage size remains the same and is not affected by the change • You can separately modify your Amazon RDS DB instance to increase the allocated storage space or improve the performance by changing the storage type (such a s 
General Purpose SSD to Provisioned IOPS SSD) • Before you scale make sure you have the correct licensing in place for commercial engines (SQL Server Oracle) especially if you Bring Your Own License (BYOL) • Determine when you want to apply the change Y ou have an option to apply it immediately or during the maintenance window specified for the instance Conclusion Right sizing is the most effective way to control cloud costs It involves continually analyzing instance performance and usage needs and patterns — and then turning off idle instances and right sizing instances that are either overprovisioned or poorly matc hed to the workload Because your resource needs are always changing right sizing must become an ongoing process to Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 11 continually achieve cost optimization You can make right sizing a smooth process by establishing a right sizing schedule for each team en forcing tagging for all instances and taking full advantage of the powerful tools that AWS and others provide to simplify resource monitoring and analysis Contributors Contributors to this document include: • Amilcar Alfaro Sr Product Marketing Manager AWS • Erin Carlson Marketing Manager AWS • Keith Jarrett WW BD Lead – Cost Optimization AWS Business Development Document Revisions Date Description January 2020 Minor revisions March 2018 First publication
|
General
|
consultant
|
Best Practices
|
Robust_Random_Cut_Forest_Based_Anomaly_Detection_on_Streams
|
Robust Random Cut Forest Based Anomaly Detection On Streams Sudipto Guha SUDIPTO @CISUPENN EDU University of Pennsylvania Philadelphia PA 19104 Nina Mishra NMISHRA @AMAZON COM Amazon Palo Alto CA 94303Gourav Roy GOURA VR @AMAZON COM Amazon Bangalore India 560055Okke Schrijvers OKKES @CSSTANFORD EDU Stanford University Palo Alto CA 94305 Abstract In this paper we focus on the anomaly detection problem for dynamic data streams through thelens of random cut forests We investigate a robust random cut data structure that can be usedas a sketch or synopsis of the input stream Weprovide a plausible definition of nonparametricanomalies based on the influence of an unseenpoint on the remainder of the data ie the externality imposed by that point We show how thesketch can be efficiently updated in a dynamicdata stream We demonstrate the viability of thealgorithm on publicly available real data 1 Introduction Anomaly detection is one of the cornerstone problems indata mining Even though the problem has been well studied over the last few decades the emerging explosion ofdata from the internet of things and sensors leads us to reconsider the problem In most of these contexts the datais streaming and wellunderstood prior models do not exist Furthermore the input streams need not be append onlythere may be corrections updates and a variety of other dynamic changes Two central questions in this regard are(1) how do we define anomalies? and (2) what data structure do we use to efficiently detect anomalies over dynamicdata streams? In this paper we initiate the formal study ofboth of these questions For (1) we view the problem fromthe perspective of model complexity and say that a point isan anomaly if the complexity of the model increases substantially with the inclusion of the point The labeling of Proceedings of the 33rdInternational Conference on Machine Learning New Y ork NY USA 2016 JMLR: W&CP volume 48 Copyright 2016 by the author(s)a point is data dependent and corresponds to the external ity imposed by the point in explaining the remainder of thedata We extend this notion of externality to handle “outliermasking” that often arises from duplicates and near duplicate records Note that the notion of model complexity hasto be amenable to efficient computation in dynamic datastreams This relates question (1) to question (2) which wediscuss in greater detail next However it is worth notingthat anomaly detection is not well understood even in thesimpler context of static batch processing and (2) remainsrelevant in the batch setting as well For question (2) we explore a randomized approach akin to (Liu et al 2012) due in part to the practical success re ported in (Emmott et al 2013) Randomization is a pow erful tool and known to be valuable in supervised learning (Breiman 2001) But its technical exploration in the context of anomaly detection is not wellunderstood andthe same comment applies to the algorithm put forth in (Liuet al 2012) Moreover that algorithm has several lim itations as described in Section 41 In particular we show that in the presence of irrelevant dimensions crucial anomalies are missed In addition it is unclear howto extend this work to a stream Prior work attempted solutions (T a ne ta l 2011) that extend to streaming however those were not found to be effective (Emmott et al 2013) To address these limitations we put forward a sketch orsynopsis termed robust random cut forest (RRCF) formally defined as follows Definition 1 Arobust random cut tree (RRCT) on point setSis generated as 
follows: 1 Choose a random dimension proportional to /lscripti/summationtext j/lscriptj where/lscripti=m a x x∈Sxi−minx∈Sxi 2 Choose Xi∼Uniform[min x∈Sximaxx∈Sxi] 3 LetS1={x|x∈Sxi≤Xi}andS2=S\S1and recurse on S1andS2This document has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers ArchivedRobust Random Cut Forest Based Anomaly Detection On Streams A robust random cut forest (RRCF) is a collection of inde pendent RRCTs The approach in (Liu et al 2012) differs from the above procedure in Step (1) and chooses the dimension to cut uni formly at random We discuss this algorithm in more detailin Section 41and provide extensive comparison Following question (2) we ask: Does the RRCF data structure contain sufficient information that is independent ofthe specifics of the tree construction algorithm? In this pa per we prove that the RRCF data structure approximatelypreserves distances in the following sense: Theorem 1 Consider the algorithm in Definition 1 Let the weight of a node in a tree be the corresponding sum of dimensions/summationtext i/lscripti Given two points uv∈S define the tree distance between uandvto be the weight of the least common ancestor of uv Then the tree distance is always at least the Manhattan distance L1(uv) and in expectation at most O/parenleftBig dlog|S| L1(uv )/parenrightBig timesL1(uv) Theorem 1provides a low stretch distance preserving em bedding reminiscent of the JohnsonLindenstrauss Lemma(Johnson & Lindenstrauss 1984) using random projections forL 2()distances (which has much better dependence on d) The theorem is interesting because it implies that ifa point is far from others (as is the case with anomalies)that it will continue to be at least as far in a random cuttree in expectation The proof of Theorem 1follows along the same lines of the proof of approximating finite metric spaces by a collection of trees (Charikar et al 1998) Most of the proofs appear in the supplementary material The theorem shows that if there is a lot of empty space around a point ie γ=m i n vL1(uv)is large then we will isolate the point within O(dlog|S|/γ)levels from the root Moreover since for any p≥1 thepnormed dis tance satisfies d1−1/pLp(uv)≥L1(uv)≥Lp(uv)and therefore the early isolation applies to all large Lp()dis tances simultaneously This provides us a pointer towards the success of the original isolation forest algorithm in lowto moderate dimensional data because dis small and the probability of choosing a dimension is not as important if they are small in number Thus the RRCF ensemble contains sufficient information that allows us to determine dis tance based anomalies without focusing on the specificsof the distance function Moreover the distance scales are adjusted appropriately based on the empty spaces betweenthe points since the two bounding boxes may shrink afterthe cut Suppose that we are interested in the sample maintenance problem of producing a tree at random (with the correct probability) from T(S−{x}) or fromT(S∪{x})I n this paper we prove that we can efficiently insert and deletepoints into a random cut tree Theorem 2 (Section 3)Given a tree Tdrawn accordingtoT(S); if we delete the node containing the isolated point xand its parent (adjusting the grandparent accordingly see Figure 2) then the resulting tree T /primehas the same proba bility as if being drawn from T(S−{x}) Likewise we can produce a tree T/prime/primeas if drawn at random from T(S∪{x}) is time which is O(d)times the maximum depth 
of T which is typically sublinear in |T| Theorem 2demonstrates an intuitively natural behavior when points are deleted — as shown in the schematic in Figure 1 In effect if we insert x perform a few more op erations and then delete x then not only do we preserve distributions but the trees remain very close to each other — as if the insertion never happened This behavior is a classic desiderata of sketching algorithms xa b c (a) Before: Ta bc (b) After: T/prime Figure 1 Decremental maintenance of trees The natural behavior of deletions is not true if we do not choose the dimensions as in Step (1) of RRCF construction For example if we choose the dimensions uniformlyat random as in (Liu et al 2012) suppose we build a tree for(10)(/epsilon1/epsilon1)(01)where1/greatermuch/epsilon1>0and then delete (10) The probability of getting a tree over the two re maining points that uses a vertical separator is 3/4−/epsilon1/2 and not1/2 as desired The probability of getting that tree in the RRCF process (after applying Theorem 2)i s1−/epsilon1 as desired This natural behavior under deletions is also nottrue of most space partitioning methods –such as quadtrees(Finkel & Bentley 1974) kdtrees (Bentley 1975) and R trees (Guttman 1984) The dynamic maintenance of a dis tribution over trees in a streaming setting is a novel contri bution to the best of our knowledge and as a consequence we can efficiently maintain a tree over a sample of a stream: Theorem 3 We can maintain a random tree over a sample Seven as the sample Sis updated dynamically for stream ing data using sublinear update time and O(d|S|)space We can now use reservoir sampling (Vitter 1985) to main tain a uniform random sample of size |S|or a recency biased weighted random sample of size |S|(Efraimidis & Spirakis 2006) in space proportional to |S|on the fly In effect the random sampling process is now orthogo nal from the robust random cut forest construction For example to produce a sample of size ρ|S|forρ<1 in an uniform random sampling we can perform straight forward rejection sampling; in the recency biased sample ArchivedRobust Random Cut Forest Based Anomaly Detection On Streams in (Efraimidis & Spirakis 2006) we need to delete the (1−ρ)|S|lowest priority points This notion of downsam pling via deletions is supported perfectly by Theorem 2– even for downsampling rates that are determined after the trees have been constructed during postprocessing Thus Theorem 4 Given a tree T(S)for sample S if there exists a procedure that downsamples via deletion then we have an algorithm that simultaneously provides us a downsampled tree for every downsampling rate Theorems 3and 4taken together separate the notion of sampling from the analysis task and therefore eliminates the need to fine tune the sample size as an initial parameterMoreover the dynamic maintenance of trees in Theorem 3 provides a mechanism to answer counterfactual questionsas given in Theorem 5 Theorem 5 Given a tree T(S)for sample S and a point pwe can efficiently compute a random tree in T(S∪{p}) and therefore answer questions such as: what would have been the expected depth had pbeen included in the sample? 
The ability to answer these counterfactual questions arecritical to determining anomalies Intuitively we label a pointpas an anomaly when the joint distribution of in cluding the point is significantly different from the distri bution that excludes it Theorem 5allows us to efficiently (pretend) sketch the joint distribution including the point p However instead of measuring the effect of the sampled data points on pto determine its label (as is measured by notions such as expected depth) it stands to reason that we should measure the effect of pon the sampled points This leads us to the definition of anomalies used in this paper 2 Defining Anomalies Consider the hypotheses: (a) An anomaly is often easy to describe – consider Waldo wearing a red fedora in a sea of dark felt hats While it may be difficult for us to find Waldo in a crowd ifwe could forget the faces and see the color (as is the case when Waldo is revealed by someone else) thenthe recognition of the anomaly is fairly simple (b) An anomaly makes it harder to describe the remainder of the data – if Waldo were not wearing the red fedora we may not have admitted the possibility that hats canbe colored In essence an anomaly displaces our at tention from the normal observation to this new one The fundamental task is therefore to quantify the shift in attention Suppose that we assign left branches the bit 0 and right branches the bit 1in a tree in a random cut forest Now consider the bits that specify a point (excluding thebits that are required to store the attribute values of the point itself) This defines the complexity of a random model M T which in our case corresponds to a tree Tthat fits the initialdata Therefore the number of bits required to express a point corresponds to its depth in the tree Given a set of points Zand a point y∈Zletf(yZT)be the depth of yin treeT Consider now the tree produced by deleting xas in Theorem 2asT(Z−{x}) Note that givenTandxthe treeT(Z−{x}) is uniquely1determined Let the depth of yinT(Z−{x}) bef(yZ−{x}T)(we drop the qualification of the tree in this notation since it is uniquely defined) xa b c10 10q0q r (a) Tree T(Z)a bc10q0q r (b) Tree T(Z−{x}) Figure 2 A correspondence of trees Consider now a point yin the subtree cin Figure 2a Its bit representation in Twould be q0q r00 The model complexity denoted as |M(T)|the number of bits required to write down the description of all points yin treeTtherefore will be |M(T)|=/summationtext y∈Zf(yZT)I f we were to remove xthen the new model complexity is |M(T/prime)|=/summationdisplay y∈Z−{x}f(yZ−{x}T/prime) whereT/prime=T(Z−{x}) is a tree over Z−{x}N o w consider the expected change in model complexity under a random model However since we have a many to onemapping from T(Z)toT(Z−{x}) as a consequence of Theorem 2 we can express the second sum over T(Z)in stead ofT /prime=T(Z−{x}) and we get ET(Z)[|M(T)|]−ET(Z−{x})[|M(T(Z−{x})|] =/summationdisplay T/summationdisplay y∈Z−{x}Pr[T]/parenleftbigg f(yZT)−f(yZ−{x}T/prime)/parenrightbigg +/summationdisplay TPr[T]f(xZT) (1) Definition 2 Define the bitdisplacement or displacement of a point xto be the increase in the model complexity of all other points ie for a set Z to capture the externality introduced by x define where T/prime=T(Z−{x}) DISP(xZ)=/summationdisplay Ty∈Z−{x}Pr[T]/parenleftbigg f(yZT)−f(yZ−{x}T/prime)/parenrightbigg 1The converse is not true this is a manytoone mapping ArchivedRobust Random Cut Forest Based Anomaly Detection On Streams Note the total change in model complexity is D ISP(xZ)+ 
g(xZ)whereg(xZ)=/summationtext TPr[T]f(xZT)is the ex pected depth of the point xin a random model Instead of postulating that anomalies correspond to large g() we fo cus on larger values of D ISP() The name displacement is clearer based on this lemma: Lemma 1 The expected displacement caused by a point x is the expected number of points in the sibling node of the leaf node containing x when the partitioning is done ac cording to the algorithm in Definition 1 Shortcomings While Definition 2points towards a pos sible definition of an anomaly the definition as stated arenot robust to duplicates or nearduplicates Consider onedense cluster and a point pfar from away from the cluster The displacement of pwill be large But if there is a point q very close to p thenq’s displacement in the presence of pis small This phenomenon is known as outlier masking Duplicates and near duplicates are natural and therefore the semantics of any anomaly detection algorithm has to ac commodate them Duplicate Resilience Consider the notion that Waldo has a few friends who help him hide – these friends are colluders; and if we were to get rid of all the colluders then the description changes significantly Specifically in stead of just removing the point xwe remove a set Cwith x∈C Analogous to Equation (1) E T(Z)[|M(T)|]−ET(Z−C)[|M(T(Z−C)|] =DISP(CZ)+/summationdisplay T/summationdisplay y∈CPr[T]f(yZT)(2) where D ISP(CZ)is the notion of displacement extended to subsets denoted as where T/prime/prime=T(Z−C) /summationdisplay Ty∈Z−CPr[T]/parenleftbigg f(yZT)−f(yZ−CT/prime/prime)/parenrightbigg (3) Absent of any domain knowledge it appears that the dis placement should be attributed equally to all the points inC Therefore a natural choice of determining Cseems to bemax D ISP(CZ)/|C|subject to x∈C⊆Z However two problems arise First there are too many subsets C and second in a streaming setting it is likely we would be using a sample S⊂Z Therefore the supposedly natural choice does not extend to samples To avoid both issues we al low the choice of Cto be different for different samples S; in effect we are allowing Waldo to collude with differentmembers in different tests! 
This motivates the following: Definition 3 The Collusive Displacement ofxdenoted by C ODISP(xZ|S|)of a point xis defined asE S⊆ZT⎡ ⎣max x∈C⊆S1 |C|/summationdisplay y∈S−C/parenleftbigg f(yST)−f(yS−CT/prime/prime)/parenrightbigg⎤ ⎦ Lemma 2 CODISP(xZ|S|)can be estimated efficiently While C ODISP(xZ|S|)is dependent on |S| the depen dence is not severe We envision using the largest sample size which is permitted under the resource constraints Wearrive at the central characterization we use in this paper: Definition 4 Outliers correspond to large C ODISP() 3 Forest Maintenance on a Stream In this section we discuss how Robust Random Cut Trees can be dynamically maintained In the following letRRCF(S)be a the distribution over trees by running Def inition 1onS Consider the following operations: Insertion: GivenTdrawn from distribution RRCF(S) andp/negationslash∈Sproduce a T /primedrawn from RRCF(S∪{p}) Deletion: GivenTdrawn from distribution RRCF(S)and p∈Sproduce a T/primedrawn from RRCF(S−{p}) We need the following simple observation Observation 1 Separating a point set Sandpusing an axisparallel cut is possible if and only if it is possible to separate the minimal axisaligned bounding box B(S)and pusing an axisparallel cut The next lemma provides a structural property about RRCFtrees We are interest in incremental updates with as fewchanges as possible to a set of trees Note that given a spe cific tree we have two exhaustive cases that (i) the new point which is to be deleted (respectively inserted) is notseparated by the first cut and (ii) the new point is deleted (respective inserted) is separated by the first cut Lemma 3 addresses these for collections of trees (not just a single tree) that satisfy (i) and (ii) respectively Lemma 3 Given point pand set of points Swith an axis parallel minimal bounding box B(S)such that p/negationslash∈B: (i) F or any dimension i the probability of choosing an axis parallel cut in a dimension ithat splits Susing the weighted isolation forest algorithm is exactly the same as the conditional probability of choosing an axis parallel cut that splits S∪{p}in dimension i conditioned on not isolating pfrom all points of S (ii) Given a random tree of RRCF(S∪{p}) condi tioned on the fact the first cut isolates pfrom all points ofS the remainder of the tree is a random tree in RRCF(S) 31 Deletion of Points We begin with Algorithm 1which is deceptively simple ArchivedRobust Random Cut Forest Based Anomaly Detection On Streams Algorithm 1 Algorithm ForgetPoint 1:Find the node vin the tree where pis isolated in T 2:Letube the sibling of v Delete the parent of v(and of u) and replace that parent with u(ie we short circuit the path from uto the root) 3:Update all bounding boxes starting from u’s (new) par ent upwards – this state is not necessary for deletions but is useful for insertions 4:Return the modified tree T/prime Lemma 4 IfT were drawn from the distribution RRCF(S)then Algorithm 1produces a tree T/primewhich is drawn at random from the probability distribution RRCF(S−{p}) Lemma 5 The deletion operation can be performed in timeO(d)times the depth of point p Observe that if we delete a random point from the tree then the running time of the deletion operation is O(d)times the expected depth of any point Likewise if we delete pointswhose depth is shallower than most points in the tree thenwe can improve the running time of Lemma 5 32 Insertion of Points Given a tree TfromRRCF(S)we produce a tree T /primefrom the distribution RRCF(S∪{p}) The algorithm is pro vided in 
Algorithm 2 Once again we will couple the deci sions that is mirror the same split in T/primeas inT as long as pis not outside a bounding box in T Up to this point we are performing the same steps as in the construction of the forest on S∪{p} with the same probability Lemma 6 IfT were drawn from the distribution RRCF(S)then Algorithm 1produces a tree T/primewhich is drawn at random from the probability distribution RRCF(S∪{p}) 4 Isolation Forest and Other Related Work 41 The Isolation Forest Algorithm Recall that the isolation forest algorithm uses an ensem ble of trees similar to those constructed in Definition 1 with the modification that the dimension to cut is chosenuniformly at random Given a new point p that algorithm follows the cuts and compute the average depth of the point across a collection of trees The point is labeled an anomalyif the score exceeds a threshold; which corresponds to average depth being small compared to log|S|whereSis suitably sized sample of the data The advantage of the isolation forest is that different di mensions are treated independently and the algorithm is invariant to scaling different dimensions differently However consider the following exampleAlgorithm 2 Algorithm InsertPoint 1:We have a set of points S/primeand a tree T(S/prime) We want to insertpand produce tree T/prime(S/prime∪{p} 2:IfS/prime=∅then we return a node containing the single nodep 3:Otherwise S/primehas a bounding box B(S/prime)=[x/lscript 1xh1]× [x/lscript2xh2]×···[x/lscript dxhd] Letx/lscript i≤xhifor alli 4:For alliletˆx/lscripti=m i n{pix/lscripti}andˆxhi=m a x{xhipi} 5:Choose a random number r∈[0/summationtext i(ˆxhi−ˆx/lscripti)] 6:Thisrcorresponds to a specific choice of a cut in the construction of RRCF(S/prime∪{p}) For instance we can computeargmin{j|/summationtextj i=1(ˆxh i−ˆx/lscripti)≥r}and the cut corresponds to choosing ˆx/lscript j+/summationtextj i=1(ˆxh i−ˆx/lscript i)−rin dimension j 7:If this cut separates S/primeandp(ie is not in the interval [x/lscriptjxhj]) then and we can use this as the first cut for T/prime(S/prime∪{p}) We create a node – one side of the cut is pand the other side of the node is the tree T(S/prime) 8:If this cut does not separate S/primeandpthen we throw away the cut! 
We choose the exact same dimension as T(S/prime)inT/prime(S/prime∪{p}) and the exact same value of the cut chosen by T(S/prime)and perform the split The point p goes to one of the sides say with subset S/prime/prime We repeat this procedure with a smaller bounding box B(S/prime/prime)of S/prime/prime For the other side we use the same subtree as in T(S/prime) 9:In either case we update the bounding box of T/prime Example 1 ( IRRELEV ANT DIMENSIONS )Suppose we have two clusters of 1000 points each corresponding to x1=±5in the first dimension and xi=0 in all remain ing dimensions i In all coordinates (including x1)w e add a random Gaussian noise with mean 0and standard deviation 001simulating white noise Now consider 10 points with x1=0 and the same behavior in all the other coordinates When d=2 the small cluster of points in the center is easily separated by the isolation forest algorithmwhich treats the dimensions independently When d=3 0 the vast majority of cuts are in irrelevant dimensions andthe algorithm fails (when run on entire data) as shown inFigure 1afor a single trial over 100 trees F or 10trials (for the same data set) the algorithm determined that 430 27014722048244193158 250 and103 points had the same of higher anomaly score than the point with the highest anomaly score among the 10points (the identity of this point varied across the trials) In essence the algorithm either produces too many false alarms or does not have good recall Note that AUC isnot a relevant measure here since the class sizes betweenanomalous and nonanomalous are skewed 1 : 200 The results were consistent across multiple data sets generatedaccording to the example Figure 3bshows a correspond ing single trial using C ODISP() The C ODISP()measure places the 10points in the largest 20values most of the ArchivedRobust Random Cut Forest Based Anomaly Detection On Streams time Example 1 shows that scale independence therefore can be negative feature if distance is a meaningful conceptin the dataset However in many tasks that depend on detecting anomalies the relevance of different dimensions isoften unknown The question of determining the appropriate scale of measurement often has far reaching consequences in data analysis 6 4 2 0 2 4 601 0 0101 0 01 01 02 03 (a) Performance of Isolation Forest (Liu et al 2012) Note that the score never exceeds 03whereas a score of 05corresponds to an outlier Note also that the two clusters are not distinguishable from the 10points near origin outliers in depth values (color) 642 0 2 4 601 0 0101 0 01 0 50 100 150 200 (b) Performance of C ODISP(x Z |Z|) Observe that the clus ters and outliers are separated; some of the extremal points in the clusters have the same (collusive) displacement as the 10points near the origin which is expected Figure 3 The result of running isolation forest and C ODISP()on the input in Example 1ford=3 0 A modified version of the above example also is helpful in arguing why depth of a point is a not always helpful in char acterizing anomalies even in low dimensions Consider Example 2 ( HELD OUTDATA )Consider the same dataset as in Example 1ind=2 dimensions Suppose that we have only sampled 100 points and all the samples correspond to x1=±5 Suppose we now want to evaluate: is the point (00)an anomaly? Based on the samples the natural answer is yes The scoring mechanism of isolation forest algorithm fails because once the two clusters are separated this new point (00)behaves as a point in one of the two other clusters! 
The situation however changes completely if we include (00)to build the trees The example explains why the isolation forest algorithm is sensitive to sample size However most anomalies are not usually seen in samples – anomaly detection algorithmsshould be measured on held out data Note that Theorem 5 can efficiently solve the issue raised in Example 2 by an swering the contrafactual question of what is the expectedheight has we observed (00)in the sample (without re building the trees) However expected depth seems to gen erate more false alarms as we investigate this issue further in the supplementary material42 Other Related Work The problem of (unsupervised) outlier detection has a rich literature We survey some of the work here; for an extensive survey see (Aggarwal 2013; Chandola et al 2009) and references therein We discuss some of techniqueswhich are unrelated to the concepts already discussed Perhaps the most obvious definition of an anomaly is density based outlier detection which posits that a low probability events are likely anomalous This has led to different approaches based on estimating the density of datasets For points in R nKnorr & Ng (1997; 1998; 1999); Knorr et al (2000) estimate the density by looking at the number of points that are within a ball of radius do fag i v e n data point The lower this number the more anomalous thedata point is This approach may break down when different parts of the domain have different scales To remedythis there a methods (Breunig et al 1999; 2000) that look at the density around a data point compared to its neighborhood A variation of the previous approach is to consider a fixed knumber of nearest neighbors and base the anomaly score on this (Eskin et al 2002; Zhang & Wang 2006) Here the anomaly score is monotonically increasing in the distances to the knearestneighbors Taking the idea of density one step further some authors have looked at finding structure in the data through clustering The intuition here is that for points that cannot easily be assigned toa cluster there is no good explanation for their existenceThere are several clustering algorithms that work well tocluster part of the data such as DBSCAN (Ester et al1996) and STREAM (Guha et al 2003) Additionally FindOut (Y u et al 2002) removes points it cannot clus ter and then recurses Finally the notion of sketching used in this paper is orthogonal to the notion used in ( Huang & Kasiviswanathan 2015) which uses streaming low rank approximation of the data 5 Experiments In the experiments we focus on datasets where anomalies are visual verifiable and interpretable We begin with asynthetic dataset that captures the classic diurnal rhythm ofhuman activity We then move to a real dataset reflecting taxi ridership in New Y ork City In both cases we comparethe performance of RRCF with IF A technique that turns out to be useful for detecting anoma lies in streams is shingling If a shingle of size 4 is passedover a stream the first 4 values of the stream received at timet 1t2t3t4are treated as a 4dimensional point Then at time t5 the values at time t2t3t4t5are treated as as the next fourdimensional point The window slidesover one unit at each time step A shingle encapsulates atypical shape of a curve – a departure from a typical shapecould be an anomaly ArchivedRobust Random Cut Forest Based Anomaly Detection On Streams 51 Synthetic Data Many real datasets implicitly reflect human circadian rhythms For example an eCommerce site may monitorthe number of orders it receives per hour 
Search enginesmay monitor search queries or ad clicks per minute Content delivery networks may monitor requests per minute Inthese cases there is a natural tendency to expect higher values during the day and lower values at night An anomalymay reflect an unexpected dip or spike in activity In order to test our algorithm we synthetically generated a sine wave where a dip is artificially injected around times tamp 500 that lasts for 20 time units The goal is to deter mine if our anomaly detection algorithm can spot the be ginning and end of the injected anomaly The experimentswere run with a shingle of length four and one hundredtrees in the forest where each tree is constructed with auniform random reservoir sample of 256 points We treatthe dataset as a stream scoring a new point at time t+1 with the data structure built up until time t ! !! " "! # #! $ $! % %! " $ " # ! ""#%% ! $ !# # $ %" % ! $ " # $# " % ! "! #$ % # !" "% $ %! ! $! ! ! # !" !# !$"!%%" " ! " $"! "" "## "% # # "# % (a) The bottom red curve reflects the anomaly score produced by IF Note that the start of the anomaly is missed (b) The bottom red curve represents the anomaly score produced by RRCF Both the beginning and end of the anomaly are caught Figure 4 The top blue curve represents a sine wave with an artifi cially injected anomaly The bottom red curve shows the anomaly score over time In Figure 4a we show the result of running IF on the sine wave For anomalies detecting the onset is critical – and even more important than detecting the end of an anomalyNote that IF misses the start of the anomaly at time 500 The end of the anomaly is detected however by then thesystem has come back to its normal state – it is not useful to fire an alarm once the anomaly has ended Next considerFigure 4bwhich shows the result of running RRCF on the same sine wave Observe that the two highest scoring moments in the stream are the end and the beginning of theanomaly The anomaly is successfully detected by RRCFWhile the result of only a single run is shown the experiment was repeated many times and the picture shown inFigure 4is consistent across all runs 52 Real Life Data: NYC Taxicabs Next we conduct a streaming experiment using taxi rid ership data from the NYC Taxi Commission 2 We con sider a stream of the total number of passengers aggregated over a 30 minute time window Data is collected over a 7month time period from 7/14 – 1/15 Note while this is a1dimensional datasets we treat it as a 48dimensional dataset where each point in the stream is represented by a sliding window or shingle of the last day of data ignoring thefirst day of data The intuition is that the last day of activitycaptures a typical shape of passenger ridership The following dates were manually labeled as anomalies based on knowledge of holidays and events in NYC (Lavin & Ahmad 2015): Independence Day (7/4/147/6/14) Labor Day (9/1/14) Labor Day Parade (9/6/14) NYC Marathon (11/02/14) Thanksgiving (11/27/14) Christmas(12/25/14) New Years Day (1/1/15) North American Blizzard (1/26/151/27/15) For simplicity we label a 30minute window an anomaly if it overlaps one of these days Stream We treat the data as a stream – after observing points1i our goal is to score the (i+1) st point The score that we produce for (i+1) is based only on the pre vious data points 1i but not their labels We use IF as the baseline While a streaming version was subsequently published (Tan et al 2011) since it was not found to im prove over IF ( Emmott et al 2013) we consider a more 
straightforward adaptation Since each tree in the forest iscreated based on a random sample of data we simply buildeach tree based on a random sample of the stream eg uniform or timedecayed as previously referenced Our aimhere is to compare to the baseline with respect to accuracynot running time Each tree can be updated in an embarrassingly parallel manner for a faster implementation Metrics To quantitatively evaluate our approach we re port on a number of precision/recallrelated metrics We learn a threshold for a good score on a training set and re port the effectiveness on a held out test set The training setcontains all points before time tand the test set all points after time t The threshold is chosen to optimize the F1 measure (harmonic mean of precision and recall) We focus our attention on positive precision and positive recall toavoid “boy who cried wolf” effects (Tsien & Fackler 1997; Lawless 1994) 2http://wwwnycgov/html/tlc/html/about/trip record datashtml ArchivedRobust Random Cut Forest Based Anomaly Detection On Streams T able 1 Comparison of Baseline Isolation Forest to proposed Robust Random Cut Forest Method Sample Positive Positive Negative Negative Accuracy AUC Size Precision Recall Precision Recall IF 256 042 (005) 037 (002) 096 (000) 097 (001) 093 (001) 083 (001) RRCF 256 087 (002) 044 (004) 097 (000) 100 (000) 096 (000) 086 (000) IF 512 048 (005) 037 (001) 097 (001) 096 (000) 094 (000) 086 (000) RRCF 512 084 (004) 050 (003) 099 (000) 097 (000) 096 (000) 089 (000) IF 1024 051 (003) 037 (001) 096 (000) 098 (000) 094 (000) 087 (000) RRCF 1024 077 (003) 057 (002) 097 (000) 099 (000) 096 (000) 090 (000) Method Segment Segment Time to Time to Prec@5 Prec@10 Prec@15 Prec@20 Precision Recall Detect Onset Detect End IF 040 (009) 080 (009) 2268 (305) 2330 (154) 052 (010) 050 (000) 034 (002) 028 (003) RRCF 065 (014) 080 (000) 1353 (205) 1085 (389) 058 (006) 049 (003) 039 (002) 030 (000) T able 2 SegmentLevel Metrics and Precision@K For the finer granularity data in the taxi cab data set we view the ground truth as segments of time when the data is in an anomalous state Our goal is to quickly and reliablyidentify these segments We say that a segment is identified in the test set if the algorithm produces a score over the learned threshold anytime during the segment (including the sliding window if applicable) Results In the experiments there were 200 trees in the forest each computed based on a random sample of 1K points Note that varying the sample size does not alter thenature of our conclusions Since ridership today is likely similar to ridership tomorrow we set our timedecayedsampling parameter to the last two months of ridership Allresults are averaged over multiple runs (10) Standard deviation is also reported Figure 5shows the result of the anomaly scores returned by C ODISP() ! " # $ % & " " " " ! !" ! & # ! & & ! & ! & ! $ ! & $ $ ! & & ! ! " ! $ $ ! % ! ! " ! % ! % ! % ! " ! % ! ! ! % ! " & ! $ ! ! ! ! ! " & ! % & ! ! ! ! # " ! % & ! ! ! ! # " ! & " ! ! ! ! $ ! & " ! # ! ! ! $ ! " # " ! # " # " & " " ! 
# " $ $ " & " " " " $ $ " $ Figure 5 NYC taxi data and C ODISP() Note that Thanksgiving is not captured In a more detailed evaluation the first set of results (Ta ble1) show that the proposed RRCF method is more accu rate than the baseline Particularly noteworthy is RRCF’s higher positive precision which implies a lower false alarmrate In Table 2 we show the segmentbased results Whereas Table 1may give more credit for catching a long anomaly over a short one the segment metric weighs eachalarm equally The proposed RRF method not only catchesmore alarms but also catches them more quickly The unitsare measured in 30 minute increments – so 11 hours on av erage to catch an alarm on the baseline and 7 hours for theRRCF method These actual numbers are not as important here since anomaly start/end times are labeled somewhat loosely The difference in time to catch does matter Preci sion@K is also reported in Table 2 Discussion: Shingle size if used matters in the sense that shingles that are too small may catch naturally vary ing noise in the signal and trigger false alarms On theother hand shingles that are too large may increase thetime it takes to find an alarm or miss the alarm altogetherTime decay requires knowledge of the domain Sample sizechoice had less effect – with varying sample sizes of 256512 and 1K the conclusions are unchanged on this dataset 6 Conclusions and Future Work We introduced the robust random cut forest sketch andproved that it approximately preserves pairwise distancesIf the data is recorded in the correct scale distance iscrucially important to preserve for computations and notjust anomaly detection We adopted a modelbased def inition of an anomaly that captures the differential effectof adding/removing a point on the size of the sketch Ex periments suggest that the algorithm holds great promisefor fighting alarm fatigue as well as catching more missedalarms We believe that the random cut forest sketch is more bene ficial than what we have established For example it may also be helpful for clustering since pairwise distances areapproximately preserved In addition it may help detectchangepoints in a stream A changepoint is a moment in timetwhere before time tthe data is drawn from a distri butionD 1and after time tthe data is drawn from a distri butionD2 andD1is sufficiently different from D2(Kifer et al 2004; Dasu et al 2006) By maintaining a sequence of sketches over time one may be able to compare two sketches to determine if the distribution has changed ArchivedRobust Random Cut Forest Based Anomaly Detection On Streams Acknowledgments We thank Roger Barga Charles Elkan and Rajeev Rastogi for many insightful discussions We also thank Dan BlickPraveen Gattu Gaurav Ghare and Ryan Nienhuis for theirhelp and support References Aggarwal Charu C Outlier Analysis Springer New Y ork 2013 Bentley Jon Louis Multidimensional binary search trees used for associative searching Commun ACM 18(9): 509–517 September 1975 ISSN 00010782 Breiman Leo Random forests Machine Learning pp 5–32 2001 Breunig Markus M Kriegel HansPeter Ng Raymond T and Sander J ¨org Opticsof: Identifying local outliers InPKDD pp 262–270 1999 Breunig Markus M Kriegel HansPeter Ng Raymond T and Sander J ¨org Lof: identifying densitybased local outliers In ACM sigmod record volume 29 pp 93–104 2000 Chandola V arun Banerjee Arindam and Kumar Vipin Anomaly detection: A survey ACM computing surveys (CSUR) 41(3):15 2009 Charikar Moses Chekuri Chandra Goel Ashish Guha Sudipto and Plotkin Serge Approximating a finite 
Table 1. Comparison of the baseline Isolation Forest (IF) to the proposed Robust Random Cut Forest (RRCF). Standard deviations are shown in parentheses.

Method  Sample Size   Positive Precision   Positive Recall   Negative Precision   Negative Recall   Accuracy      AUC
IF      256           0.42 (0.05)          0.37 (0.02)       0.96 (0.00)          0.97 (0.01)       0.93 (0.01)   0.83 (0.01)
RRCF    256           0.87 (0.02)          0.44 (0.04)       0.97 (0.00)          1.00 (0.00)       0.96 (0.00)   0.86 (0.00)
IF      512           0.48 (0.05)          0.37 (0.01)       0.97 (0.01)          0.96 (0.00)       0.94 (0.00)   0.86 (0.00)
RRCF    512           0.84 (0.04)          0.50 (0.03)       0.99 (0.00)          0.97 (0.00)       0.96 (0.00)   0.89 (0.00)
IF      1024          0.51 (0.03)          0.37 (0.01)       0.96 (0.00)          0.98 (0.00)       0.94 (0.00)   0.87 (0.00)
RRCF    1024          0.77 (0.03)          0.57 (0.02)       0.97 (0.00)          0.99 (0.00)       0.96 (0.00)   0.90 (0.00)

Table 2. Segment-level metrics and Precision@K. Time to detect is measured in 30-minute increments; standard deviations are shown in parentheses.

Method  Segment Precision   Segment Recall   Time to Detect Onset   Time to Detect End   Prec@5        Prec@10       Prec@15       Prec@20
IF      0.40 (0.09)         0.80 (0.09)      22.68 (3.05)           23.30 (1.54)         0.52 (0.10)   0.50 (0.00)   0.34 (0.02)   0.28 (0.03)
RRCF    0.65 (0.14)         0.80 (0.00)      13.53 (2.05)           10.85 (3.89)         0.58 (0.06)   0.49 (0.03)   0.39 (0.02)   0.30 (0.00)

For the finer-granularity data in the taxi cab data set, we view the ground truth as segments of time when the data is in an anomalous state. Our goal is to quickly and reliably identify these segments. We say that a segment is identified in the test set if the algorithm produces a score over the learned threshold at any time during the segment (including the sliding window, if applicable).

Results. In the experiments there were 200 trees in the forest, each computed based on a random sample of 1K points. Note that varying the sample size does not alter the nature of our conclusions. Since ridership today is likely similar to ridership tomorrow, we set our time-decayed sampling parameter to the last two months of ridership. All results are averaged over multiple runs (10). Standard deviation is also reported. Figure 5 shows the anomaly scores returned by CoDisp().

Figure 5. NYC taxi data and CoDisp(). Note that Thanksgiving is not captured.

In a more detailed evaluation, the first set of results (Table 1) shows that the proposed RRCF method is more accurate than the baseline. Particularly noteworthy is RRCF's higher positive precision, which implies a lower false alarm rate. In Table 2 we show the segment-based results. Whereas Table 1 may give more credit for catching a long anomaly over a short one, the segment metric weighs each alarm equally. The proposed RRCF method not only catches more alarms but also catches them more quickly. The units are measured in 30-minute increments – so 11 hours on average to catch an alarm for the baseline and 7 hours for the RRCF method. These actual numbers are not as important here, since anomaly start/end times are labeled somewhat loosely. The difference in time to catch does matter. Precision@K is also reported in Table 2.

Discussion: Shingle size, if used, matters in the sense that shingles that are too small may catch naturally varying noise in the signal and trigger false alarms. On the other hand, shingles that are too large may increase the time it takes to find an alarm, or miss the alarm altogether. Time decay requires knowledge of the domain. Sample size choice had less effect – with varying sample sizes of 256, 512, and 1K, the conclusions are unchanged on this data set.
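The shingles discussed above are short sliding windows of consecutive readings treated as single multi-dimensional points, so the trade-off is between windows long enough to smooth noise and short enough to react quickly. A minimal sketch of the shingling step is shown below, assuming a one-dimensional stream; the `forest.codisp` call in the usage comment is a stand-in for whatever scoring API is available, not a specific library function, and the shingle size of 48 is only an example value.

```python
from typing import Iterable, Iterator, List

def shingles(stream: Iterable[float], size: int) -> Iterator[List[float]]:
    """Turn a 1-D stream into overlapping windows ("shingles") of `size` consecutive values.

    Each shingle becomes one multi-dimensional point for the forest, so a new score can
    be produced for every reading once the first `size` readings have arrived.
    """
    window: List[float] = []
    for value in stream:
        window.append(value)
        if len(window) > size:
            window.pop(0)              # slide the window forward by one reading
        if len(window) == size:
            yield list(window)

# Example usage (hypothetical loader and scorer):
# readings = load_taxi_counts()        # 30-minute ridership counts
# for point in shingles(readings, size=48):
#     score = forest.codisp(point)     # stand-in for any anomaly-scoring call
#     ...
```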
6. Conclusions and Future Work

We introduced the robust random cut forest sketch and proved that it approximately preserves pairwise distances. If the data is recorded in the correct scale, distance is crucially important to preserve for computations, and not just anomaly detection. We adopted a model-based definition of an anomaly that captures the differential effect of adding/removing a point on the size of the sketch. Experiments suggest that the algorithm holds great promise for fighting alarm fatigue as well as catching more missed alarms.

We believe that the random cut forest sketch is more beneficial than what we have established. For example, it may also be helpful for clustering, since pairwise distances are approximately preserved. In addition, it may help detect change-points in a stream. A change-point is a moment in time t where before time t the data is drawn from a distribution D1 and after time t the data is drawn from a distribution D2, and D1 is sufficiently different from D2 (Kifer et al., 2004; Dasu et al., 2006). By maintaining a sequence of sketches over time, one may be able to compare two sketches to determine if the distribution has changed.

Acknowledgments

We thank Roger Barga, Charles Elkan, and Rajeev Rastogi for many insightful discussions. We also thank Dan Blick, Praveen Gattu, Gaurav Ghare, and Ryan Nienhuis for their help and support.

References

Aggarwal, Charu C. Outlier Analysis. Springer, New York, 2013.
Bentley, Jon Louis. Multidimensional binary search trees used for associative searching. Commun. ACM, 18(9):509–517, September 1975. ISSN 0001-0782.
Breiman, Leo. Random forests. Machine Learning, pp. 5–32, 2001.
Breunig, Markus M., Kriegel, Hans-Peter, Ng, Raymond T., and Sander, Jörg. OPTICS-OF: Identifying local outliers. In PKDD, pp. 262–270, 1999.
Breunig, Markus M., Kriegel, Hans-Peter, Ng, Raymond T., and Sander, Jörg. LOF: identifying density-based local outliers. In ACM SIGMOD Record, volume 29, pp. 93–104, 2000.
Chandola, Varun, Banerjee, Arindam, and Kumar, Vipin. Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3):15, 2009.
Charikar, Moses, Chekuri, Chandra, Goel, Ashish, Guha, Sudipto, and Plotkin, Serge. Approximating a finite metric by a small number of tree metrics. Proceedings of Foundations of Computer Science, pp. 379–388, 1998.
Dasu, Tamraparni, Krishnan, Shankar, Venkatasubramanian, Suresh, and Yi, Ke. An information-theoretic approach to detecting changes in multi-dimensional data streams. In Proc. Symp. on the Interface of Statistics, Computing Science, and Applications. Citeseer, 2006.
Efraimidis, Pavlos S. and Spirakis, Paul G. Weighted random sampling with a reservoir. Information Processing Letters, 97(5):181–185, 2006.
Emmott, Andrew F., Das, Shubhomoy, Dietterich, Thomas, Fern, Alan, and Wong, Weng-Keen. Systematic construction of anomaly detection benchmarks from real data. In ACM SIGKDD Workshop on Outlier Detection and Description, pp. 16–21, 2013.
Eskin, Eleazar, Arnold, Andrew, Prerau, Michael, Portnoy, Leonid, and Stolfo, Sal. A geometric framework for unsupervised anomaly detection. In Barbará, Daniel and Jajodia, Sushil (eds.), Applications of Data Mining in Computer Security, pp. 77–101. Boston, MA, 2002.
Ester, Martin, Kriegel, Hans-Peter, Sander, Jörg, and Xu, Xiaowei. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, volume 96, pp. 226–231, 1996.
Finkel, R. A. and Bentley, J. L. Quad trees: a data structure for retrieval on composite keys. Acta Informatica, 4(1):1–9, 1974.
Guha, Sudipto, Meyerson, Adam, Mishra, Nina, Motwani, Rajeev, and O'Callaghan, Liadan. Clustering data streams: Theory and practice. IEEE Trans. Knowl. Data Eng., 15(3):515–528, 2003.
Guttman, Antonin. R-trees: A dynamic index structure for spatial searching. In SIGMOD, pp. 47–57, 1984.
Huang, Hao and Kasiviswanathan, Shiva Prasad. Streaming anomaly detection using randomized matrix sketching. Proceedings of the VLDB Endowment, 9(3):192–203, 2015.
Johnson, William B. and Lindenstrauss, Joram. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26. Providence, RI: American Mathematical Society, 1984.
Kifer, Daniel, Ben-David, Shai, and Gehrke, Johannes. Detecting change in data streams. In VLDB, pp. 180–191, 2004.
Knorr, Edwin M. and Ng, Raymond T. A unified notion of outliers: Properties and computation. In KDD, pp. 219–222, 1997.
Knorr, Edwin M. and Ng, Raymond T. Algorithms for mining distance-based outliers in large datasets. In VLDB, pp. 392–403, 1998.
Knorr, Edwin M. and Ng, Raymond T. Finding intensional knowledge of distance-based outliers. In VLDB, volume 99, pp. 211–222, 1999.
Knorr, Edwin M., Ng, Raymond T., and Tucakov, Vladimir. Distance-based outliers: algorithms and applications. VLDB Journal, 8(3-4):237–253, 2000.
Lavin, Alexander and Ahmad, Subutai. Evaluating real-time anomaly detection algorithms – the Numenta anomaly benchmark. arXiv:1510.03336, 2015.
Lawless, Stephen T. Crying wolf: false alarms in a pediatric intensive care unit. Critical Care Medicine, 22(6):981–985, 1994.
Lindvall, T. Lectures on the Coupling Method. Wiley, New York, 1992.
Liu, Fei Tony, Ting, Kai Ming, and Zhou, Zhi-Hua. Isolation-based anomaly detection. ACM Trans. Knowl. Discov. Data, 6(1):3:1–3:39, March 2012.
Tan, Swee Chuan, Ting, Kai Ming, and Liu, Fei Tony. Fast anomaly detection for streaming data. In IJCAI, pp. 1511–1516, 2011.
Tsien, Christine L. and Fackler, James C. Poor prognosis for existing monitors in the intensive care unit. Critical Care Medicine, 25(4):614–619, 1997.
Vitter, Jeffrey S. Random sampling with a reservoir. ACM Transactions on Mathematical Software, 11(1):37–57, 1985.
Yu, Dantong, Sheikholeslami, Gholamhosein, and Zhang, Aidong. FindOut: finding outliers in very large datasets. Knowledge and Information Systems, 4(4):387–412, 2002.
Zhang, Ji and Wang, Hai. Detecting outlying subspaces for high-dimensional data: the new task, algorithms and performance. Knowledge and Information Systems, 10(3):333–355, 2006.
|
General
|
consultant
|
Best Practices
|
Running_Adobe_Experience_Manager_on_AWS
|
Running Adobe Experience Manager on AWS First published July 2016 Updated November 25 202 0 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Why use AEM on AWS? 1 Adobe Experien ce Manager Overview 3 AEM Platform Overview 3 Repositories 4 AEM Implementation on AWS 6 Self or Partner Managed Deployment 6 AEM Managed Services 6 Architecture Options 7 Reference Architecture 7 Reference Ar chitecture Components 7 AEM OpenCloud 11 Security 15 Compliance and GovCloud 17 Digital Asset Management 18 Automate d Deployment 18 Automated Operations 19 Additional AWS Services 20 Conclusion 20 Contributors 20 Further Reading 21 Document Revisions 21 Abstract This whitepaper outlines the benefits and strategy for hosting for Adobe Experience Manager ( AEM ) on Amazon Web Services ( AWS ) It discusses various migration strategies architecture choices and deployment strategies including a reference architecture for self hosting on AWS It also provides guidance for disaster recovery DevO ps and high compliance workloads su ch as government finance and healthcare This whitepaper is for technical leaders and business leaders responsible for deploying and managing AEM on AWS Amazon Web Services Running Adobe Experience Manager on AWS 1 Introduction Delivering a fast secure and seamless experience is essential i n today’s digital marketing environment The need to reach a broader audience across all devices is essential and a shorter time to market can be a differentiator Companies are turning to cloud based solutions to boost business agility harness new oppor tunities and gain cost efficiencies Adobe Experience Manager (AEM) is a comprehensive content management solution for building websites mobile apps and forms AEM makes it easy to manage your marketing content and assets Adopting AWS for running AEM presents many benefits such as increased business agility added flexibility and reduced costs This whitepaper provides technical guidance for running AEM on AWS With any deployment on AWS there are many different considerations and options so your approach might be different from the approach we walk through in this paper Lastly th is whitepaper concludes by discussing security and compliance architectural components connectivity and a strategy you can employ for migration Why use AEM on AWS? 
Hosting AEM on AWS offers some key benefits such as global capacity security reliability fault tolerance programmability and usability This section discusses several ways in which deploying AEM on AWS is different from deploying it to an onpremises infrastructure Flexible Capacity One of the benefits of using the AWS Cloud is the ability to scale up and down as needed When using AEM you have full freedom to scale all of your environments quickly and cost effectively giving you opportu nities to establish new development quality assurance (QA) and performance testing environments AEM is frequently used in scenarios that have unknown or significant variations in traffic volume The on demand nature of the AWS platform allows you to sca le your workloads to support your unique traffic peaks during key events such as holiday shopping seasons major sporting events and large sale events Amazon Web Services Running Adobe Experience Manager on AWS 2 Flexible capacity also streamlines upgrades and deployments AWS makes it very easy to set up a paral lel environment so you can migrate and test your application and content in a production like environment Performing the actual production upgrade itself can then be as simple as the change of a domain name system (DNS) entry Broad Set of Capabilities As a leading web content management system solution AEM is often used by customers as the foundation of their digital marketing platform Running AEM on AWS provides customers with the benefits of easily integrating third party solutions for auxiliary expe riences such as blogs and provid ing additional tools for supporting mobile delivery analytics and big data management You can integrate the open and extensible APIs of both AWS and AEM to create powerful new combinations for your firm Also AEM can be used to augment or create headless commerce architectures seamlessly With services like Amazon Simple Notification Service ( Amazon SNS) Amazon Simple Queue Service ( Amazon SQS) and AWS Lambda AEM functionality can easily be integrated with other third party functionalit ies in a decoupled fashion AWS can also provide a clean manageable and auditable approach to decoupled integration with backend systems such as Customer Relationship Management (CRM) and commerce systems Benefits of Cloud and Global Availability Organizations considering a transition to the cloud are often driven by their need to become more agile and innovative The traditional capital expenditure (Capex) funding model makes it difficult to quickly test new ideas The AWS Cloud model gives you the agility to quickly spin up new instances on AWS and the ability to try out new services without investing in large and upfront sunk costs ( that is costs that have already been incurred and can’t be recovered) AWS helps to lower customer costs through its pay forwhat youuse pricing model Also as of writing AWS Global Infrastructure spans 24 geographic regions around the world enabling customers to deploy on a global footprint quickly and easily Security and High Compliance Workloads Using AWS you will gain the control and confiden ce you need to safely run your business with the most flexible and secure cloud computing environment available today With AWS you can improve your ability to meet core security and compliance requirements with a comprehensive set of services and feature s The AWS Compliance Amazon Web Services Running Adobe Experience Manager on AWS 3 Program s will help you understand the robust controls in place at AWS to maintain 
security and compliance in the cloud Compliance certifications and attestations are assesse d by a third party independent auditor Running AEM on AWS provides customers with the benefits of leveraging the compliance and security capabilities of AWS along with the ability to monitor and audit access to AEM using AWS Security Identity and Compliance services AWS also offers the GovCloud (US) Regions which are designed to host sensitive data regulate workloads and address the most str ingent US government security and compliance requirements Adobe Experience Manager Overview This section highlights some of the key technical elements for AEM and offers some best practice recommendations This whitepaper focuses on AEM 65 (released April 2019) AEM Platform Overview A standard AEM architecture consists of three environments: author publish and dispatcher Each of these environments consists of one or more instances Figure 1 – Sample AEM Architecture The author environment is used for crea ting and managing the content and layout of an AEM experience It provides functionality for reviewing and approving content updates and publishing approved versions of content to the publish environment Amazon Web Services Running Adobe Experience Manager on AWS 4 The publish environment delivers the experience to the intended audience It renders the actual pages with an ability to personalize the experience based on audience characteristics or targeted messaging The author and publish instances are Java web applications that have identical installed software T hey are differentiated by configuration only The dispatcher environment is a caching and/or load balancing tool that helps realize a fast and dynamic web authoring environment For caching the dispatcher works as part of an HTTP server such as Apache HTTP Server with the aim of storing (or caching) as much of the static website content as possible and accessing the website's publisher layout engine as infrequently as possible For cachin g the dispatcher module uses the web server's ability to serve static content The dispatcher places the cached documents in the document root of the web server Repositories Within AEM everything is content and stored in the underlying repository AEM’s repository is called CRX it imple ments the Content Repository API for Java ( JCR) and it is based on Apache Jackrabbit Oak Figure 2 – AEM Storage Options The Oak storage layer provides an abstraction layer for the actual storage of the content MicroKernels act as persistence managers in AEM There are two primary storage implementations available in AEM 6: Tar Storage and MongoDB Storage The Tar storage uses tar files It stores the content as various types of records within larger segments Journals are use d to track the latest state of the repository The MongoDB Amazon Web Services Running Adobe Experience Manager on AWS 5 storage leverages MongoDB for sharding and clustering The repository tree is kept in one MongoDB database where each node is a separate document At a high level Tar MicroKernel (TarMK) is used f or performance and MongoDB is used for scalability Publish instances are always TarMK Multiple publish instances with each instance running its own TarMK are referred to as TarMK farm This is the default deployment for publish environments Author instances can either use TarMK for a single author instance or MongoDB when horizontal scaling is required For TarMK author instance deployments a cold standby TarMK instance can be configured in another availability zone to 
provide backup in case the primary author instance fails although the failover is not automatic TarMK is the default persistence system in AEM for both author and publish configurations Although AEM can be configured to use a different persistence system (such as MongoDB ) TarMK is performance optimized for typical JCR use cases and is very fast TarMK uses an industry standard data format that can be quickly and easily backed up providing high performance and reliable data storage with minimal operational overhead and lower total cost of ownership (TCO) MongoDB is recommended for AEM author deployments when there are more than 1000 unique users per day 100 concurrent users or high volumes of page edits (For details r efer to When to use Mongo DB ) MongoDB provides high availability redundancy and automated failovers for author instances although performance can be lower than TarMK A minimum deployment with MongoDB typically involves a MongoDB replica consisting of one primary node and two secondary nodes with each node running in its separate availability zone In AEM binary data can be stored independently from the content nodes The binary data is stored in a data store whereas content nodes are stored in a node store You can use Amazon Simple Storage Service (Amazon S3) as a shared datasto re between publish and author instances to store binary files This approach makes the cluster high performant For details see How to configure S3 as a datastore Amazon Web Services Running Adobe Experience Manager on AWS 6 AEM Implementation on AWS This section outline s the following two deployment options and the key design elements to consider for deploying AEM on AWS • Self or partner managed deployment • AEM Managed Services by Adobe Self or Partner Managed Deployment In a self managed deployment the organization itself is responsible for the deployment and maintenance of AEM and the underlying AWS infrastructure In partner managed deployment the organizat ion engages with a partner from the AWS Partner Network (APN) for the deployment and maintenance of AEM and the underlying AWS infrastructure AEM customizations in both models can be done by the organizatio n or the partner For organizations who cannot manage their own deployment of AEM on AWS (either because they do not have the resources or because they are not comfortable) there are several APN partners that specialize in providing managed hosting deploy ments of AEM on AWS These companies take care of all aspects of deploying securing patching and maintaining AEM Some partners also provide design services and custom development for AEM You can use AWS Partner Finder to find and compare providers that specialize in Adobe products on AWS AEM Managed Services AEM Managed Services by Adobe enables customers to launch faster by deploying on the AWS cloud and also by leaning on best practices and support from Adobe Organizations and business users can engage customers in minimal time drive market share and focus on creating innovative marketing campaigns while reducing the burden on IT Cloud Manager part of the AEM Managed Services offering is a self service portal that further enables organizations to self manage AEM Manager in the cloud It includes a continuous integration and continuous delivery (CI/CD) pipeline that lets IT teams and implement ation partners speed up the delivery of customizations or updates without compromising performance or security Cloud Manager is only available for Adobe Managed Service customers Amazon Web Services 
Running Adobe Experience Manager on AWS 7 Architecture Options This section present s a reference architecture for run ning AEM on AWS along with various architectural options to consider when planning AEM on AWS deployment Alternately you can also consider adopt ing AEM OpenCloud an open source framework for running AEM on AWS Reference Architecture The following reference architecture is recommended for both self or partner managed deployment methods For reference architecture details see Hosting Adobe Experience Manager on AWS Figure 3 –AEM on AWS Reference Architecture Reference Architecture Components Architecture Sizing For AEM the right instance type depends on th e usage scenario For AEM author and publish instances in the most common publishing scenario a solid mix of memory Amazon Web Services Running Adobe Experience Manager on AWS 8 CPU and I/O performance is necessary Therefore the Amazon EC2 General Purpose M5 family of instances are good candidate s for these environments depending upon sizing Amazon EC2 M5 Instan ces are the next generation of the Amazon EC2 General Purpose compute instances M5 instances offer a balance of compute memory and networking resources for a broad range of workloads Additionally M5d M5dn and M5ad instances have local storage offer ing up to 36TB of NVMe based SSDs AEM Dispatcher is installed on a web server (Apache httpd on Amazon EC2 instance ) and it is a key caching layer It provides caching load balancing and application security Therefore sizing memory and compute is im portant but optimization for I/O is critical for this tier Amazon Elastic Block Store ( Amazon EBS) I/O optimized volumes are recommended Each dispatcher instance is mapped to a publish instance in a one toone fashion in each availability zone For all of these instances Amazon EBS optimization is important EBS volumes on which AEM is installed should use either General Purpose SSD (GP2) volumes or provisioned Input/ Output operations Per Second (IOPS) volumes This configuration provides a specific level of performance and lower latency for operations Adobe recommends Intel Xeon or AMD Opteron CPU with at least 4 cores and 16 GB of RAM for AEM environments This translates to Amazon EC2 M5XL instance type Typically you can start with Amazon EC2 M52XL instance type and then adjust based on your workload needs For guidance on selecting the right instance r efer to the Adobe hardware sizing guide The specific sizing for the number of servers you need depends on your AEM use case (for example experience management or digital asset management) and the level of caching that should be applied At minimum you need five total servers for a high availability configuration utilizing two Availability Zones This architecture place s a dispatcher publi sher pair in each of the two Availability Zones and a single author node in one Availability Zone (fronting each of the publish instances with a dispatcher instance) For guidelines for calculating the number of servers required refer to the Adobe support site Load Balancing In an AEM setup Elastic Load Balancing is configured to balance traffic to the dispatchers By default a load balancer distributes incoming requests evenly across its enabled Availability Zo nes (AZs) To ensure that a load balancer distributes incoming Amazon Web Services Running Adobe Experience Manager on AWS 9 requests evenly across all back end instances (regardless of the Availability Zone that they are in ) enable cross zone load balancing For authenticated AEM 
experiences authentication is main tained by a login token When a user logs in the token information is stored under the tokens node of the corresponding user node in the repository The value of the token ( that is the session ID) is also stored in the browser as a cookie named login token In this case the load balancer should be configured for sticky sessions routing requests with the login token cookie to the same instance AEM can be configured to recognize the authentication cookie across all publish instance s However it also req uires that all relevant user session information ( for example a shopping cart) is available across all publish instances Elastic Load Balancing can be used in front of the dispatchers to provide a Single CNAME URL for the application The load balancer in conjunction with AWS Certificate Manager can be used to provide an HTTPS access and to offload SSL By using the load balancer you can further secure your website deployment by moving the publisher instances into a private subnet allowing access from only the load balancer The load balancer can also translate the port access from port 80 to the default publish port 4503 High Availability For a highly available AEM architecture the architecture should be set up to leverage AWS strengths Configure e ach instance in the AEM cluster for Amazon EC2 Auto Recovery Additionally when the clu ster is built in conjunction with a load balancer you can use AWS Auto Scaling to automatically provision nodes across multiple Availability Zones We recommend that you provision nodes across multiple Availability Zones for high availability and use multiple AWS Regions to address global deployment considerations as needed In a multi Region deployment you can set up Amazon Route 53 to perform DNS failover based on health checks Scaling A simple way to accomplish scaling is to create separate Amazon Machine Images (AMIs) for the publish instance dispatcher instance (mapped to publish) and dispatcher instance (mapped to author if in use) Three separate launch configurations can be created using these AMIs and included in separate Auto Scaling groups Newly launched dispatcher instances require a corresponding publish instance and need to author instances to receive future invalidation calls AWS Lambda can provide scaling logic in response to scale up/down events from Auto Scaling groups The Amazon Web Services Running Adobe Experience Manager on AWS 10 scaling logic consists of pairing/unpair ing the newly launched dispatcher instance to an available publish instance (or the other way around ) updat ing the replication agent (reverse replication if applicable) between the newly launched publish instance and author instance and updat ing AEM content health check alarms Each d ispatcher instance is mapped to a publish instance in a one toone fashion in separate availability zone s For faster startup and synchronization you can place the AEM installation on a separate Amazon EBS volume By taking frequent snapshots of the volume and attaching those snapshots to the newly launched instances the need to repl icate large amounts of data from the author can be cut down In the startup process the publish instance can then trigger author —publish replication to fully ensure the latest content Content Delivery AEM can use a content delivery network (CDN) such as Amazon CloudFront as a caching layer in addition to the standard AEM dispatcher When you use a CDN you need to consider how content is invalidated and refreshed in the CDN when content 
is updated Explicit configuration regarding how long particular resources are held in the CloudFront cache along with expiration and cache control headers sent by dispatcher can help in controlling the CDN cache Cache control headers can be controlled by using the mod_expires Apache Module For API based invalidation associated with content replication o ne approach is to build a custom invalidation workflow and set up an AEM Replication Agent that will use your own ContentBuilder and TransportHandler to invalidate the Amazon CloudFront cache using API For more details r efer to Using Dispatcher with a CDN Dynamic Content The dispatcher is the caching layer with the AEM product It allows for defining caching rules at the web server layer To realize the full benefit of the dispatcher pages should be fully cacheable Any element that isn’t cacheable will “break” the cache functionality To incorporate dynamic elements in a static page the recommended approach is to use client side JavaScript Edge Side Includes (ESI s) or web server level Server Side Includes (SSI s) Within an AWS environment ESIs can be configured using a solution such as Varnish replacing the dispatcher However using such configuration may not be supported by Adobe Amazon Web Services Running Adobe Experience Manager on AWS 11 Amazon S3 Data Store Binary data can be stored independently from the content nodes in AEM When deploying on AWS the binary data store can be Amazon S3 simplifying management and backups Also the binary data store can then be shared across author instances and even betwee n author and publish instances reducing overall storage and data transfer requirements Refer to Amazon S3 Dat a Store documentation by Adobe to learn how to configure S3 for AEM AEM OpenCloud AEM OpenCloud is an open source platform for running AEM on AWS It provides an outofthebox solution for provisioning a highly available AEM architecture which implements auto scaling auto recovery chaos testing CDN multi level backup blue green deployment repository upgrade security and monitoring capab ilities by leveraging a multitude of AWS services AEM OpenCloud code base is open source and available on GitHub with an Apache 2 license The code base is maintained by Shine Solutions Group an APN Partner You are free to use AEM OpenCloud on your own or engage with the Shine Solution s Group for custom use cases and implementation support AEM OpenCloud supports multiple AEM versions from 62 to 65 using Amazon Linux 2 or RHEL7 operating system with two architecture options: fullset and consolidated This platform can also be built and run in multiple AWS Regions It is highly configurable and provides a number of customization points where users can provision various other software into their AEM environment provisioning automation AEM OpenCloud is available through the AEM OpenCloud on AWS Quick Start an architecture based on AWS best practices you easily launch in a few clicks AEM OpenCloud FullSet Architecture A fullset architecture is a full featured environment suitable for production and staging environments It includes AEM Publish Author Dispatcher and Publish Dispatcher EC2 instances within Auto Scaling groups which (combined with an Orche strator application ) provide the capability to manage AEM capacity as the instances scale out and scale in corresponding to the load on the Dispatcher instances Orchestrator application manages AEM replication and flush agents as instances are created and terminated This architecture also includes chaos 
testing capability by using Netflix Chaos Monkey which can be configured to randomly terminate either one of those instances within the Amazon Web Services Running Adobe Experience Manager on AWS 12 autoscaling groups or allow the architecture to live in production continuously verifying that AEM OpenCloud can automatically recover from failure AEM Author Primary and Author Standby are managed separately where a failure on Author Primary instance can be mitiga ted by promoting an Author Standby to become the new Author Primary as soon as possible while a new environment is being built in parallel and will take over as the new environment replacing the one which lost its Author Primary Fullset architecture us es Amazon CloudFront as the CDN sitting in front of AEM Publish Dispatcher load balancer providing global distribution of AEM cached content Fullset offers three types of content backup mechanisms: AEM package backup live AEM repository EBS snapshots (taken when all AEM instances are up and running ) and offline AEM repository EBS snapshots (taken when AEM Author and Publish are stopped ) You can u se any of these backups for blue green deployment providing the capability to replicate a complete environment or to restore an environment from any point of time Figure 4 – AEM OpenCloud Full Set Architecture Amazon Web Services Running Adobe Experience Manager on AWS 13 On the security front this architecture provides a minimal attack surface with one public entry point to either Amazon CloudFront distribution or an AEM Publish Dispatcher load balancer whereas the other entry point is for AEM Author Dispatcher load balancer AEM OpenCl oud supports encryption using AWS Key Management Service (AWS KMS ) keys across its AWS resources The f ullset architecture also includes a n Amazon CloudWatch Monitoring Dashboard which visualizes the capacity of AEM Author Dispatcher Author Primary Author Standby Publish and Publish Dispatcher along with their CPU memory and disk consumptions Amazon CloudWatch Alarms are also configured across the most important AWS resources allow ing notification mechanism via an SNS topic Consolidated Architecture A consolidated architecture is a cut down environment where an AEM Author Primary an AEM Publish and an AEM Dispatcher are all running on a single Amazon EC2 instance This architecture is a low cost alternative suitable for development and testing environments This architecture also offers those three types of backup just like fullset architecture where the backup AEM package and EBS snapshots are interchangeable between consolidated and fullset environments This option is useful for example when you want to restore production backup from a fullset environment to multiple development environments running consolidated architecture Another example is if you want ed to upgrade an AEM repository to a newer version in a development environment which is then pushed through to testing staging and eventua lly production Amazon Web Services Running Adobe Experience Manager on AWS 14 Figure 5 – AEM OpenCloud Consolidated Architecture Environment Management To manage multiple environments with a mixture of fullset and consolidated architectures AEM OpenCloud has a Stack Manager that handles the command executions within AEM instances via AWS Systems Manager These commands include taking backups checking environment readiness running the AEM security checklist enabling and disabling CRX DE and SAML deploying multiple AEM packages configured in a descriptor flushing AEM 
Dispatcher cache and promoting the AEM Author Standby instance to Primary Other than the Stack Manager there is also AEM OpenCloud Manager which currently provides Jenkins pipelines for creating and terminating AEM fullset and consolidated architectures baking AEM Amazon Machine Images (AMIs) executing operational tasks via Stack Manager and upgrading an AEM repository between versions (for example from AEM 62 to 64 or from AEM 64 to 65 ) Amazon Web Services Running Adobe Experience Manager on AWS 15 Figure 6 – AEM OpenCloud Stack Manager Security The security of the A EM hosting environment can be broken down into two areas: application security and infrastructure security A crucial first step for application security is to follow the Security Checklist for AEM and the Dispatcher Security Checklist These checklists cov er various parts of security considerations from running AEM in production mode to using mod_rewrite and mod_security modules from Apache to prevent Distributed Denial of Service ( DDoS) attacks and cross site scripting ( XSS) attacks From an infrastructure level AWS provides several security services to secure your environment These services are grouped into five main categories – network security; data protection; access control; d etection audit monitoring and logging ; and incident response Networ k Security One of the core components of network security is Amazon V irtual Private Cloud (Amazon VPC) This service provides multiple layers of network security for your application such as public and private subnets security groups and network access Amazon Web Services Running Adobe Experience Manager on AWS 16 control lists for subnet s Also VPC endpoints for S3 enable you to privately connect your VPC to Amazon S3 Amazon CloudFront can offload direct access to your backend infrastructure and using the Web Application Firewall (WAF) provided by the AWS WAF service you can apply rules to prevent the application from getting compromised by scripted attacks The same r ules that are encoded in Apache mod_security on the dispatcher can be moved or replicated in AWS WAF Since AWS WAF integrates with Amazon CloudFront CDN this enables earlier detection minimizing overall traffic and impact AWS WAF provides centralized c ontrol automated administration and real time metrics Additionally AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS AWS Shield pr ovides always on detection and automatic inline mitigations that minimize application downtime and latency so there is no need to engage AWS Support to benefit from DDoS protection There are two tiers of AWS Shield : Standard and Advanced All AWS custome rs benefit from the automatic protections of AWS Shield Standard at no additional charge Data Protection Organizations should encrypt data at rest and in transit AEM provides SSL wizard to easily configure SSL certificates AWS data protection services provide encryption and key management and threat detection that continuously monitors and protects your AWS infrastructure For exam ple AWS Certificate Manager can p rovision manage and deploy public and private SSL/TLS certificates ; AWS KMS can help with Key storage and management ; and Amazon Macie can d iscover and protect your sensitive data at scale Access Control AWS Identity & Access Management (IAM) helps securely manage access to AWS services and resources In addition AWS provides identity services to connect your on prem directory service or use AWS 
Directory Service as a managed Microsoft Active Directory to provide access to AEM infrastructure as needed within your organization Detection Audit Monitoring and Logging Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads With AWS Security Hub you have a single place that aggregates organizes and prioritizes your security alerts or findings from multiple AWS services Amazon Web Services Running Adobe Experience Manager on AWS 17 such as Amazon GuardDuty Amazon Inspector and Amazon Macie as well as f rom APN Partner solutions AWS also provides audit tools such as AWS Trusted Advisor which inspects your AWS environment and makes recommendations for cost saving improving system performance and reliability and security Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices After performing an assessment Amazon Insp ector produces a detailed report with prioritized steps for remediation This can support system management and gives security professionals the necessary visibility into vulnerabilities that need to be fixed In addition to Amazon Inspector you can use other third party products such as Burp Suite or Qualys SSL Test (for certificate validation Finally havi ng an audit log of all API actions and configuration changes can be useful in determining what changed and who changed it AWS CloudTrail and AWS Config provide you with the capability to capture extensive audit logs We recommend that you enable these services in your hosting environment Incident Response AWS provides services such as AWS Lambda and AWS Config Rules which can evaluate whether your AWS resources comply with your desir ed settings and set them back into compliance or notify you Amazon Detective is another service that simplifies the process of investigating security findings and identifying the root cause Amazon Detecti ve analyzes events from multiple data sources such as VPC Flow Logs AWS CloudTrail logs and Amazon GuardDuty findings and automatically creates a graph model that provides you with a unified interactive view of your resources users and the interactions between them over time Compliance and GovCloud The AWS GovCloud (US) gives government customers and their partners the flexibility to architect secure cloud solutions that comply with many compliance programs (FedRAMP High FISMA DoD SRG ITAR and CJIS to name a few) AWS GovCloud (USEast) and (US West) Regions are operated by employees who are US citizens on US soil AWS GovCloud (US) is only accessible to US entities and root account holders who pass a screening process Service mapping t o compliance programs is detailed on the AWS Services in Scope by Compliance Program page Amazon Web Services Running Adobe Experience Manager on AWS 18 Digital Asset Management AEM includes a Digital Asset Management (DAM) solution called AEM Asset s AEM assets enables your enterprise users to manage and distribute digital assets such as images videos documents audio clips 3D files and rich media When planning for your AWS architecture you should evaluate the potential use of the AEM Assets solution as part of your planning With AEM Assets the number of large files usually increases and often involves resource intensive processes such as image transformations and renditions Various architecture best practices should be considered depending on the scenario and they are described in detail in Best Practices 
for Assets Automated Deployment AWS provides API access to all AWS servi ces and Adobe does this for AEM as well Many of the various commands to deploy code or content or to create backups can be invoked through an HTTP service interface This allows for a very clean organization of the continuous integration and deployment process with the use of Jenkins as a central hub invoking AEM functionality through CURL or similar commands Jenkins can support manual scheduled and triggered deployments and can be the central point for your AEM on AWS deployment If necessary you can enable additional automation using Jenkins with AWS CodeBuild and AWS CodeDeploy enabling the creation of a complete environment from the Jenkins console Refer to Set up a Jenkins Build Server on AWS to set up Jenkins Amazon Web Services Running Adobe Experience Manager on AWS 19 Figure 7 – Example CI Setup for an AEM Jenkins Architecture Automated Operations One of the key benefits of running AEM on AWS is the str eamlined AEM Operations process To provision instances AWS CloudFormation or AWS OpsWorks can be leveraged to fully automate the deployment process fro m setting up the architecture to provisioning the necessary instances Using the AWS CloudFormation embedded stacks functionality scripts can be organized to support the different architectures outlined in the earlier sections Also AEM OpenCloud manager provides automated operations functionality out of the box with little effort When using AEM’s Tar Storage repository content is stored on the file system To create an AEM backup you must create a file system snapshot You can make a file system snapshot on AWS through Amazon Data Lifecycle Manager Alternately you can create a centralized b ack up plan using AWS Backup You should use Amazon Data Lifecycle Manager when you want to automate the creation retention and deletion of EBS snapsh ots You should use AWS Backup to manage and monitor backups across the AWS services you use including EBS volumes from a single place Lastly review the best practices and checks (such as log file monitoring AEM performance monitoring and Replication Agent monitoring ) outlined in the Monitoring and Maintaining AEM guide to ensure smooth operations of your AEM environment Amazon Web Services Running Adobe Experience Manager on AWS 20 Additional AWS Services You can use additional services and capabilities from both AWS and the AEM platform to add further value to your AEM deployment on AWS With AEM you can integrate with a variety of thirdparty services outofthe box as well as Amazon SNS for mobile notifications relating to changes to the AEM environment AEM offers tools to manage targeting within experiences delivered through the solution Adobe also has complementary products (which integrate well with AEM ) that further personalize and target the experience for customers Combined with AWS services such as Amazon Personalize Amazon Kinesis and AWS Lambda you can create a powerful targeting engine to deliver onetoone personalization Conclusion This paper presented the business and technology drivers for running AEM on AWS along with the strategies and considerations Running AEM on AWS provides a secure and scalable foundation for delivering great digital ex periences for customers As you prepare for your AEM migration to AWS we recommend that you consider the guidance outlined in this document Contributors Contributors to this document include : • Anuj Ratra Sr Solutions Architect Amazon Web Services • Cliffano Subagio Principal 
Engineer, Shine Solutions Group
• Michael Bloch, Senior DevOps Engineer, Shine Solutions Group
• Matthew Holloway, Manager, Solutions Architects, Amazon Web Services
• Pawan Agnihotri, Sr. Mgr. Solution Architecture, Amazon Web Services
• Martin Jacobs, GVP Technology, Razorfish

Further Reading
For additional information, see:
• Hosting Adobe Experience Manager on AWS Reference Architecture

Document Revisions
November 2020 – Updated Reference Architecture for AEM 6.5. Added AEM OpenCloud framework as an alternative option.
July 2016 – First publication
|
General
|
consultant
|
Best Practices
|
Running_Containerized_Microservices_on_AWS
|
Running Containerized Microservices on AWS First Published November 1 2017 Updated August 5 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Componentization Via Services 2 Orga nized Around Business Capabilities 4 Products Not Projects 7 Smart Endpoints and Dumb Pipes 8 Decentralized Governance 10 Decentralized Data Management 12 Infrastructure Automation 14 Design for Failure 17 Evolutionary Design 20 Conclusion 22 Contributors 23 Document Revisions 23 Abstract This whitepaper is intended for architects and developers who want to run containerized applications at scale in production on Amazon Web Services (AWS ) This document provides guidance for application lifecycle management security and architectural soft ware design patterns for container based applications on AWS We also discuss architectural best practices for adoption of containers on AWS and how traditional software design patterns evolve in the context of containers We leverage Martin Fowler’s prin ciples of microservices and map them to the twelve factor app pattern and real life considerations This whitepaper gives you a starting point for building microservices using best practices and software design patterns Amazon Web Services Running Containerized Microservices on AWS 1 Introduction As modern microservice sbased applications gain popularity containers are an attractive building block for creat ing agile scalable and efficient microservices architectures Whether you are considering a legacy system or a greenfield appli cation for containers there are well known proven software design patterns that you can apply Microservices are an architectural and organizational approach to software development in which software is composed of small independent services that commun icate to each other There are different ways microservices can communicate but the two commonly used protocols are HTTP request/response over w elldefined APIs and lightweight asynchronous messaging1 These services are owned by small selfcontained t eams Microservices architectures make applications easier to scale and faster to develop This enabl es innovation and accelerat es timetomarket for new features Containers also provide isolation and packaging for software and help you achieve more deployment velocity and resource density As proposed by Martin Fowler2 the characteristics of a microservices architecture include the following : • Componentization via services • Organized ar ound business capabilities • Products not projects • Smart endpoints and dum b pipes • Decentralized governance • Decentralized data management • Infrastructure automation • Design for failure • Evolutionary design These characteristics tell us how a microservices archit ecture is supposed to behave To help achieve these characteristics many 
development teams have adopted the twelve factor app pattern methodology The twelve factors are a set of best practices for building modern app lications that are optimized for cloud computing The twelve factors cover four key areas: deployment scale portability and architecture : Amazon Web Services Running Containerized Microservices on A WS 2 1 Codebase One codebase tracked in revision control many deploys 2 Dependencies Explicitly declare and isolate dep endencies 3 Config Store configurations in the environment 4 Backing services Treat backing services as attached resources 5 Build release run Strictly separate build and run stages 6 Processes Execute the app as one or more stateless processes 7 Port bind ing Export services via port binding 8 Concurrency Scale out via the process model 9 Disposability Maximize robustness with fast startup and graceful shutdown 10 Dev/prod parity Keep development staging and production as similar as possible 11 Logs Treat logs as event streams 12 Admin processes Run admin/management tasks as one off processes After reading this whitepaper you will know how to map the microservices design characteristics to twelve factor app patterns down to the design pattern to be implemented Componentization Via Services In a microservices architecture software is composed of small independent services that communicate over well defined APIs These small components are divided so that each of them does one thing and does it well while cooperat ing to deliver a full featu red application An analogy can be drawn to the Walkman portable audio cassette players that were popular in the 1980s : batteries bring power audio tapes are the medium headphones deliver output while the main tape player takes input through key presses Using them together plays music Similarly microservices need to be decoupled and each should focus on one functionality Additionally a microservices architecture allows for replacement or upgrade Using the Walkman analogy if the headphones are worn out you can replace them without replacing the tape player If an order management service in our store keeping application is falling behind and performing too slow ly you can swap it for a more performant more streamlined Amazon Web Services Running Containerized Microservices on AWS 3 component Such a permutatio n would not affect or interrupt other microservices in the system Through modularization microservices offer developers the freedom to design each feature as a black box That is microservices hide the details of their complexity from other components Any communication between services happens by using well defined APIs to prevent implicit and hidden dependencies Decoupling increases agility by removing the need for one development team to wait for another team to finish work that the first team depend s on When containers are used container images can be swapped for other container images These can be either different versions of the same image or different images altogether —as long as the functionality and boundaries are conserved Containerization is an operating system level virtualization method for deploying and running distributed applications without launching an entire virtual machine (VM) for each application Container images allow for modularity in services They are constructed by building functionality onto a base image Developers operation s teams and IT leaders should agree on base images that have the security and tooling profile that they want These images can then be shared 
throughout the organization as the initial building block Replacing or upgrading th ese base image s is as simple as updating the FROM field in a Dockerfile and rebuilding usually through a Continuous Integration/Continuous Delivery (CI/CD) pipeline Here are the key factors from the twelve factor app pattern methodology that play a role in componentization: • Dependencies (explicitly declare and isolate dependencies) – Dependencies are selfcontained within the container and not shared with other services • Disposability (maximize robustness with fast sta rtup and graceful shutdown) – Disposability is leveraged and satisfied by containers that are easily pulled from a repository and discarded when they stop running • Concurrency (scale out via the process model) – Concurrency consists of tasks or pods (made of containers working together ) that can be auto scaled in and out in a memory and CPU efficient manner As each business function is implemented as its own service the number of containerized services grow s Each service should have its own integration and its own deployment pipeline This increases agility Since c ontainerized services are subject to frequent deployments you need to introduce a coordination layer that that tracks which Amazon Web Services Running Containerized Microservices on AWS 4 containers are running on which hosts Eventually you will want a system that provides the state of containers the resource s available in a cluster etc Container orchestration and scheduling systems enable you to define applications by assembling a set of containers that work together You can think of the definitio n as the blueprint for your applications You can specify various parameters such as which containers to use and which repositories they belong in which ports should be opened on the container instance for the application and what data volumes should be mounted Container management systems enable you to run and maintain a specified number of instances of a container set —containers that are instantiated together and collaborate using links or volumes Amazon ECS refers to these as Tasks Kubernetes refers to them as Pods Schedulers maintain the desired count of container sets for the service Additionally the service infrastructure can be run behind a load balancer to distribute traffic acr oss the container set associated with the service Organized Around Business Capabilities Defining exactly what constitutes a microservice is very important for development teams to agree on What are its boundaries? Is an application a microservice? Is a shared library a microservice? 
Before microservices system architecture would be organized around technological capabilities such as user interface database and server side logic In a microservice s based approach as a best practice each development t eam owns the lifecycle of its service all the way to the customer For example a recommendations team might own development deployment production support and collection of customer feedback In a microservices driven organization small teams act auto nomously to build deploy and manage code in production This allows teams to work at their own pace to deliver features Responsibility and accountability foster a culture of ownership allowi ng teams to better align to the goals of their organization an d be more productive Microservices are as much an organizational attitude as a technological approach This principle is known as Conway’s Law : Amazon Web Services Running Containerized Microservices on AWS 5 "Organizations which design systems are constrained to produce designs which are copies of the communicatio n structures of these organizations" — M Conway3 When architecture and capabilities are organized around atomic business functions dependencies between components are loosely coupled As long as there is a communication contract between services and teams each team can run at its own speed With this approach the stack can be polyglot meaning that developers are free to use the programming languages that are optimal for their component For example the user interface can be written in JavaScript or HTML5 the backend in Java and data processing can be done in Python This means that business functions can drive development decisions Organizing around capabilities mean s that each API team owns the function d ata and performance completely The following are key factors from the twelve factor app pattern methodology that play a role in organizing around ca pabilities: • Codebase (one codebase tracked in revision control many deploys) – Each microservice owns its own codebase in a separate repository and throughout the lifecycle of the code change • Build release run (strictly separate build and run stages) – Each microservice has its own deployment pipeline and deployment frequency This enables the development teams to run microservices at varying speed s so they can be responsive to customer needs • Processes (execute the app as one or more stateless processe s) – Each microservice does one thing and does that one thing really well The micro service is designed to solve the problem at hand in the best possible manner • Admin processes (run admin/management tasks as one off processes) – Each micro service has its own admin istrative or management tasks so that it function s as designed To achieve a microservices architecture that is organized around business capabilities use popular microse rvices design patterns A design pattern is a general reusable solution to a commonly occurring problem within a giving context Amazon Web Services Running Containerized Microservices on AWS 6 Popular microservice design patterns4 5 6: • Aggregator Pattern – A basic service which invokes other services to gather t he required information or achieve the required functionality This is beneficial when you need an output by combining data from multiple microservices • API Gateway Design Pattern – API Gateway also acts as the entry point for all the microservices and cre ates fine grained APIs for different types of clients It can fan out the same request to multiple microservices and similarly 
aggregate the results from multiple microservices • Chained or Chain of Responsibility Pattern – Chained or Chain of Responsibility Design Patterns produces a single output which is a combination of multiple chained outputs • Asynchronous Messaging Design Pattern – In this type of microservices design pattern all the services can communicate with each other but they do not have to communicate with each other sequentially and they usually communicate asynchronously • Database or Shared Data Pattern – This design pattern will enable you to use a database per service and a shared database per service to solve various proble ms These problems can include duplication of data and inconsistency different services have different kinds of storage requirements few business transactions can query the data and with multiple services and d enormalization of dat a • Event Sourcing Des ign Pattern – This design pattern helps you to create events according to change of your application state • Command Query Responsibility Segregator (CQRS) Design Pattern – This design pattern enables you to divide the command and query Using the common CQRS pattern where t he command part will handle all the requests related to CREATE UPDATE DELETE while the query part will take care of the materialized views • Circuit Breaker Pattern – This design pattern enables you to stop the process of the request an d response when the service is not working For example when you need to redirect the request to a different service after certain number of failed request intents Amazon Web Services Running Containerized Microservices on AWS 7 • Decomposition Design Pattern – This design pattern enables you to decompose an application based on business capability or on based on the sub domains Products Not Projects Companies that have mature applications with successful software adoption and who want to maintain and expand their user base will likely be more successful if t hey focus on the experience for their customers and end users To stay healthy simplify operations and increase efficiency your e ngineering organization should treat software components as products that can be iteratively improved and that are constantl y evolving This is in contrast to the strategy of treating software as a project which is completed by a team of engineers and then handed off to an operations team that is responsible for running it When software architecture is broken into small micro services it becomes possible for each microservice to be an individual product For internal microservice s the end user of the product is another team or service For an external microservice the end user is the customer The core benefit of treating so ftware as a product is improved end user experience When your organization treats its software as an always improving product rather than a oneoff project it will produce code that is better architected for future work Rather than taking shortcuts that will cause problems in the future engineers will plan software so that they can continue to maintain it in the long run Software planned in this way is easier to operate maintain and extend Your c ustomers appreciate such dependable software because t hey can trust it Additionally when engineers are responsible for building delivering and running software they gain more visibility into how their software is performing in real world scenarios which accelerates the feedback loop This makes it easier to improve the software or fix issues The following are key factors from the 
twelve factor app pattern methodology that play a role in adopt ing a product mindset for delivering software: • Build release run – Engineers adopt a devops culture that allows them to optimize all three stages • Config – Engineers build better configuration management for software due to their involvement with how that software is used by the customer Amazon Web Services Running Containerized Microservices on AWS 8 • Dev/prod parity – Software treated as a product can be it eratively developed in smaller pieces that take less time to complete and deploy than long running projects which enables development and production to be closer in parity Adopting a product mindset is driven by culture and process —two factors that drive change The goal of your organization’s engineering team should be to break down any walls between the engineers who build the code and the engineers who run the code in production The following concepts are crucial: • Automat ed provisioning – Operations should be automated rather than manual This increases velocity as well as integrates engineering and operations • Selfservice – Engineers should be able to configure and provision their own dependencies This is enabled by containerized envi ronments that allow engineers to build their own container that has anything they require • Continuous Integration – Engineers should check in code frequently so that incremental improvements are available for review and testing as quickly as possible • Cont inuous Build and Delivery – The process of building code that’s been checked in and delivering it to production should be automated so that engineers can release code without manual intervention Containerized microservices help engineering organizations i mplement these best practice patterns by creating a standardized format for software delivery that allows automation to be built easily and used across a variety of different environments including local quality assurance and production Smart Endpoints and Dumb Pipes As your engineering organization transition s from building monolithic architecture s to building microservices architecture s it will need to understand how to enable communications between microservices In a monolith the various component s are all in the same process In a microservices environment components are separated by hard boundaries At scale a microservices environment will often have the various components distributed across a cluster of servers so that they are not even neces sarily collocated on the same server This means there are two primary forms of communication between services: Amazon Web Services Running Containerized Microservices on AWS 9 • Request/Response – One service explicitly invokes another service by making a request to either store data in it or retrieve data from it For e xample when a new user creates an account the user service makes a request to the billing service to pass off the billing address from the user’s profile so that that billing service can store it • Publish/Subscribe – Event based architecture where one se rvice implicitly invokes another service that was watching for an event For example when a new user creates an account the user service publishes this new user signup event and the email service that was watching for it is triggered to email the user asking them to verify their email One architectural pitfall that generally leads to issues later on is attempting to solve communication requirements by building your own complex enterprise service bus for routing 
messages between microservices AWS recomme nds using a message broker such as Amazon MSK Amazon Simple Notification Service (Amazon SNS) or Amazon Simple Queue Service (Amazon SQS ) Microservices architectures favor these tools because they enable a decentralized approach in which the endpoints that produce and consume messages are smart but the pipe between the endpoints is dumb In other words concentrate the logic in the containers and refrain from leveraging (and coupling to) sophisticated buses and messaging services Network communication often plays a central role in distributed systems Service meshes strive to address this issue Here you can leverage the idea of externalizing selected functionalities Service meshes work on a sidecar pattern where you add containers to extend the behavior of existing containers Sidecar is a microservices design pattern where a companion service runs next to your pr imary microservice augmenting its abilities or intercepting resources it is utilizing AWS App Mesh a sidecar container Envoy is used as a proxy for all ingress and egress traffic to the primary microservice Using this sidecar pattern with Envoy you can create the backbone of the service mesh without impacting our applications a service mesh is comprised of a control plane and a data plane In current implemen tations of service meshes the data plane is made up of proxies sitting next to your applications or services intercepting any network traffic that is under the management of the proxies Envoy can be used as a communication bus for all traffic internal to a service oriented architecture (SOA) Sidecars can also be used to build monitoring solutions When you are running microservices using Kubernetes there are multiple observability strategies one of them is using sidecars Due to the modular nature of the sidecars you can use it for your logging and monitoring needs For e xample you can setup FluentBit or Firelens for Amazon Web Services Running Containerized Microservices on AWS 10 Amazon ECS to send logs from containers to Amazon CloudWatch Logs AWS Distro for Open Telemetry can also be used for gathering metrics and sending metrics off to other services Recently AWS has launched managed Prometheus and Grafana for the monitoring/ visualization use cases The core benefit of building smart endpoints and dumb pipes is the ability to decentralize the architecture particularly when it comes to how endpoints are maintained updated and e xtended One goal of microservices is to enable parallel work on different edges of the architecture that will not conflict with each other Building dumb pipes enables each microservice to encapsulate its own logic for formatting its outgoing responses or suppl ementing its incoming requests The following are the key factors from the twelve factor app pattern methodology that play a role in building smart endpoints and dumb pipes: • Port Binding – Services bind to a port to watch for incoming requests and send requests to the port of another service The pipe in between is just a dumb network protocol such as HTTP • Backing services – Dumb pipes allow a background microservice to be attached to another microservice in the same way that you attac h a database • Concurrency – A properly designed communication pipeline between microservices allows multiple microservices to work concurrently For example several observer microservices may respond and begin work in parallel in response to a single even t produced by another microservice Decentralized Governance As your 
organization grows and establishes more code driven business processes one challenge it could face is the necessity to scale the engineering team and enable it to work efficiently in par allel on a large and diverse codebase Additionally your engineering organization will want to solve problems using the best available tools Decentralized governance is an approach that works well alongside microservices to enable engineering organizati ons to tackle this challenge Traffic lights are a great example of decentralized governance City traffic lights may be timed individually or in small groups or they may react to sensors in the pavement However for the city as a whole there is no need for a primary traffic control center in order to keep cars moving Separately implemented local optimizations work together to provide a city wide Amazon Web Services Running Containerized Microservices on AWS 11 solution Decentralized governance helps remove potential bottlenecks that would prevent engineers from bein g able to develop the best code to solve business problems When a team kicks off its first greenfield project it is generally just a small team of a few people working together on a common codebase After the greenfield project has been completed the bus iness will quickly discover opportunities to expand on their first version Customer feedback generates ideas for new features to add and ways to expand the functionality of existing features During this phase engineers will start grow ing the codebase an d your organization will start divid ing the engineering organization into service focused teams Decentralized governance means that each team can use its expertise to choose the best tools to solve their specific problem Forcing all teams to use the same database or the same runtime language isn’t reasonable because the problems they ’re solving aren’t uniform However d ecentralized governance is not without boundaries It is helpful to use standards throughout the organization such as a standard build and code review process because this helps each team continue to function together Source control plays an important role in the decentralized governance Git can be used as a source of truth to operate the deployment and governance strategies For example version control history peer review and rollback can happen through Git withou t needing to use additional tools With GitOps automated delivery pipelines roll out changes to your infrastructure when changes are made by pull request to Git GitOps also makes use of tools that compares the production state of your application with what’s under source control and alerts you if your running cluster doesn’t match your desired state The following are the principles for GitOps to work in practice : 1 Your entire system described declaratively 2 A desired system state version controlled in Git 3 The ability for changes to be automatically applied 4 Software agents that verify correct system state and alert on divergence Most CI/CD tools available today use a push based model A push based pipeline means that code starts with the CI system and then continues its path through a series of encoded scripts in your CD system to push changes to the destination The reason you don’t want to use y our CI/CD system as the basis for your deployments is because of the potential to expose credentials outside of your cluster While it is possible to secure your CI /CD scripts you are still working outside the trust domain of your cluster Amazon Web Services Running 
Containerized Microservices on AWS 12 which is not rec ommended With a pipeline that pulls an image from the repository your cluster credentials are not exposed outside of your production environment The following are the key factors from the twelve factor app pattern methodology that play a role in enablin g decentralized governance: • Dependencies – Decentralized governance allows teams to choose their own dependencies so dependency isolation is critical to make this work properly • Build release run – Decentralized governance should allow teams with differ ent build processes to use their own toolchains yet should allow releasing and running the code to be seamless even with differing underlying build tools • Backing services – If each consumed resource is treated as if it was a third party service then de centralized governance allows the microservice resources to be refactored or developed in different ways as long as they obey an external contract for communication with other services Centralized governance was favored in the past because it was hard to efficiently deploy a polyglot application Polyglot applications need different build mechanisms for each language and an underlying infrastructure that can run multiple languages and frameworks Polyglot architectures had varying dependencies which coul d sometimes have conflicts Containers solve th ese problem s by allowing the deliverable for each individual team to be a common format: a Docker image that contains their component The contents of the container can be any type of runtime written in any l anguage However the build process will be uniform because all containers are built using the common Dockerfile format In addition all containers can be deployed the same way and launched on any instance since they carry their own runtime and dependenci es with them An engineering organization that chooses to employ decentralized governance and to use containers to ship and deploy this polyglot architecture will see that their engineering team is able to build performant code and iterate more quickly Decentralized Data Management Monolithic architectures often use a shared database which can be a single data store for the whole application or many applications This leads to complexities in changing schemas upgrades downtime and dealing with backward compatibility risks A Amazon Web Services Running Containerized Microservices on AWS 13 service based approach mandates that each service get its own data storage and doesn’t share that d ata directly with anybody else All data bound communication should be enabled via services that encompass the data As a result each service team chooses the most optimal data store type and schema for their application T he choice of the database type is the responsibility of the service teams It is an example of decentralized decision making with no central group enforcing standards apart from minimal guidance on connectivity AWS offers many fully managed storage servic es such as object store key value store file store block store or traditional database Options include Amazon S3 Amazon DynamoDB Amazon Relational Database Service (Amazon RDS ) and Amazon Elastic Block Store (Amazon EBS) Decentralized data manag ement enhances application design by allowing the best data store for the job to be used This also removes the arduous task of a shared database upgrade which could be weekends worth of downtime and work if all goes well Since each service team owns it s own data its decision making become s more 
independent The teams can be self composed and follow their own development paradigm A secondary benefit of decentralized data management is the disposability and fault tolerance of the stack If a particular data store is unavailable the complete application stack does not become unresponsive Instead the application goes into a degraded state losing some capabilities while still servicing requests This enables the application to be fault tolerant by desi gn The following are the key factors from the twelve factor app pattern methodology that play a role in organizing around capabilities: • Disposability (maximize robustness with fast startup and graceful shutdown ) – The services should be robust and not dep endent on externalities This principle further allows for the services to run in a limited ca pacity if one or more components fail • Backing services (treat backing services as attached resources ) – A backing service is any service that the app consumes over the network such as data stores messaging systems etc Typically backing services are managed by operations The app should make no distinction between a local and an external service • Admin pro cesses (run admin/management tasks as one off processes ) – The process es required to do the app’s regular business for example running Amazon Web Services Running Containerized Microservices on AWS 14 database migrations Admin processes should be run in a similar manner irrespective of environments To achieve a micr oservices architecture with decoupled data management the following software design patterns can be used: • Controller – Helps direct the request to the appropriate data store using the appropriate mechanism • Proxy – Helps provide a surrogate or placeholder for another object to control access to it • Visitor – Helps represent an operation to be performed on the elements of an object structure • Interpreter – Helps map a service to data store semantics • Observer – Helps define a one tomany dependency between objects so that when one object changes state all of its dependents are notified and updated automatically • Decorator – Helps attach additional responsibilities to an object dynamically Decorators provide a fl exible alternative to sub classing for extending functionality • Memento – Helps capture and externalize an object's internal state so that the object can be returned to this state later Infrastructure Automation Contemporary architectures whether monolit hic or based on microservices can greatly benefit from infrastructure level automation With the introduction of virtual machines IT teams were able to easily replicate environments and create templates of operating system states that they wanted The ho st operating system became immutable and disposable With cloud technology the idea bloomed and scale was added to the mix There is no need to predict the future when you can simply provision on demand for what you need and pay for what you use If an en vironment isn’t needed anymore you can shut down the resources On demand provisioning can be combined with spot compute7 which enables you to request unused compute capacity at steep discounts One useful mental image for infrastructure ascode is to p icture an architect’s drawing come to life Just as a blueprint with walls windows and doors can be transformed into Amazon Web Services Running Containerized Microservices on AWS 15 an actual building so load balancers databases or network equipment can be written in source code and then instantiated Microservices not 
only need disposable infrastructure ascode they also need to be built tested and deployed automatically Continuous integration and continuous delivery are important for monoliths but they are indispensable for microservices Each service needs i ts own pipeline one that can accommodate the various and diverse technology choices made by the team An automated infrastructure provides repeatability for quickly setting up environments These environments can each be dedicated to a single purpose: dev elopment integration user acceptance testing ( UAT) or performance testing and production Infrastructure that is described as code and then instantiated can eas ily be rolled back This drastically reduces the risk of change and in turn promotes innova tion and experiments The following are the key factors from the twelve factor app pattern methodology that play a role in evolutionary design : • Codebase (one codebase tracked in revision control many deploys ) – Because the infrastructure can be described as code treat all code similarly and keep it in the service repository • Config (store config urations in the environmen t) – The environment should hold and share its own specificities • Build release run (strictly sepa rate build and run stages ) – One environment for each purpose • Disposability (maximize robustness with fast startup and graceful shutdown ) – This factor transcends the process layer and bleeds into such downstream layers as containers virtual machines and virtual private cloud • Dev/prod parity – Keep development staging and production as similar as possible Successful applications use some form of infrastructure ascode Resources such as databases container clusters and load balancers can be instant iated from description To wrap the application with a CI /CD pipeline you should choose a code repository an integration pipeline an artifact building solution and a mechanism for deploying these artifacts A microservice should do one thing and do it well This implies that when you build a full application there will potentially be a large number of services Each of these Amazon Web Services Running Containerized Microservices on AWS 16 need their own integration and deployment pipeline Keeping infrastructure automation in mind architects who face this challenge of proliferating services will be able to find common solutions and replicate pipelines that have made a particular service successful An image repository should be used in the CI/CD pipeline to push the containerized image of the microservice We have v arious popular image repositories such as Amazon ECR Redhat Quay Docker Hub JFrog Container registries can be used as part of the infrastructure automation As previously described in the Decentralized Gover nance section GitOps is a popular operational framework for achieving Continuous Delivery Git is used as single source of truth for deploying into your cluster Tools such as Flux runs in your cluster and implements changes based on monitoring Git and image repositories Flux keeps an eye on image repositories detects new images and updates the running configurations based on a configurable policy Continuous Delivery (CD) tools such as ArgoCD Spinnaker can also be leveraged for immediate autonomous deployment to production environments Ultimately the goal is to enable developers to push code updates to container image repositories and have the updated container images of the application sent to multiple environments in minutes There are many ways to successfully deploy in 
phases including the blue/green and canary methods With the blue/green deployment two environments live side by side with one of them running a newer version of the application Traffic is sent to the older version until a swi tch happens that route s all traffic to the new environment You can see an example of this happening in this reference architecture Blue/green deployment Amazon Web Services Running Containerized Microservices on AWS 17 In this case we use a switch of target groups behind a load balancer in order to redirect traffic from the old to the new resources Another way to achieve this is to use services fronted by two load balancers and operate the switch at the DNS level Design for Failure “Everything fails all the time” – Werner Vogels This adage is not any less true in the container world than it is for the cloud Achieving high availability is a top priority for workloads but remains an arduous undertaking for development teams Modern applications running in containers should not be tasked with managing the underlying layers from physical infrastructure like electricity sources or environmental controls all the way to the stability of the underlying operating system If a set of contai ners fails while tasked with deliver ing a service these containers should be re instantiated automatically and with no delay Similarly as microservices interact with each other over the network more than they do locally and synchronously connections ne ed to be monitored and managed Latency and timeouts should be assumed and gracefully handled More generally microservices need to apply the same error retries and exponential backoff with jitter as advised with applications running in a networked environment8 Designing for failure also means testing the design and watching services cope with deteriorating conditions Not all technology departments need to apply th is principle to the extent that Netflix does9 10 but we encourage you to test these mechanisms often Designing for failure yields a self healing infrastructure that acts with the maturity that is expected of recent workloads Preventing emergency calls guarantees a base level of satisfaction for the service owning team This also removes a level of stress that can otherwise grow into accelerated attrition Designing for failure will deliver greater uptime for your products It can shield a company from outages that could erode customer trust Here are the key factors from the twelve factor app pattern methodology that play a role in designing for failure: • Disposabilit y (maximize robustness with fast startup and graceful shutdown ) – Produce lean container images and striv e for processes that can start and stop in a matter of seconds Amazon Web Services Running Containerized Microservices on AWS 18 • Logs (treat logs as event streams ) – If part of a system fail s troubleshooting is nece ssary Ensure that material for forensics exists • Dev/prod parity – Keep development staging and production as similar as possible AWS recomme nds that container hosts be part of a self healing group Ideally container management systems are aware of di fferent data centers and the microservices that span across them mitigating possibl e events at the physical level Containers offer an abstraction from operating system management You can treat container instances as immutable servers Containers will behave identically on a developer’s laptop or on a fleet of virtual machines in the cloud One very useful container pattern for hardening an application’s 
resiliency is the circuit break er With circuit breakers such as Resilience4j Hystrix an application container is proxied by a container in charge of monitoring connection attempts from the application container If connections are successful the circuit breaker container remains in closed status letting communication happen When connections start failing the circuit breaker logic triggers If a pre defined threshold for failure/success ratio is breached the container enters an open status that prevents more connections This mech anism offers a predictable and clean breaking point a departure from partially failing situations that can render recovery difficult The application container can move on and switch to a backup service or enter a degraded state One other useful containe r pattern for application’s resilience is the using Service Mesh which forms a network of microservices communicating with each other Tools such as AWS App Mesh Istio have been available recently to manage and monitor such service meshes Services meshe s have sidecars which refers to a separate process that is installed along with the service in a container set Important feature of the sidecar is that all communication to and from the service is routed through the sidecar process This redirection of co mmunication is completely transparent to the service This service meshes offer several resilience patterns which can be activated by rules in the sidecar and these are Timeout Retry and Circuit Breaker Modern container management services allow develo pers to retrieve near real time event driven updates on the state of containers Docker supports multiple logging drivers (list as of Docker v 2010 ): 11 12 Amazon Web Services Running Containerized Microservices on AWS 19 Driver Description none No logs will be available for the container and Docker logs will not return any output jsonfile The logs are formatted as JSON The default logging driver for Docker syslog Writes logging messages to the syslog facility The syslog daemon must be running on the host machine journald Writes log messag es to journal d The journald daemon must be running on the host machine gelf Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash fluentd Writes log messages to fluentd (forward input) The fluentd daemon must be running on the host machine awslogs Writes log messages to Amazon CloudWatch Logs splunk Writes log messages to splunk using the HTTP Event Collector etwlogs Writes log messages as Event Tracing for Windows (ETW) events Only available on Windows platforms gcplogs Writes log messages to Google Cloud Platform (GCP) Logging local Logs are stored in a custom format designed for minimal overhead logentries Writes log messages to Rapid7 Logentries Sending these log s to the appropriate destination becomes as easy as specifying it in a key/value manner You can then define appropriate metrics and alarms in your monitoring solution Another way to collect telemetry and troubleshooting material from containers is to link a logging container to the application container in a pattern generically referred to as sidecar More specifically in the case of a container working to standardize and normalize the output the pattern is known as an adapter Contain er monitoring is another approach for tracking the operation of a containerized application These system s collect metrics to ensure application running on containers are performing properly Container monitoring solutions use metric capture analytics 
Amazon Web Services Running Containerized Microservices on AWS 20 transaction tracing and visualization Container monitoring covers basic metrics like memory utilization CPU usage CPU limit and memory limit Container monitoring also offers the real time streaming logs tracing and observability that containers need Containers can also be leveraged to ensure that various environments are as similar as possible Infrastructure ascode can be used to turn infrastructure into templates and easily replicate one footprint Evolutionary Design In modern systems architecture design you need to assume that you don’t have all the requirements up front As a result having a detailed design phase at the beginning of a project becomes impractical The services have to evolve through various iteratio ns of the software As services are consumed there are learnings from real world usage that help ev olve their functionality An example of this could be a silent inplace software update on a device While the feature is rolled out an alpha /beta testing strategy can be used to understand the behavior in real time The feature can be then rolled out more broadly or rolled back and worked on using the feedback gained Using deployment techniques such as a canary release a new feature can be tested in an accelerated fashion against it s target audience This provid es early fe edback to the development team As a result of the evolutionary design principle a service team can build the minimum viable set of features needed to stand up the stack and roll it ou t to users The development team doesn’t need to cover edge cases to roll out features Instead the team can focus on the needed pieces and evolve the design as customer feedback comes in At a later stage the team can decide to refactor after they feel confident that they have enough feedback Conducting periodical product workshops also helps in evolution of product design The following are the key factors from the twelve factor app pattern methodology that play a role in evolutionary design: • Codebase (one codebase tracked in revision control many deploys ) – Helps evolve features faster since new feedback can be quickly incorporated • Dependencies (explicitly declare and isolate dependencies ) – Enables quick iterations of the design since features are t ightly coupled with externalities Amazon Web Services Running Containerized Microservices on AWS 21 • Configuration (store configurations in the environment ) – Everything that is likely to vary between deploys (staging production developer environments etc) Config varies substantially across deploys code does not With configurations stored outside code the design can evolve irrespective of the environment • Build release run (strictly separate build and run stages ) – Help roll out new features using various deployment techniques Each release has a specific ID and can be used to gain design efficiency and user feedback The following software design patterns can be used to achieve an evolutionary design : • Sidecar extend s and enhance s the main service • Ambassador creates helper services that send network requests on behalf of a consumer service or application • Chain provides a defined order of starting and stopping containers • Proxy provide s a surrogate or placeholder for another object to control access to it • Strategy defines a family of algorithms encapsulate s each one and make s them interchangeable Strategy lets the algorithm vary independently from the clients that use it • Iterator provides a way to 
access the elements of an aggregate object sequentially wi thout exposing its underlying representation • Service Mesh is a dedicated infrastructure layer for facilitating service toservice communications between microservices using a proxy Containers provide additional tools to evolve design at a faster rate wi th image layers As the design evolves each image layer can be added keeping the integrity of the layers unaffected Using Docker an image layer is a change to an image or an intermediate image Every command (FROM RUN COPY etc) in the Dockerfile causes the previous image to change thus creating a new layer Docker will build only the layer that was changed and the ones after that This is called layer caching Using layer caching deployment times can be reduced Deployment strategies such as a Canary release provide added agility to evolve design based on user feedback Canary release is a technique that’s used to reduce the risk inherent in a new software version release In a canary release the new software is Amazon Web Services Running Containerized Microservices on AWS 22 slowly rolled out to a small subset of users before it’s rolled out to the entire infrastructure and made available to everybody In the diagram that follows a canary release can easily be implemented with containers using AWS primitives As a container announces its health via a health check API the canary directs more traffic to it The state of the canary and the execution is maintained using Amazon DynamoDB Amazon Route 53 Amazon CloudWatch Amazon Elastic Container Service (Amazon ECS) and AWS Step Functions Canary deployment with containers Finally usage monitoring mechanisms ensure that development teams can evolve the design as the usage patterns change with variables Conclusion Microservices can be designed using the twelve factor app pattern methodology an d software design patterns enable you to achieve this easily These software design patterns are well known If applied in the right context they can enable the design benefits of microservices AWS provides a wide range of primitives that can be used to enab le containerized microservices Amazon Web Services Running Containerized Microservices on AWS 23 Contributors The following individuals contributed to this document: • Asif Khan Technical Business Development Manager Amazon Web Services • Pierre Steckmeyer Solutions Architect Amazon Web Service • Nathan Peck Developer Advocate Amazon Web Services • Elamaran Shanmugam Cloud Architect Amazon Web Services • Suraj Muraleedharan Senior DevOps Consultant Amazon Web Services • Luis Arcega Technical Account M anager Amazon Web Services Document Revisions Date Descript ion August 5 2021 Whitepaper updated with latest technical content November 1 2017 First publication Notes 1 https://docsmicrosoftcom/en us/dotnet/architecture/microservices/architect microserv icecontainer applications/communication inmicroservice architecture 2 https://martinfowlercom/articles/microserviceshtml 3 https://enwikipediaorg/wiki/Conway's_law 4 https://microservicesio/patterns/microserviceshtml 5 https://d zonecom/articles/design patterns formicroservices 6 https://docsawsamazoncom/prescriptive guidance/latest/modernization integrating microservices/welcomehtml Amazon Web Services Running Containerized Microservices on AWS 24 7 https://awsamazoncom/blogs/containers/running airflow workflow jobsonamazon eksspotnodes/ 8 https://docsawsamazoncom/general/latest/gr/api retrieshtml 9 https://githubcom/netflix/chaosmonkey 10 
https://github.com/Netflix/SimianArmy 11 https://docs.docker.com/engine/admin/logging/overview/ 12 https://www.eksworkshop.com/intermediate/230_logging/
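As a closing illustration of the circuit breaker pattern described in the Smart Endpoints and Dumb Pipes and Design for Failure sections, the following minimal Python sketch shows the closed/open/half-open behavior that libraries such as Resilience4j or a sidecar proxy such as Envoy provide. It is not the implementation used by those tools; the failure threshold, reset timeout, and the call_billing_service function are hypothetical values chosen only for the example.

import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: fail fast after repeated errors,
    then allow a trial call once a cool-down period has elapsed."""

    def __init__(self, failure_threshold=5, reset_timeout_seconds=30):
        self.failure_threshold = failure_threshold
        self.reset_timeout_seconds = reset_timeout_seconds
        self.failure_count = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout_seconds:
                raise RuntimeError("circuit open: failing fast")
            # Cool-down elapsed: half-open, let a single trial call through
            self.opened_at = None
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failure_count += 1
            if self.failure_count >= self.failure_threshold:
                self.opened_at = time.time()  # trip the breaker
            raise
        self.failure_count = 0  # a success closes the circuit again
        return result

def call_billing_service():
    # Placeholder for a real network call to another microservice
    raise ConnectionError("billing service unreachable")

breaker = CircuitBreaker(failure_threshold=3, reset_timeout_seconds=10)
for _ in range(5):
    try:
        breaker.call(call_billing_service)
    except Exception as exc:
        print(f"request failed: {exc}")

In a containerized deployment this logic more commonly lives in a sidecar proxy that intercepts the application container's traffic, as described earlier, so the application code itself stays unchanged.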
|
General
|
consultant
|
Best Practices
|
Running_Neo4j_Graph_Databases_on_AWS
|
ArchivedRunning Neo4j Graph Databases on AWS May 2017 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in th is document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assuran ces from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 Transacting with the Graph 1 Deployment Patterns for Neo4j on AWS 2 Basics 3 Networking 3 Clustering 5 Database Storage Considerations 10 Operations 15 Disaster Rec overy 19 Conclusion 20 Contributors 20 Further Reading 21 Notes 21 ArchivedAbstract Amazon Web Services (AWS) is a flexibl e cost effective and easy touse cloud computing platform Neo4j is the leading NoSQL graph database that is widely deployed in the AWS C loud Running your own Neo4j deployment on Amazon Elastic Compute Cloud (Amazon EC2) is a great solution for users whose applications require high performance operations on large datasets This whitepaper provides an overview of Neo4j and its implementation on the AWS Cloud It also discusses best practices and implementation characteristics such as performance durabi lity cost optimization and security ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 1 Introduction NoSQL refers to a subset of structured storage software that is optimized for high performance operations on large datasets As the name implies querying of these systems is not base d on the SQL language —instead each product provides its own interface for accessing the system and its features A common way to understand the spectrum of NoSQL databases is by looking at their underlying data models: • Column stores – Data is organized into columns and column families providing a nested hashmap like structure accessed by key • Keyvalue stores – Data is organized as key value relationships and accessed by primary key • Document databases – Data is organized as documents (eg JSON XML) and accessed by fields within the document Neo4j provides a far richer data model than other NoSQL databases Instead of working with isolated values columns or documents Neo4j support s relationships between data so that webs of interconnected data can be created and queried We see this kind of data every day in use cases from social networks to transport road and rail networks G raph databases are already widely applied in fields as diverse as healthcare finance education IT infrastructure identity management Internet of Things (IoT) and many more In this whitepaper we'll discuss how to run Neo4j effectively on AWS T he on demand nature of Amazon Elastic Compute Cloud ( Amazon EC2 ) and the power of Neo4j together provide a great way for you to deploy graph data to support your use case while avoiding the undifferentiated heavy lifting 
typically associated with purchasing deploying and managing traditional infrastructure Transacting with the Graph To make traversing the graph efficient and safe from physical and semantic corruption the graph model demands strong consistency with its underlying storage That is if a relationship exists between two nodes it must be reachable from both of them ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 2 Graph Consistency If the records in a graph database disagree about connectivity a non deterministic structure will result Traversing the graph in one direction leads to different actions being taken than if the graph were traversed in the other direction This in turn leads to different decisions being recorded in the graph which may lead to semantic corruption spreading throughout the graph compounding the initial physical corruption In order to preserve the rigorous consistency required for graphs Neo4j uses atomi c consistent isolated and durable ( ACID ) transactions when modifying the graph In the case of read only transactions the cost of transactions is minimal because read locks do not block other reads and there is no need to flush to disk To ensure safe recoverable write transactions the system will take write locks which will block reads and flush data to the transaction log before completing the transaction To reduce the impact of a physical flush Neo4j amortize s the cost of flushing across multiple small concurrent transactions This means thousands of ACID transactions per second can be processed in a well tuned system while preserving safety With Amazon EC2 there are multiple instance types that feature high performance solid state drives ( SSDs ) that vastly reduce the cost of writing to disk Therefore it’s possible to tune for a greater number of transactions per second based on high performance block storage Such performance is available in the I2 instance family which is designed to perform up to 300000 input/output operations per second ( IOPS) Deployment Patterns for Neo4j on AWS Neo4j differs from other database management system ( DBMS ) engines in that it can either be deployed as a traditional database server or embedded within an applic ation This bimodal operation provides the same APIs the same transactional guarantees and the same level of cluster support either way In the follow ing sections we describe the deployment model of a traditional database server that is deployed on Amazon EC2 ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 3 Basics Neo4j was originally conceived as an embedded Java library intended to provide idiomatic access to connected data through a graph API While Neo4j retains the ability to be embedded in JVM based applications it has grown in sophistication since those days adding an excellent query language practical programmatic APIs and support for h igh availability (HA) via clusters of Neo4j instances That functionality can be invoked over the network from any platform despite the 4j naming! 
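To make the point about invoking Neo4j over the network concrete, the sketch below uses the official Neo4j Python driver to run read and write units of work against the Bolt endpoint (port 7687 by default). The URI, credentials, node label, and property names are placeholders, and exact method names vary slightly between driver versions, so treat this as an illustration under those assumptions rather than a canonical example from this whitepaper.

from neo4j import GraphDatabase

# Placeholder connection details: point this at your Neo4j instance or at
# the load balancer in front of your HA cluster.
uri = "bolt://neo4j.example.internal:7687"
driver = GraphDatabase.driver(uri, auth=("neo4j", "your-password"))

def create_person(tx, name):
    # Each unit of work runs inside a transaction managed by the driver.
    tx.run("MERGE (p:Person {name: $name})", name=name)

def count_people(tx):
    record = tx.run("MATCH (p:Person) RETURN count(p) AS total").single()
    return record["total"]

with driver.session() as session:
    session.write_transaction(create_person, "Alice")
    total = session.read_transaction(count_people)
    print(f"people in graph: {total}")

driver.close()

The driver's transaction functions retry a unit of work on transient errors, which pairs naturally with the transactional and clustering guarantees described next.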
Neo4j is a transactional database that supports high concurrency while ensuring that concurrent transactions do not interfere with each other Even deadlocking transactions are automatically detected and rolled back When data is written to Neo4j it’s guaranteed du rable on disk In the event of a fault no partially written records will exist after restart and recovery A single instance of the database is resilient right up to the point where the disk is lost To protect against the failure of a disk Neo4j has a n HA mode in which multiple instances of Neo4j can collaborate to store and query the same graph data The loss of any individual Neo4j instance can be tolerated since others will remain available In fact work proceed s as usual when a majority of the Neo4j cluster is available Neo4j is able to capitalize on the robust features that AWS offers not only to detect failures but also to provide automated recovery mechanisms Networking Neo4j HA trusts the network and so it’s important to physically secure it a gainst intrusion and tampering Conversely f or application database interactions Neo4j supports transport level security (TLS) out of the box for privacy and integrity AWS offers a high performance networking environment in a customer controlled VPC created with Amazon Virtual Private Cloud (VPC) Within your VPC you can create and manage the logical network components that you need to deploy your application infrastructure The VPC enables you to create your own network address space subnets f irewall rules route tables as well as extend connectivity to your own data centers and the Internet ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 4 The network design for a Neo4j cluster can be easily customized to the specific application on AWS Most customers choose to keep the database on a private subnet that has strict network controls in place to prevent unauthorized network access There are two different types of firewalls built into the AWS Cloud that provide a high level of network isolation The first type is a security group which is a stateful firewall that is applied at the instance level in both the inbound and outbound directions The security group defines which protocols ports and Classless Inter Domain Routing ( CIDR) IP address ranges have access to a specific instance Security groups have an implicit deny which means that there is no network access by default To be granted network access a security group must specific ally allow the traffic through Deploying your Neo4j cluster into a VPC with a private subnet and configuring your security group to permit ingress over the appropriate TCP ports builds another layer of network security The following table shows default TCP port numbers for Neo4j : Port Process 7474 The Neo4j REST API and web frontend are available at this port 7687 The binary protocol endpoint Used by application drivers to query and transact with the database The second type of firewall is a Network Access Control List ( NACL ) A NACL is defined at the subnet level and is a stateless firewall A NACL is an ordered set of rules that is evaluated with the lowest number rule first B y default the NACL has an explicit rule to allow all traffic to flow in both directions on the subn et However NACL rules can also be applied to allow traffic to flow in either t he inbound or outbound direction s as well Every Amazon EC2 instance is allocated bandwidth that corresponds to it s size which currently in the X1 instance family can be up to 20 Gbps of 
network bandwidth As instance size decreases so does the bandwidth allocated to the instance If you r application requires a high level of network communication between hosts ensure that the instance size selected will deliver the bandwidth ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 5 needed In Neo4j this t ypical ly corresponds to systems that sustain high write loads VPC Design Figure 1: Sample VPC design To optimize network performance we suggest using EC2 instances that support enhanced networking which uses single root I/O virtualization (SR IOV) to ensure that your instances can achieve greater packets per second reduced latency and reduced jitter AWS recommends that you use a multi Availability Zone ( AZ)1 design for your applications in order to achieve a high level of fault tolerance By using multiple Availability Zones you can mitigate the risk of an entire Availability Zone failing by replicating to another instance in a separate Availability Zone With Neo4j the network latency between instances will increase because the Availability Zones are in separate physical locations Clustering Neo4j HA is available both to server based and embedded instances of Neo4j as part of Neo4j Enterprise Edition 2 The clustering architecture has been designed with two features in mind: • Optimized for graph workloads • Simple to understand and operate ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 6 A beneficial side effect of the high availability of Neo4j Enterprise Edition is that it can scale horizontally for graph operations while scaling vertically for transaction processing This topology is favorable for graph workloads since graphs are intrinsically read heavy (even when writing to a graph it must first be traversed to find the right part of the structure to update) On that basis Neo4j has opted for a clustering system similar to that found in mature relational databases in which the cluster members can have either a master or slave role A Neo4j HA cluster operates cooperatively because each database in stance contains the logic it needs to coordinate with the other members of the cluster On startup a Neo4j HA database instance tr ies to connect to an existing cluster specified by configuration If the cluster exists the instance join s it as a slave Otherwise the cluster will be created and the instance will become the current master Note that the master role is transitory A master is elected via an instance of t he Paxos algorithm embedded in Neo4j Any machine can instigate an election if it thinks it has detected a fault but a majority of the machines in the cluster must participate in the election After the election is complete one master remains or becomes elected and all other machines in the cluster become slaves Whenever a Neo4j instance becomes unavailable the other database instances in the cluster detect that and mark it as temporarily failed A database instance that becomes available after being unavailable will automatically catch up with the latest cluster updates ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 7 Figure 2: Neo4j HA clusters showing current master If the master fails another (best suited) member will be elected and have its role switched from slave to master after a quorum has been reached within the cluster When the role switch has been performed the new master will broadcast its availability to all the other cluster members A new master is typically elected and started within just a 
few seconds ; during this time no writes can take place Be aware that during the transition period if an old master had changes that did not get replicated to any other member before becoming unavailable and if a new master is elected and performs changes before the old master recovers there will be two "branches" of the database The old master will move away its database (its "branch") and download a full copy from the new master to become available as a slave in the cluster An operator can then choose to replay the transactions in the branched data to the cluster Neo4j High Availability In the Neo4j HA architecture the cluster is typically fronted by load balancers provided by Elastic Load Balancing or HAProxy Elastic Load Balanc ing (ELB) is an AWS service that offers load balancer s that automatically distribute traffic across multiple EC2 instances and across multiple Availability Zones An ELB load balancer is elastic because it automatically scales its request handling capacity to support network traffic and doesn’t cap the number of connections that it can establish with EC2 instances If an inst ance fails the load balancer automatically reroutes the traffic to the remaining EC2 instances that are ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 8 running If the failed EC2 instance is restored the load balancer restore s the traffic to that instance Data Integrity Integrity is crucial for a transactional database The master role imposes a total ordering of transactions on the system This has the beneficial side effect that all replicas in the cluster apply transactions in exactly the same order and therefore are kept identical Elastic Load Balancing controls and distributes traffic to your EC2 instances and serves as a first line of defense to mitigate network attacks You can offload the work of encryption and decryption to your load balancer so that your EC2 instances can focus on their main work Elastic Load Balancing has configurable health checks that can be used in conjunction with Amazon Cloud Watch to send alerts and take action when specified thresholds are reached If Auto Scaling is used with Elastic Load Balancing instances that are launched by Auto Scaling are automatically registered with the load balancer and instances that are terminated by Auto Scaling are automatically de registered from the load balancer An ELB load balancer can be Internet facing or internal and can accept HTTP HTTPS SSL and TCP connections with the ability to terminate SSL to offload the burden on the backend EC2 instances Elastic Load Balancing can also bring an extra level of security to your network design because the security groups applied t o the Neo4j servers can be configured to only accept traffic from the load balancer which m ight help prevent unauthorized access to the instances To maintain static connection points to the master database server and to the read replicas from the application it is suggested that you use two separate load balancers for this By doing this your application will not need to be updated when a new master database is elected a new slave is added or a failure on one of the nodes occurs Neo4j advertises separate REST endpoints for both the master node and the slave nodes so that the load balancers can determine what role each instance in a cluster plays By creating two load balancers and adding all of the Neo4j instances to both load balancers we can ensure that during an election the master node load balancer will properly redirec t 
requests to the proper nodes ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 9 Figure 3: Neo4j cluster REST endpoints fo r the master node and the slave nodes Master Node E lastic Load Balancer The master node will respond with a 200 status code with a body text of “true” and the slaves will return a 404 Not Found with a body text of “false” when the load balancer health check references /db/manage/server/ha/master "HealthCheck": { "HealthyThreshold": 2 "Interval": 10 "Target": "HTTP:7474/db/manage/server/ha/master" "Timeout": 5 "UnhealthyThreshold": 2 } Slave Node E lastic Load Balancer The master node will respond with a 404 status code with a body text of “false” and the slaves will return a 200 status code with a body text of “true” when the load balancer health check references /db/manage/server/ha/slave ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 10 "HealthCheck": { "HealthyThreshold": 2 "Interval": 10 "Target": "HTTP:7474/db/manage/server/ha/slave" "Timeout": 5 "UnhealthyThreshold": 2 } This functionality allows both the master and slave load balancers to respond to Neo4j cluster events without any application changes or administrator involvement Database Storage Considerations AWS provides two fundamental kinds of storage: Amazon Elastic Block Store (EBS) and EC2 ephemeral instance store Several EC2 instance types expose multiple ephemeral instance stores that can be used for mirroring data for fault tolerance However if the instance stops fails or is terminated all data will be lost and so strategies need to be in place to address those risks H ighspeed ephemeral storage is beneficial to graph databases that are larger than the physical memory l imit of a single EC2 instance In the case that the database is larger than the main memory specific considerations need to be taken since it will not be possible for a single Neo4j instance to cache the whole database in RAM This means that the portions of the graph that are not frequently accessed will have to run out of m emory and on a storage device The preference would be to maintain extremely rapid in memory traversals of the graph for the whole graph no matter its size • Amazon EC2 X1 Instance s – X1 instances have the lowest price per GB of RAM and are ideally suited for in memory databases With up to 1952 GB of DDR based memory 128 vCPUs and 3 840 GB of SSD storage the X1 instance is the most performant for the largest Neo4j use cases • Amazon EC2 I2 Instance s – High I/O (I2) instances are optimized to deliver more than 300000 lowlatency IOPS to applications by utilizing up to 8 SSD drives to minimize access time with a capacity of up to 6400 GB ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 11 • Amazon EC2 D2 Instance s – Dense storage (D2) instances provide an array of up to 24 internal drives with a capacity of 2 TB each These disks can be configured with multiple RAID types and partition sizes as needed A D2 instance can provide up to 35 Gbps read and 31 Gbps write disk throughpu t with a 2 MB block size and a capacity of 48 TB Neo4j is a shared nothing architecture and can therefore happily consume instance based storage Inevitable data loss when instances are stopped or terminated can be prevented by clustering We mostly focus on instance storage here but other uses exist for Neo4j on Amazon EBS that we explore at the end of this section EC2 instances can also use Amazon EBS which provides persistent block level storage volumes Amazon EBS volumes 
are highly available and re liable storage volumes that can be attached to any running instance that is in the same Availability Zone EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance Besides being persistent data stores you can create point intime snapshots of EBS volumes which are persisted to Amazon Simple Storage Service ( S3) Snapshots protect data for long term durability and they can be used as the starting point for new EBS volumes Th e same snapshot can be used to instantiate as many volumes as you want These snapshots can be copied across A WS Regions In a large database bringing up a cluster from scratch can take time to transfer all of the data between existing and new Neo4j insta nces Amazon EBS provides the ability to mount the data store files from a snapshot of another instance and then recover the Neo4j instance atop that store file before it rejoins the cluster This reduces the overall time that it takes to bring a new Neo4j instance into the cluster Storage Scaling Now that you have learned the fundamentals of clustering Neo4j let’s look at how the platform can be used to scale out the database For scaling Neo4j you need to consider the performance of the database under load and the physical volume of the graph being stored These two concerns are almost but not entirely orthogonal There are subtle interplays between operational load and low latency data access as you scale both ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 12 Scaling for Volume Let’s start wit h understanding scaling for volume since scaling for performance arise s naturally from there In the Neo4j world large datasets are those that are substantially larger than main memory This presents an interesting challenge for performance engineering s ince RAM provides the best balance of performance and size for database operations ( that is CPU cache is tiny but faster and disks are larger but slower) Databases love RAM and Neo4j is no exception to that rule —the more RAM available to the database the lower the possibility that it runs at disk speed rather than at memory speed The majority of this memory can be used by the database in particular consumed by Neo4j 's page cache Data Consistency Neo4j is an ACID transactional database Any abrupt shutdown of an instance such as when an EC2 instance unexpectedly dies will leave the database files in an inconsistent but repairable state Hence when booting the new Neo4j instance using files on the existing EBS volume the database will first have to recover to a consistent state before joining the cluster This process may be much quicker than a full sync from scratch Although scaling vertically is always an option thanks to rapid growth in affordable large memory machines in the AWS ecosystem and the ease of switching from one instance type to another scaling horizontally offers its own advantages Neo4j uses an HA cluster wit h a pattern called "Cache Sharding" to maintain high performance traversals with a dataset that substantially exceeds main memory space Cache sharding isn’t sharding in the traditional sense since we expect a full data set to be present on each database instance for impeccable fault tolerance and to maintain excellent performance when the memory todisk ratio is lopsided But cache sharding allows Neo4j to aggregate the RAM of individual instances by consistently routing like queries to the same database endpoint In the typical case 
where a server supports multiple concurrent clients access patterns tend to be noisy at first glance approximating all of the random walks of the graph overall Yet even at large scale randomness is never truly dominant ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 13 After all since the graph is structure d it makes sense that queries would be structured too With multiple concurrent clients it’s possible to discern commonality between them Whether by geograph y username or other ap plication specific feature it’s almost always possible to discern a coarse feature of the access pattern on the wire so that like requests can be consistently routed to the same server instance The solution architecture for this setup is shown in Figure 4 The technique of consistent routi ng has been implemented by high volume web properties for a long time and it is simple to implement scales well and is very robust The strategy we use to implement consistent routing will typically vary by domain Sometimes it’s fine just to use sessio n affinity (commonly called “sticky sessions”) implemented by the Elastic Load Balanc ing At other times we’ll want to route based on the characteristics of the data set A simple strategy is that the instance that first serves requests for a particular user will serve subsequent requests By doing this there is a greater chance that a warm cache will process the requests Other domain specific approaches will also work For example in a geographical data system we can route requests about particular locations to specific database instances that will be warm for that location ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 14 Figure 4: Solution architecture for consistent routing Either way we’re increasing the likelihood of the required graph data already being cached in RAM which mak es traversals extremely performant Adding high performance block storage to the mix means that even where the cache is cold (eg when a machine is restarted or a new part of the graph is being travers ed) the cache miss penalty is minimized Scaling for Performance Now that you have seen how Neo4j scales for volume scaling for performance is simplified by adding more instances Provide d that you can identify suitable workloads that do not decrease cach e performance you can simply add more instances to the Neo4j cluster to support more graph operations at an approximately linear rate Cache Sharding Assuming a uniform distribution usernames work well with this scheme and sticky sessions with round robin load balancing work in almost all domains In practice to get the best performance your choice of routing key for cache sharding must be able to become finer grained as the number of servers grows For example you could chang e the routing based on the names of countries beginning AG HN and then OZ ultimately to a separate Neo4j database ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 15 instance for each group By designing the database this way it’s possible to gain more throughput by adding more machines and using each of those machi nes as efficiently as possible Operations Operating a Neo4j cluster at scale is similar to running other database servers on Amazon EC2 Like all databases Neo4j uses working files such as logs as it executes Logs can be ke pt for troubleshooting purposes However we recommend limiting them in size or temporal scope The transaction log is important because this is where Neo4j transactions are made durable 
bef ore being applied to the data model This log particular ly important in the backup process Although you can keep the logical logs forever (and therefore rebuild your database from scratch merely by replaying all the transactions in those log files) in practice this would require a lot of storage for a database that has run in production for a reasonable amount of time In practice logs are maintained on a schedule that is suitable for troubleshooting and that takes the incremental backup schedule into consideration ( We discuss incremental backups in more detail later ) Monitoring AWS offers a monitoring service called Amazon CloudWatch which provides a reliable scalab le and flexible monitoring solution for EC2 instances and AWS services CloudWatch enables near real time monitoring on multiple EC2 metrics as well as the ability to monitor customer supplied metrics With CloudWatch alarms and notifications can be triggered based on events which can quickly alert you to issues and can apply automation to resolve the issues Additionally Amazon CloudWatch Logs provides the ability to collect store monitor and troubleshoot application level issues CloudWatch Logs can greatly simplify aggregating the system and application logs from all of the nodes in the Neo4j cluster CloudWatch Logs is agent based and enables every EC2 instance in the cluster to perform comprehensive logging Configuring CloudWa tch with Neo4j CloudWatch can be configu red on an existing EC2 instance 3 or on a new EC2 instance 4 Once installed the /etc/awslogs/awslogsconf file is configured to ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 16 monitor the Neo4j log At the bottom of the awslogsconf file the following section w ill be added: [Neo4j log] datetime_format = %Y %m%d %H:%M:%S%f%z file = /home/ec2 user/Neo4j 3/Neo4j enterprise 300 RC1/logs/Neo4j log log_stream_name = {instance_id} initial_position = start_of_file log_group_name = /Neo4j /logs After the awslogs service is started check the /var/log/awslogslog for any errors Configuring metrics and alerts for Neo4j is addressed in this Neo4j knowledge base article 5 Online Backup Neo4j can be backed up while it continues to serve user traffic (called “online” backup) Neo4j offers two backup options: full or incremental These strategies can be combined to provide the best mix of safety and efficiency Depending on the risk profile of the system a typical strategy m ight be to have daily full backups and hourly incremental backups or weekly full backups with daily incremental backup s As the name suggests a full backup will clone an entire database The se are the characteristics of a full backup: • Copies database store files • Does not take locks • Replays transactions run after backup started until end of store file copy At the end of a full backup there is a consistent database image on disk This backup file can be safely stored away and recovering to this backup is as simple as co pying the database files back into the Neo4j data directory (typically <Neo4j home>/data/graphdb) ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 17 After the backup has been created the recommendation is for the backup to be copied from the EC2 instance that ran the process into stable long term storage Amazon S3 provides a range of suitable archive storage platforms depending on your needs The backup can be copied to Amazon S3 directly or you can achieve the same level of durability by using an EBS snapshot which is stored in Amazo n S3 
automatically Amazon EBS is a network shared storage service that can be mounted from any EC2 instance Amazon EBS provides persistent block level storage volumes that are automatically replicated within their Availability Zones to protect from compo nent failure offering high availability and durability A snapshot can be created from an EBS volume which not only provides the ability to restore data in the future but also provides the ability to mount that volume to another EC2 instance This process can greatly decrease the time that it takes to add an additional Neo4j node to the cluster A side benefit of EBS snapshots is that they are persisted to Amazon S3 which means that they are protected for long term durability Volumes can be created fro m snapshots in any Availability Zone in the Region and snapshots can also be copied across Regions to provide an even greater level of durability Amazon S3 provides three tiers of storage optimized for cost versus frequency of access Amazon also provides lifecycle policies that can automatically transition objects from Amazon S3 Standard to Amazon S3 Infrequent Access and AWS Glacier (for long term archive) after a specific amount of time has elapsed Lifecycle policies streamline the archival and cost saving process so that you don’t have to manually transition objects or pay increased storage fees for cold data In addition to simplifying storage maintenance Amazon S3 also supports versioning which can help organize redundant backups based on timestamp • Standard Amazon S3 Standard offers high durability availability and performance object storage for frequently accessed data Because it delivers low latency and high throughput Standard is perfect for a wide variety of use cases • Infrequent Access Infrequent Access (Standard IA) is an Amazon S3 storage class for data that is accessed less frequently but requires rapid access when needed It offers high durability throughput and low latency like Amazon S3 Standard with a low per GB storage price and per GB retrieval fee This combination of low cost and high performance ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 18 make s it a sensible option for backups and as a data store for disaster recovery • Archive AWS Glacier is a lowcost long term storage service that provides secure durable storage intended for data backup and archival AWS Glacier provides reliable long term storage for your data and eliminates the administrative burdens of operating and scaling storage to AWS Using AWS Glacier Neo4j backup operators never have to worry about capacity planning hardware provisioning data replication hardware failure detection and repair or time consuming hardware migrations Long term storage on AWS Glacier is the least expensive storage tier per GB However the SLA fo r retrieving data has a much longer latency and is typically in the 3 to 5hour range whereas the other storage tiers have a shorter retrieval time measured in milliseconds By default a consistency check is run at the end of each full backup to ensure that the files being moved to long term storage will be usable up on recovery The consistency checker is a relatively intensive operation since it makes a thorough check of the graph structure at the individual record level So run ning the consistency chec ker on the same EC2 instances as your production cluster will result in performance degradation For this reason it is advisable to run this process on another instance It favors EC2 instances with high I /O capacity and 
large RAM such as the i28xlarge instance However this instance doesn’t need to be continuously active— it needs only to be instantiated for the duration of the backup and consistency check Any failure during backup (such as the unscheduled termination of the underlying EC2 instance) mea ns that the backup must be repeated After you have a full backup you can then take incremental backups against that state An incremental backup is performed whenever an existing backup directory is specified which the backup tool will automatically dete ct The backup tool will then copy any new transactions from the Neo4j instance and apply them to the backup The result will be an updated backup that is consistent with the current server state and specifically is one that : • Requires a full backup be completed first • Replays logs of transactions since last backup ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 19 Restoring from a backup is very easy This is an important operational affordance since restoring is typically done when a catastrophe has occurred To restore you s imply do the following task s: 1 Make sure Neo4j is not running 2 Replace the <Neo4j home>/graphdb directory with the contents of the backup 3 Start Neo4j (If clustered start the first instance and then rolling start the remaining instances ) Now that you have seen the nuts and bolts of a Neo4j backup on AWS you can focus on having the appropriate backup hygiene by following these recommendations : • Take regular periodic full backups with a n I/O and RAM optimized EC2 instance Repeat these backups if they fail Move the backup to Amazo n S3 (or to Amazon EBS) • Take incremental backups several times a day ensuring the Neo4j log files are kept for longer than this period Ensure that the backups are transmitted to Amazon S3 (or to Amazon EBS) Disaster Recovery After you have a Neo4j backup on stable long term storage disaster recovery (DR) is greatly simplified If an incident occurs that for whatever reason wipes out all your active Neo4j instances and irrevocably wipes all instance storage then you must quickly work to restore s ervice Fortunately DR with Neo4j on AWS is straightforward You can place backups in long term stable storage and restore them by a simple file copy in the event of a disaster From there you can seed a new cluster of Neo4j instances and resume service Any transactions that occurred between the backup and disaster will have been lost Neo4j clusters can easily span multiple Availability Zones within the same VPC to create private logically isolated networks We recommend that you use a design for deploy ing Neo4j on multiple Availability Zones ELB load balancers can operate across multiple Availability Zones which enables these high availability designs to function seamlessly ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 20 In addition to using a multiple Availability Zone design it is also possible to use multiple Regions O ne useful DR pattern is to host an instance or instances of Neo4j in other AWS Region s in slave and read only mode All slaves in Neo4j whether they are r eadwrite read only or slave only are replicated asynchronously from the master This asynchronous replication allows for regional diversity and availability of the database A sync hronous replication across Regions is quite normal with Neo4j However typically one Region is designated as the master R egion and other Regions are designated as s lave Regions that only contain slave only + read only 
instances In the extremely rare event of a region al failure there is an administrative procedure to change one of the slave only Regions to be th e master It is important to note that slave and read only instances never volunteer to take on important roles in the Neo4j HA cluster but they are fed a stream of transactions from that cluster This means that such instances can be used as a means of keeping a live backup of a cluster with a minimal downtime window between disaster and recovery On disaster we simply take the data store directory from one of the remote DR instances and seed a new cluster Conclusion The AWS C loud provides a unique platform for running Neo4j clusters at scale With capacities that can meet d ynamic needs costs based on usage and easy integration with other AWS services such as Amazon CloudWatch AWS CloudFormation Amazon EBS and Amazon S3 the AWS Cloud enables you to reliably run Neo4j at full scale without having to manage the hardware yourself By using AWS services to complement the Neo4j graph database AWS provides a convenient platform for developing scalable high performance applications atop Neo4j Customers who are interested in deploying Neo4j Enterprise on AWS now have access to a broad set of services beyond Amazon EC2 such as Elastic Load Balanc ing Amazon EBS Amazon CloudWatch and Amazon S3 The combination of t hese services enable the creation of a reliable secure cost effective and performance oriented graph database Contributors The following individuals and organizations contributed to th is document: ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 21 • Justin De Castri Solutions Architect AWS • David Fauth Field Engineer Neo Technology • Ian Robinson Engineer Neo Technology • Jim Webber Chief Scientist Neo Technology Further Reading In addition to the depth of high quality information ava ilable on AWS there are several books on Neo4j that can help you get started with the database: Graph Databases (O’Reilly) : full e book version available for free at http://graphdatabasescom Learning Neo4j (Packt) : http://Neoj4com/books/learning Neo4j/ The Neo4j manual ( http:// Neo4j com/docs/stable/ ) has a wealth of information about the Neo4j Cypher que ry language the programmatic APIs and operational surface Notes 1 Availability Zones are distinct geographical locations that are engineered to be insulated from failures in other Availability Zones They use separate power grids ISPs and cooling systems and they are placed on different fault lines and flood plains when possible All of this separation and isolation is designed to deliver a level of protection from the failure of a single instance to the failure of an entire Availability Zone 2 This may change in future versions of Neo4j Distributed transaction processing which is at the heart of Neo4j clustering is a fast moving area in computer science and the Neo4j team is very much involved with developing novel protocols for future releases 3 http://docsawsamazoncom/AmazonCloudWatch/latest/DeveloperGuide/Q uickStartEC2Instancehtml ArchivedAmazon Web Services – Running Neo4j Graph Databases on AWS Page 22 4 http://docsawsamazoncom/Amazo nCloudWatch/latest/DeveloperGuide/E C2NewInstanceCWLhtml 5 https://neo4jcom/developer/kb/amazon cloudwatch configuration for neo4j logs/
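To make the backup hygiene recommendations from the Online Backup section concrete, the following minimal sketch shows one way a completed full backup could be shipped to Amazon S3 with boto3 and aged out to cheaper storage tiers with a lifecycle rule. The bucket name, key prefix, file path, and lifecycle timings are illustrative assumptions, not values prescribed by this paper.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
BUCKET = "example-neo4j-backups"                          # hypothetical bucket
BACKUP_FILE = "/mnt/backup/neo4j-full-2017-01-01.tar.gz"  # archive produced after the full backup

# Ship the finished full backup off the backup instance into Amazon S3.
s3.upload_file(BACKUP_FILE, BUCKET, "full/neo4j-full-2017-01-01.tar.gz")

# Age backups out to cheaper tiers: Standard-IA after 30 days, Glacier after 90.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-old-backups",
                "Filter": {"Prefix": "full/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)

A scheduled job of this shape, run on the dedicated backup instance after the consistency check completes, keeps backups off the production cluster while the lifecycle policy handles the Standard-to-Glacier transitions automatically.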
| General | consultant | Best Practices |

SaaS_Solutions_on_AWS_Tenant_Isolation_Architectures |
ArchivedSaaS Solutions on AWS Tenant Isolation Architectures January 2016 This paper has been archived For the most update content see https://d1awsstaticcom/whitepapers/saastenant isolationstrategiespdfArchived © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its c ustomers Archived Contents Introduction 1 Common Solution Components 1 Security and Networking (Tenant Isolation Modeling) 1 Identity Management User Authentication and Authorization 2 Monitoring Logging and Application Performance Management 2 Analytics 3 Configuration Management and Provisioning 4 Storage Backup and Restore Capabilities 4 AWS Tagging Strategy 5 Chargeback Module 6 SaaS Solutions – Tenant Isolation Architecture Patterns 7 Model # 1 – Tenant Isolation at the AWS Account Layer 8 Model # 2 – Tenant Isolation at the Amazon VPC Layer 11 Model # 3 – Tenant Isolation at Amazon VPC Subnet Layer 14 Model # 4 – Tenant Isolation at the Container Layer 15 Model # 5 – Tenant Isolation at the Application Layer 17 General Recommendations 20 Conclusion 21 Contributors 22 Further Reading 22 APN Partner Solutions 22 Additional Resources 23 Archived Abstract Increasingly the mode of delivery for enterprise solutions is turning toward the software as a service (SaaS) model but architecting a SaaS solution can be challenging There are multiple aspects that need to be taken care of and a variety of options for deploying SaaS solutions on AWS This paper covers the different SaaS deployment models and the combination of AWS services and AWS Partner Network (APN) partner solutions that can be used to achieve a scalable available secure performant and costeffective SaaS offering AWS now offers a structured AWS SaaS Partner Program to help you build launch and grow SaaS solutions on AWS As your business evolves AWS will be there to provide the business and technical enablement support you need Please review the SaaS Partner Program website for more details1 ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 1 of 26 Introduction There are a variety of solutions that can be deployed in a SaaS model and these share a number of similarities and common patterns In this paper we will discuss: • Common solution components – These are aspects that we recommend handling separately from the core solution related functio nal components such as billing monitoring and analytics We will discuss these components in detail • SaaS solution tenant isolation architecture patterns – A solution can be deployed in multiple ways on AWS We will discuss typical models that help with the requirements around a multi tenant SaaS deployment along with considerations for each of those cases This white paper focuses on the technology and 
architecture aspects of SaaS deployments and does not attempt to address business and process related aspects such as software vendor licensing SLAs pricing models and DevOps practice considerations Common Solution Components In addition to building the core functional components of your SaaS solution we highly recommend that you buil d additional supporting components that will help in future proofing your solution and making it easier to manage Building additional supporting components will also enable you to easily grow and add more tenants over time The following sections discuss some of the recommended supporting components for SaaS solution setups Security and Networking (Tenant Isolation Modeling) The first step in any multi tenant system design is to define a strategy to keep the tenants secure and isolated from one another This may include security considerations such as defining segregation at the network/storage layer encrypting data at rest or in transit managing keys and certificates safely and even managing application level security constructs There are a number of AWS services you can use to help address security considerations at each level including AWS CloudHSM AWS CloudTrail Amazon VPC AWS WAF Amazon Inspector Amazon CloudWatch and Amazon CloudWatch Logs 2 By using ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 2 of 26 native AWS services such as these you can define a model that matches the solution’s security and networking requirements In addition to AWS native services many customers also make use of APN Partner offerings in the infrastructure security space to augment their security posture and add capabilities like intrusion detection systems (IDS)/intrusion prevention systems (IPS)3 Identity Management User Authentication and Authorization It’s important to decide on the strategy for authenticating and authorizing users to manage both the AWS services and the SaaS application itself For AWS services you can use AWS Identity and Access Management (IAM) users IAM roles Amazon Elastic Compute Cloud (Amazon EC2) roles social identities directory/ LDAP users and even federated identities using SAML based integrations4 Likewise for your application you have multiple ways to authenticate users We recommend building a layer that supports your application authentication requirements You might consider Amazon Cognito based authentication for mobile users and you can also look to APN Partner offerings in the identity and access control space for managing authentication across different identity providers5 Monitoring Logging and Application Performance Management You should have monitoring enabled at multiple layers not only to help diagnose issues but also to enable proactive measures to avoid issues down the road You can benefit from utilizing the data from Amazon CloudWatch which enables detailed monitoring for critical infrastructure and lets you confi gure alarms to notify you of any issues6 You could also make use of AWS Config that provides you with an AWS resource inventory configuration history and configuration change notifications to enable security and governance7 For application level monitoring you could use the Amazon CloudWatch Logs functionality to stream the logs in real time to the service; in addition you can search for patterns and you can also track the number of errors that occur in your application logs and configure Amazon CloudWatch to send you a notification whenever the rate of errors exceeds a threshold 
you specify Many ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 3 of 26 companies also use APN Partner offerings in the logging and monitoring space to monitor application performance aspects8 Analytics Most SaaS solutions have a wealth of raw data including application logs user access logs and billing related data which generally can provide a lot of insight if properly analyzed In addition to batch oriented analysis you can do real time analytics to see what kind of actions are being invoked by various tenants on the platfor m or look at realtime infrastructure related metrics to detect any unexpected behavior and to preempt any future problems You can use AWS services such as Amazon Elastic MapReduce (Amazon EMR) Amazon Redshift Amazon Kinesis Amazon Machine Learning Amazon QuickSight Amazon Simple Storage Service (Amazon S3) and Amazon EC2 Spot Instances to build these types of capabilities9 Analytics is normally an ancillary function of a platform in the early stages but as soon as multiple tenants are on boarded to a SaaS platform analytics quickly becomes a core function for detecting and understanding usage patterns providing recommenda tions and driving decisions We recommend that you plan for this layer early in the solution development cycle Figure 1 shows some of the AWS big data services and their capabilities ranging from data ingestion to storage to data analytics/processing Figure 1: AWS Big Data and analytics services ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 4 of 26 Configuration Management and Provisioning AWS provides a number of possibilities for automating solution deployments You have the ability to bake some deployment tasks within the Amazon Machine Images (AMIs) themselves and you can automate more configurable or frequent changes using various other means: One time tasks like OS hardening or setting up specific versions of runtime environments that do not change without an application recertification process (like a Java upgrade) or even time consuming installations (like middleware/database setup) can be baked into the AMI itself To handle more frequently changing aspects of deployment like code updates from a code repository boot time tasks (like joining a domain/cluster) and certain environment specific configurations (like different parameters for dev/test/production) you can use custom scripts in the EC2 instance’s user data section or AWS services such as AWS CodeCommit AWS CodePipeline and AWS CodeDeploy 10 For complete stack spin up a higher level of automation can be achieved by using AWS CloudFormation which gives developers and systems administrators an easy way to create and manage a collection of related AWS resources and enables them to provision and update those resources i n an orderly and predictable fashion11 Depending on your requirements AWS Elastic Beanstalk and AWS OpsWorks can also help with quick deployments and automation12 With the right mix of segregation across different types of tasks you can achieve the correct balance between faster boot time (often needed for auto scaled layers) and a configurable automated setup (needed for flexible deployments) Storage Backup and Restore Capabilities Most AWS services have mechanisms in place to perform backup so that you can revert to a last known stable state if any newer changes need to be backed out Features including Amazon EC2 AMI creation or snapshotting (Amazon EBS Amazon RDS and Amazon Redshift 
snapshots) can potentially support a majority of backup requirements However for advanced needs such as the need to quiesce a file system and then take a consistent snapshot of an active ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 5 of 26 database you can use third party backup tools many of which are available on AWS Marketplace 13 AWS Tagging Strategy To help you manage instances images and other Amazon EC2 resources you can assign your own metadata to each resource in the form of tags We recommend that you adopt a tagging strategy before you begin to roll out your SaaS solution Each tag consists of a key and an optional value both of which you define You can also have multiple tags on a single resource There are two main uses of tags: 1 General management of resources: Tags enable you to categorize your AWS resources in different ways such as by purpose owner or environment This can simplify filtering and searching across different resources You can also use resource groups to create a custom console that organizes and consolidates the information you need based on your project and the resources you use14 You can also create a resource group to view resources from different regions on the same screen as shown in Figure 2 Figure 2: AWS reso urce grou ps ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 6 of 26 2 Billing segregation: Tags enable cost allocation reports and allow you to get cost segregation based on a particular business unit or environment depending on the tagging strategy used15 This along with AWS Cos t Explorer can greatly simplify the billing data related visibility & reports16 Chargeback Module Another important aspect of a multi tenant system is cost segregation across tenants based on their usage From an AWS resources perspective tagging can be a great resource to help you separate out usage at a macro level However for most SaaS solutions greater controls are needed for usage monitoring so we recommend that you build your own custom billing module as needed A billing module could look like the high level generic example shown in Figure 3 Figure 3: Sample metering and chargeback module • All of the resources that are launched stopped and terminated are tracked and the data is then sent to an Amazon Kinesis stream • Granular measurements such as the number of API requests made or the time taken to process any request are tracked and the data is then fed into the Kinesis stream in real time ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 7 of 26 • Two types of consumer applications can process the data stored in Amazon Kinesis: o A consumer fleet that generates real time metrics on how the system is being utilized by various tenants This may help you make decisions such as whether to throttle a particular tenant’s usage or perform other corrective actions based on real time feeds o A second set of a Kinesis consumer fleet could aggregate the continuous feed and generate monthly or quarterly usage reports for billing It could also provide usage analytics for each tenant by processing the raw data and storing it in Amazon Redshift For historical data processing or transformation Amazon EMR can be used SaaS Solutions – Tenant Isolation Architecture Patterns There are multiple approaches to deploying a packaged solution on AWS rangin g from a fully isolated deployment to a completely shared SaaS type architecture with many other deployment 
options in between In order to support any of the deployment options the solution or application itself should be able to support that SaaS multi tenancy model which is the basic assumption we will take here before diving deep into AWS specific components of different deployment models The decision to pick a particular AWS deployment model depends on multiple criteria including: • Level of segregation across tenants and deployments • Application scalability aspects across tenant specific stacks • Level of tenant specific application customizations • Cost of deployment • Operations and management efforts • End tenant metering and billing aspects The different choices are a “Rubik’s cube” of options that impact one another in potentially unforeseen ways The goal of this paper is to help with these multi ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 8 of 26 dimensional unforeseen impacts The following sections describe some of the SaaS deployment model s on AWS and include a pros and cons section for each option to help guide you to the optimal solution given your business and technical requirements as below: Model #1 – Tenant Isolation at the AWS Account Layer Model #2 – Tenant Isolation at the Amazon VPC Layer Model #3 – Tenant Isolation at Amazon VPC Subnet Layer Model #4 – Tenant Isolation at the Container Layer Model #5 – Tenant Isolation at the Application Layer Model # 1 – Tenant Isolation at the AWS Account Layer In this model all the tenants will have their individual AWS accounts and will be isolated to an extent In essence this is not truly a multi tenant SaaS solution but can be treated as a managed solution on AWS Figure 4: Tenant isolation at AWS account layer Pros: • Tenants are completely separated out and they do not have any overlap which can provide each tenant with a greater sense of security ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 9 of 26 • Solution or general configuration customizations are easy because every deployment is specific to a tenant (or organization) • It’s easy to track AWS usage because a separate monthly bill is generated for each tenant (or organization) Cons: • This option lacks the resources and cost optimizations that can be achieved by the economies of scale provided by a multi tenant SaaS model • With a large number of tenants it can become challenging to manage separate AWS a ccounts and individual tenant deployments from an operations perspective • As a best practice all the AWS account root logins should be multi factor authentication (MFA) enabled With ever increasing individual tenant accounts it becomes difficult to manage all the MFA devices Best Practices: • Centralized operations and management – IAM supports delegating access across AWS accounts for accounts you own using IAM roles 17 Using this functionality you can manage all tenants’ AWS accounts through your own common AWS account by assuming roles to perform variou s actions (such as launching a new stack using AWS CloudFormation or updating a security group configuration) instead of having to log in to each AWS account individually You can utilize this functionality by using the AWS Management Console AWS Command Line Interface (AWS CLI) and the API18 Figure 3 provides a snapshot of how to set this up from the AWS Management Console ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 10 of 26 Figure 5: Cross account IAM r olebased access setup Figure 6: 
Cross account IAM r olebased access switching • Consolidated AWS billing – You can use the Consolidated Billing feature to consolidate payment for multiple AWS accounts within your organization by designating one of them to be the payer account 19 With Consolidated Billing you can see a combined view of AWS charges ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 11 of 26 incurred by all accounts and you can get a detailed cost report for each individual AWS account associated with your payer account Figure 7: AWS consolidated billing • VPC peering – If you would like to have a central set of services (say for backup anti virus OS patching and so on) you can use a VPC peering connection in the same AWS region between your common AWS account that has these shared services and the respective tenant’s AWS account However note that you are charged for data transfer within a VPC peering connection at the same rate as data transfer across Availability Zones Theref ore you should factor this cost into the solution’s overall cost modeling exercise Model # 2 – Tenant Isolation at the Amazon VPC Layer In this model all the tenant solution deployments are in the same AWS account but the level of separation is at the VPC layer For every tenant deployment there’s a separate VPC which provides logical separation between tenants ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 12 of 26 Figure 8: Tenant isolation at VPC layer Pros: • Everything is in a single account so this model is easier to manage than a multi account setup • There’s appropriate isolation between different tenants because each one lives in a different VPC • Compared with the previous model this model provides better economies of scale and improved utilization of Amazon EC2 Reserved Instances because all reservations and volume pricing constructs are applicable on the same AWS account However if Consolidated Billing is used this model provides no advantage over the previous model because Consolidated Billing treats all the accounts on the consolida ted bill as one account Cons: • Amazon VPC related limits will have to be closely monitored both from an overall account perspective and from each tenant’s VPC perspective ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 13 of 26 • If all the VPCs need connectivity back to an on premises setup then managing individual VPN connections may become a challenge • Even though it’s the same account if a shared set of services needs to be provided (such as backups anti virus updates OS updates and so forth) then VPC peering will need to be set up from the shared services VPC to all tenant VPCs • Security groups are tied to a VPC so depending on the deployment architecture you may have to create and manage multiple security groups for each VPC • AWS supports tagging as described in the Amazon EC2 documentation 20 However if you need to separate usage and costs for services and resources beyond the available tagging support you shoul d either build a custom chargeback layer or have a separate AWS account strategy to help clearly demarcate individual tenant usage Best Practices: In this setup use tags to separate out AWS costs for each of the tenant deployments You can define resource groups and manage tags there instead of managing them at the individual resource level21 Once you have defined the tagging strategy you can use the monthly cost allocation reports to view a breakup of 
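As a companion to the tagging guidance above, the following sketch shows one way to stamp a TenantId cost-allocation tag on every EC2 instance in a tenant's VPC so that the monthly cost allocation report can break out spend per tenant. The VPC ID, tag key, and tenant identifier are hypothetical placeholders, not values from this paper.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
TENANT_VPC_ID = "vpc-0123456789abcdef0"   # hypothetical tenant VPC
TENANT_ID = "tenant-42"                   # hypothetical tenant identifier

# Collect the instances running in the tenant's VPC, then tag them.
reservations = ec2.describe_instances(
    Filters=[{"Name": "vpc-id", "Values": [TENANT_VPC_ID]}]
)["Reservations"]

instance_ids = [
    i["InstanceId"] for r in reservations for i in r["Instances"]
]

if instance_ids:
    ec2.create_tags(
        Resources=instance_ids,
        Tags=[{"Key": "TenantId", "Value": TENANT_ID}],
    )

The same create_tags call accepts most taggable EC2 resource IDs (volumes, snapshots, and so on), so the loop can be extended to cover the rest of a tenant's stack before enabling TenantId as a cost allocation tag.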
AWS costs by tags and segregate it as per your needs (see the sample report in Figure 9)22 Figure 9: Sample cost allocation report ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 14 of 26 Model # 3 – Tenant Isolation at Amazon VPC Subnet Layer In this model we will discuss the case where we have a single AWS account and a single VPC for all tenant deployments The isolation happens at the level of subnets and each tenant has their own separate version of an application or solution with no sharing across tenants Figure 10 illustrates this type of deployment Figure 10: Tenant isolation at VPC subnet layer Pros: • You don’t need to set up VPC peering for intercommunication • VPN and AWS Direct Connect connectivity to a single on premises site is simplified as there is a single VPC23 Cons: • Isolation between tenants has to be managed at the subnet level so Amazon VPC network access control lists (NACLs) and security groups need to be carefully managed • VPC limits are harder to manage as the number of tenants increases Furthermore you can provision only a few subnets under the VPC CIDR ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 15 of 26 (Classless Inter Domain Routing) depending on its size and the CIDR cannot be resized once created • Changing a VPC level setting (say DHCP options set) affects all tenants although they have their individual deployments • There are limits on the number of security groups and the number of rules per security group at the VPC level so managing those limits with multiple tenants in the same VPC may be complicated Best Practices: • To access public AWS service endpoints (like Amazon S3) utilize VPC endpoints This will scale better than routing the traffic for multiple tenants through a network address translation (NAT) instance • To avoid hitting security group related limits in a VPC: o Consolidate security groups to stay under the limit o Don’t use security group cross references; instead refer to CIDR ranges Model # 4 – Tenant Isolation at the Container Layer With the advent of container based deployment it is now possible to have a single instance and slice it for multiple tenant applications based on requirem ents The Amazon EC2 Container Service (Amazon ECS) helps easily set up and manage Docker container based deployments and could be used to deploy tenant specific solution components in individual containers24 Figure 11 illustrates a scenario where different tenants’ containers are deployed in the same VPC ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 16 of 26 Figure 11: Tenant isolation at container layer Pros: • You can have a higher level of resource utilization by having a container based model on shared instances • It’s easier to manage the clusters at scale as Amazon ECS takes away the heavy lifting involved in terms of cluster management and general fault tolerance • Simplified deployments are possible by testing a Docker image on any test/development environment and then using simple CLI based options to directly put it into production • Amazon ECS deploys images on your own Amazon EC2 instances which can be further segmented and controlled using VPC based controls This along with Docker’s own isolation model meets the security requirements of most multi tenant applications Cons: • You can use Amazon EC2 and VPC security groups to limit the traffic on an Amazon EC2 instance However you need to manage the container 
configuration to control whic h ports are open Managing those aspects may become a little tedious at scale • Tags do not work at the Amazon ECS task (container) level so separating costs based on tags will not work and a custom billing layer will be needed ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 17 of 26 Best Practices: • To secure container communication beyond the controls provided by VPC security groups you could create a software defined network for the containers using point topoint tunneling with Generic Routing Encapsulation (GRE) to route traffic between the container based subnets • In order to architect auto scaling functionality using Amazon ECS use a combination of Amazon CloudWatch and AWS Lambda based container deployment25 In this setup an AWS Lambda function is triggered by an Amazon CloudWatch alarm to automatically add another Amazon ECS task to dynamically scale as shown in Figure 12 Figure 12: Autoscaling architecture for container based deployment Model # 5 – Tenant Isolation at the Application Layer This model represents a major shift from the earlier discussed models; now the application or solution deployment is shared across different tenants This is a radical change and a movement toward a true multi tenant SaaS model However to achieve this model the application itself should be designed to ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 18 of 26 support multi tenancy For example if we take a typical 3tier application with shared web and application layers there can be some subtle variations at the database layer (which for example could be either Amazon RDS or a database on an Amazon EC2 instance): 3 Separate databases: Each tenant will have a different database for maximum isolation To enable the application layers to pick up the right database upon each tenant’s request you will need to maintain metadata in a separate store (such as Amazon DynamoDB) where mapping of a tenant to its database is managed 4 Separate tables/schemas : Different database flavors have different constructs but another possible dep loyment model could be that all tenants’ data resides in the same database but the data is tied to different schemas or tables to provide a level of isolation 5 Shared database shared schema/tables: In this model all tenants’ data is placed together A unique tenant ID column separates data records for each tenant Whenever a new tenant needs to be added to the system a new tenant ID is generated additional capacity is provisioned and traffic routing is started to an existing or new stack Pros: • You c an achieve economies of scale and better resource usage and optimization across the entire stack As a result this can often be the cheapest option to operate at scale when you have shared components across the architecture o For example having a huge multi tenant Amazon DynamoDB table that can absorb the request spikes can be much cheaper than having higher provisioned Amazon DynamoDB tables for individual tenants • It’s easy to manage and operate the stack because it is a single deployment Any change s or enhancements that need to be made are rolled out at once rather than having to manage n different environments • Network connectivity is simplified and the challenges around the VPC limits with other models are also subdued because it’s a single VPC deployment (although it may be bigger in size) • All shared services (such as patching OS updates and antivirus) are also 
centralized and deployed as a single unit for all the tenants ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 19 of 26 Cons: • Applications need to be multi tenant aware so existing applic ations may have to be re architected • Depending on certain compliance and security requirements cohosting tenants with different security profiles may not be possible Best Practices: • To implement this model successfully consider the following important aspects: • Often times different tenants have their own specific needs for certain features or customizations: o Try to group tenants according to their requirements; tenants with similar needs should be put on the same deployment o Try to build the most asked for features in the core platform or application itself and avoid customizations at the tenant level for long term maintainability • Closely monitor the stack for each tenant’s activities If necessary you should be able to throttle or deprioritize any particular tenant’s actions to avoid affecting other tenants adversely • Ensure that you have the ability to scale the stacks up and down automatically to address the changing needs of the tenants on a particular stack This should be built into the ar chitecture rather than being done by manual updates • Use role based and fine grained access controls to enable access to limit a tenant’s access across the entire stack Amazon DynamoDB provides fine grained access controls which enable you to determine who can access individual data items and attributes in Amazon DynamoDB tables and indexes and the actions that can be performed on them Using Amazon DynamoDB in SaaS architectures can greatly reduce complexities • Another important aspect to handle is the AWS cost management across tenants according to their usage To handle this we recommend that you design a custom billing layer (as explained and outlined in previous sections) and incorporate it in the solution ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 20 of 26 General Recommendations Consider the following general best practices for a packaged SaaS solution design and delivery on AWS: • Instead of building large monolithic application architectures it’s often helpful to create smaller independent single responsibility services that can be clubbe d together to achieve the overall business functionality These smaller microservices based architectures can be easier to manage and can independently scale You could use services like Amazon ECS and AWS Lambda to create these smaller components AWS Simple Queue Service (Amazon SQS) could also potentially help decouple microservices by introducing a queuing layer in between for communication26 You can also use Amazon API Gateway to enable API based interactions between the layers thereby keeping them integrated just at the interface layer27 To learn more about this microservices based architecture pattern see the blog post SquirrelBin: A Serverless Microservice Using AWS Lambda 28 • Build abstraction at each layer so that you can future proof your solution by being able to change the underlying implementation without affecting the public interfaces Consider aspects such as where you want the solution to be in next few years and think about technology trends For example mobile was not as big five years ago as it is today Plan for the future and design your solution in a manner that is scalable and extensible to meet future needs • Define a release management process to 
enable f requent quality updates to the solution AWS CodeCommit AWS CodePipeline and AWS CodeDeploy can help with this aspect of your deployment • Keep tenant specific customizations to a minimum and try to build most of the features within the platform itself For tenant specific configuration metadata AWS DynamoDB can be useful • Build an API for your solution or platform if it needs to integrate with third party systems • Use IAM roles for Amazon EC2 instead of using hard coded credentials within various application components ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 21 of 26 • Find ways to cost optimize your solution For instance you can use Reserved or Spot Instances adopt AWS Lambda to design an event driven architecture or use Amazon ECS to containerize smaller functional blocks • Utilize Auto Scaling to dynamically scale your environment up and down as per load • Benchmark application performance to right size your Amazon EC2 instances and their count • Make use of AWS Trusted Advisor recommendations to further optimize your AWS deployment29 • There are often custom capabilities that you may like to build into your platform that could be supplied by a packaged solution from an APN Technology Partner Look for opportunities to pick and choose what to build on your own versus utilizing an existing solution Leverage various APN Partner solutions and offerings and AWS Marketplace to augment the features and functionalities provided by AWS services • Enroll in the AWS SaaS Partner Program to learn build and grow your SaaS business on AWS30 • It’s important to ensure that your solution can be effectively managed on AWS by your firm Another option is to work with an AWS MSP Consulting Partner 31 • Validate your operational model using the AWS operational checklist 32 • Validate your security model using the AWS auditing security checklist 33 • Leverage various APN Partner solutions and offerings and AWS Marketplace to augm ent the features and functionalities provided by AWS services Conclusion Every packaged SaaS solution is different in nature but they share common ingredients You can use the practices and architecture methodologies described in this paper to deploy a scalable secure optimized SaaS solution on AWS The paper describes different models you can adopt Depending on the type of SaaS solution you’re building using multiple models or even a hybrid approach may suit your needs ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 22 of 26 Contributors The following individuals and organizations contributed to this document: • Kamal Arora Solutions Architect Amazon Web Services • Tom Laszewski Sr Manager Solutions Architects Amazon Web Services • Matt Yanchyshyn Sr Manager Solutions Architects Amazon Web Services Further Reading APN Partner Solutions In order to build out various functions in a custom SaaS solution you will likely want to integrate with popular ISV solutions across various functions To make your selection easy the APN has developed the AWS Competency Program designed to highlight APN Partners who have demonstrated technical proficiency and proven customer success in specialized solution areas34 Below are some of the AWS Comp etency solution pages which you can refer to for more details: • DevOps : https://awsamazoncom/devops/partner solutions/ • Mobile : https://awsamazoncom/mobile/partner solutions/ • Security : https://awsamazoncom/security/partner solutions/ • Digital Media: 
https://awsamazoncom/partners/competencies/digital media/ • Marketing & Commerce : https://awsamaz oncom/digital marketing/partner solutions/ • Big Data: https://awsamazoncom/partners/competencies/big data/ • Storage : https://awsamazoncom/backup recovery/partner solutions/ • Healthcare : https://awsamazoncom/partners/competencies/healthcare/ ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 23 of 26 • Life Sciences : https://awsamazoncom/partners/competencies/life sciences/ • Microsoft Solutions : https://awsamazoncom/pa rtners/competencies/microsoft/ • SAP Solutions : https://awsamazoncom/partners/competencies/sap/ • Oracle Solutions: https://awsamazoncom/partners/competencies/oracle/ • AWS Managed Service Program: http://awsamazoncom/partners/managed service/ • AWS SaaS Partner program: http://awsamazoncom/partners/saas/ Additional Resources • Details on various AWS usage and billing reports: http://docsawsamazoncom/awsaccountbilling/latest/aboutv2/billing what ishtml • Amazon EC2 IAM roles: http://docsawsamazoncom/AWSEC2/latest/UserGuide/i amroles for amazon ec2html • Auto scaling Amazon ECS services using Amazon CloudWatch and AWS Lambda: https://awsamazoncom/blogs/compute/scaling amazon ecs services automatically using amazon cloudwatch and awslambda/ • Working with Tag Editor: http://docsawsamazoncom/awsconsolehelpdocs/latest/gsg/tag editorhtml • Working with resource groups: http://docsawsamazoncom/awsconsolehelpdocs/latest/gsg/resource groupshtml • Backup archive and restore approaches on AWS: https://d0awsstaticcom/whitepapers/Storage/Backup_Archive_and_R estore_Approaches_Using_AWSp df ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 24 of 26 Notes 1 http://awsamazoncom/partners/saas/ 2 https://awsamazoncom/cloudhsm/ https://awsamazoncom/cloudtrail/ https://awsamazoncom/vpc/ https://awsamazoncom/waf/ https://awsamazoncom/inspector/ https://awsamazoncom/cloudwatch/ http://docsawsamazoncom/AmazonCloudWatch/latest/logs/WhatIsCloud WatchLogshtml 3 https://awsamazoncom/security/partner solutions/#infrastructure 4 https://awsamazoncom/iam/ 5 https://awsamazoncom/cognito/ https://awsamazoncom/security/partner solutions/#iac 6 https://awsamazoncom/cloudwatch/ 7 https://awsamaz oncom/config/ 8 https://awsamazoncom/security/partner solutions/#log monitor 9 https://awsamazoncom/elasticmapreduce/ https://awsamazoncom/redshift/ https://awsamazoncom/kinesis/ https://awsamazoncom/machine learning/ https://awsamazoncom/quicksight/ https://awsamazoncom/s3/ https://awsamazoncom/ec2/spot/ 10 http://docsawsamazoncom/AWSEC2/latest/UserGuide/ec2 instance metadatahtml https://awsamazoncom/codecommit/ https://awsamazoncom/codepipeline/ https://awsamazoncom/codedeploy/ ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 25 of 26 11 https://awsamazoncom/cloudformation/ 12 https://awsamazoncom/elasticbeanstalk/ https://awsamazoncom/opsworks/ 13 https://awsamazoncom/marketplace/ 14 http://docsawsamazoncom/awsconsolehelpdocs/latest/gsg/what are resource groupshtml 15 http://docsawsamazoncom/awsaccountbilling/latest/aboutv2/cost alloc tagshtml 16 http://docsawsamazoncom/awsaccountbilling/latest/aboutv2/cost explorer what ishtml 17 http://docsawsamazoncom/IAM/latest/UserGuide/roles walkthrough crossaccthtml 18 https://awsamazoncom/console/ https://awsamazoncom/cli/ 19 http://docsawsamazoncom/awsaccountbilling/latest/aboutv2/consolidated billinghtml 20 
http://docsawsamazoncom/AWSEC2/latest/UserGuide/Using_Tagshtml#t agrestrictions 21 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EC2_Resourcesht ml 22 http://docsawsamazoncom/awsaccountbilling/latest/aboutv2/cost alloc tagshtml 23 https:/ /awsamazoncom/directconnect/ 24 https://awsamazoncom/ecs/ 25 https://awsamazoncom/lambda/ 26 https://awsamazoncom/ecs/ https://awsamazoncom/lambda/ 27 https://awsamazoncom/api gateway/ ArchivedAmazon Web Services – SaaS Solutions on AWS : Tenant Isolation Architectures Page 26 of 26 28 https://awsamazoncom/blogs/compute/the squirrelbin architecture a serverless microservice using awslambda/ 29 https://awsamazoncom/trusted advisor/ 30 http://awsamazoncom/partners/saas/ 31 http://awsamazonc om/partners/managed service/ 32 https://mediaamazonwebservicescom/AWS_Operational_Checklistspdf 33 https://d0awsstaticcom/whitepapers/compliance/AWS_Auditing_Security_ Checklistpdf 34 https://awsamazoncom/partners/competencies/
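Returning to the chargeback module described earlier (Figure 3), the following sketch illustrates the producer side of that flow: each tenant-attributable action is written to an Amazon Kinesis stream, partitioned by tenant, for the real-time and aggregation consumer fleets to process. The stream name and event shape are assumptions for illustration, not a prescribed format.

import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
STREAM_NAME = "saas-usage-metering"   # hypothetical metering stream

def record_usage(tenant_id, action, duration_ms):
    """Emit one usage event for a tenant; partitioning by tenant keeps a
    tenant's events ordered within a shard for downstream aggregation."""
    event = {
        "tenantId": tenant_id,
        "action": action,
        "durationMs": duration_ms,
        "timestamp": int(time.time()),
    }
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=tenant_id,
    )

record_usage("tenant-42", "CreateReport", 118)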
|
General
|
consultant
|
Best Practices
|
SaaS_Storage_Strategies_Building_a_Multitenant_Storage_Model_on_AWS
|
This paper has been archived. For the latest technical content, refer to the HTML version at https://docs.aws.amazon.com/whitepapers/latest/multi-tenant-saas-storage-strategies/multi-tenant-saas-storage-strategies.html or the AWS Whitepapers & Guides page at https://aws.amazon.com/whitepapers
SaaS Storage Strategies: Building a multitenant storage model on AWS
Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.
Table of Contents
Abstract and introduction
  Abstract
  Are you Well-Architected?
  Introduction
SaaS partitioning models
  Silo model
  Bridge model
  Pool model
  Setting the backdrop
Finding the right fit
  Assessing tradeoffs
    Pros
    Cons
  Pool model tradeoffs
    Pros
    Cons
  Hybrid: The business compromise
Data migration
  Migration and multitenancy
  Minimizing invasive changes
Security considerations
  Isolation and security
Management and monitoring
  Aggregating storage trends
  Tenant-centric views of activity
  Policies and alarms
Tiered storage models
The developer experience
Linked account silo model
Multitenancy on DynamoDB
  Silo model
  Bridge model
  Pool model
  Managing shard distribution
  Dynamically optimizing IOPS
  Supporting multiple environments
  Migration efficiencies
  Weighing the tradeoffs
Multitenancy on RDS
  Silo model
  Bridge model
  Pool model
  Factoring in single instance limits
  Weighing the tradeoffs
Multitenancy on Amazon Redshift
  Silo model
  Bridge model
  Pool model
Keeping an eye on agility
Conclusion
Contributors
Document revisions
Notices
AWS glossary
SaaS Storage Strategies
Publication date: May 6, 2021 (see Document revisions)
Abstract
Multitenant storage represents one of the more challenging aspects of building and delivering Software as a Service (SaaS) solutions. There are a variety of strategies that can be used to partition tenant data, each with a unique set of nuances that shape your approach to multitenancy. Adding to this complexity is the need to map each of these strategies to the different storage models offered by AWS, such as Amazon DynamoDB, Amazon Relational Database Service (Amazon RDS), and Amazon Redshift. Although there are high-level themes you can apply universally to these technologies, each storage model has its own approach to scoping, managing, and securing data in a multitenant environment. This paper offers SaaS developers insights into a range of data partitioning options, allowing them to determine which combination
of strategies and storage technologies best align with the needs of their SaaS environment Are you WellArchitected? The AWS WellArchitected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable secure efficient costeffective and sustainable systems Using the AWS WellArchitected Tool available at no charge in the AWS Management Console you can review your workloads against these best practices by answering a set of questions for each pillar In the SaaS Lens we focus on best practices for architecting your software as a service (SaaS) workloads on AWS For more expert guidance and best practices for your cloud architecture—reference architecture deployments diagrams and whitepapers—refer to the AWS Architecture Center Introduction AWS offers Software as a Service (SaaS) developers a rich collection of storage solutions each with its own approach to scoping provisioning managing and securing data The way that each service represents indexes and stores data adds a unique set of considerations to your multitenant strategy As a SaaS developer the diversity of these storage options represents an opportunity to align the storage needs of your SaaS solution with the storage technologies that best match your business and customer needs As you weigh AWS storage options you must also consider how the multitenant model of your SaaS solution fits with each storage technology Just as there are multiple flavors of storage there are also multiple flavors of multitenant partition strategies The goal is to find the best intersection of your storage and tenant partitioning needs This paper explores all the moving parts of this puzzle It examines and classifies the models that are typically used to achieve multitenancy and helps you weigh the pros and cons that shape your selection of a partitioning model It also outlines how each model is realized on Amazon RDS Amazon DynamoDB and Amazon Redshift As you dig into each storage technology you’ll learn how to use the AWS constructs to scope and manage your multitenant storage 1 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Introduction Although this paper gives you general guidance for selecting a multitenant partitioning strategy it’s important to recognize that the business technical and operational dimensions of your environment will often introduce factors that will also shape the approach you select In many cases SaaS organizations adopt a hybrid of the variations described in this paper 2 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Silo model SaaS partitioning models To get started you need a welldefined conceptual model to help you understand the various implementation strategies The following figure shows the three basic models—silo bridge and pool—that are commonly used when partitioning tenant data in a SaaS environment Each partitioning model takes a very different approach to managing accessing and separating tenant data The following sections give a quick breakdown of the models giving you the ability to explore the values and tenets of each model outside 
of the context of any specific storage technology.
SaaS partitioning models
Silo model
In the silo model, storage of tenant data is fully isolated from any other tenant data. All constructs that are used to represent the tenant's data are considered logically "unique" to that client, meaning that each tenant will generally have a distinct representation, monitoring, management, and security footprint.
Bridge model
The bridge model often represents an appealing compromise for SaaS developers. Bridge moves all of the tenant data into a single database while still allowing some degree of variation and separation for each tenant. Typically, you achieve this by creating separate tables for each tenant, each of which is allowed to have its own representation of data (schema).
Pool model
The pool model represents the all-in multitenant model, where tenants share all of the system's storage constructs. Tenant data is placed into a common database, and all tenants share a common representation (schema). This requires the introduction of a partitioning key that is used to scope and control access to tenant data. This model tends to simplify a SaaS solution's provisioning, management, and update experience. It also fits well with the continuous delivery and agility goals that are essential to SaaS providers.
Setting the backdrop
The silo, bridge, and pool models provide the backdrop for our discussion. As you dig into each AWS storage technology, you'll discover how the conceptual elements of these models are realized on that specific technology. Some map very directly to these models; others require a bit more creativity to achieve each type of tenant isolation.
It's worth noting that these models are all equally valid. Although we'll discuss the merits of each, the regulatory, business, and legacy dimensions of a given environment often play a big role in shaping the approach you ultimately select. The goal here is to simply bring visibility to the mechanics and tradeoffs associated with each approach.
Finding the right fit
Selecting a multitenant storage partitioning strategy is influenced by many different factors. If you are migrating from an existing solution, you might favor adopting a silo model because it creates the simplest and cleanest way to transition to multitenancy without rewriting your SaaS application, or you might have regulatory or industry dynamics that demand a more isolated model. The efficiency and agility of the pool model might unlock your path to an environment that embraces rapid and continual releases. The key here is to acknowledge that the strategy you select will be driven by a combination of the business and technical considerations in your environment.
In the following sections, we highlight the strengths and weaknesses of each model and provide you with a well-defined set of data points to use as part of your broader assessment. You'll learn how each model influences your ability to align with the agility goals that are often at the core of adopting a SaaS model. When selecting an architectural strategy for your SaaS environment,
consider how that strategy impacts your ability to rapidly build deliver and deploy versions in a zero downtime environment Assessing tradeoffs If you were to put the three partitioning models—silo bridge and pool—on a spectrum you’d see the natural tensions associated with adopting any one of these strategies The qualities that are listed as strengths for one model are often represented as weaknesses in another model For example the tenets and value system of the silo model are often in opposition to those of the pool model Partitioning model tradeoffs The preceding figure highlights these competing tenets Across the top of the diagram you’ll see the three partitioning models represented On the left are the pros and cons associated with the silo model On the right we provide similar lists for the pool model The bridge model is a bit of a hybrid of these considerations and as such represents a mix of the pros and cons shown at the extremes 5 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Pros Silo model tradeoffs Representing tenant data in completely separate databases can be appealing In addition to simplifying migration of existing singletenant solutions this approach also addresses concerns some tenants might have about operating a fully shared infrastructure Pros •Silo is appealing for SaaS solutions that have strict regulatory and security constraints — In these environments your SaaS customers have very specific expectations about how their data must be isolated from other tenants The silo model lets you offer your tenants an option to create a more concrete boundary between tenant data and provides your customers with a sense that their data is stored in a more dedicated model •Crosstenant impacts can be limited — The idea here is that via the isolation of the silo model you can ensure that the activity of one tenant does not impact another tenant This model allows for tenantspecific tuning where the database performance SLAs of your system can be tailored to the needs of a given tenant The knobs and dials that are used to tune the database also generally have a more natural mapping to the silo model which makes it simpler to configure a tenantcentric experience •Availability is managed at the tenant level minimizing tenant exposure to outages — With each tenant in their own database you don’t have to be concerned that a database outage might cascade across all of your tenants If one tenant has data issues they are unlikely to adversely impact any of the other tenants of the system Cons •Provisioning and management is more complex — Any time you introduce a pertenant piece of infrastructure you’re also introducing another moving part that must be provisioned and managed on a tenantbytenant basis You can imagine for example how a siloed database solution might impact the tenant onboarding experience for your system Your signup process will require automation that creates and configures a database during the onboarding process It’s certainly achievable but it adds a layer of complexity and a potential point of failure in your SaaS environment •Your ability to view and react to tenant activity is undermined — With SaaS you might want a management and monitoring experience that provides a crosstenant view of system health You want to proactively anticipate database performance issues and react with policies in a more 
holistic way. However, the silo model makes you work harder to find and introduce tooling to create an aggregated, system-wide view of health that spans all tenants.
•The distributed nature of a silo model impacts your ability to effectively analyze and assess performance trends across tenants — With each tenant storing data in its own silo, you can only manage and tune service loads in a tenant-centric model. This essentially leads to the introduction of a set of one-off settings and policies that you have to manage and tune independently. This can be both inefficient and could impose overhead that undermines your ability to respond quickly to customer needs.
•Silo limits cost optimization — Perhaps the most significant downside is that the one-off nature of the silo model tends to limit your ability to tune your consumption of storage resources.
Pool model tradeoffs
The pool model represents the ultimate all-in commitment to the SaaS lifestyle. With the pool model, your focus is squarely on having a unified approach to your tenants that lets you streamline tenant storage provisioning, migration, management, and monitoring.
Pros
•Agility — Once all of your tenant data is centralized in one storage construct, you are in a much better position to create tooling and a lifecycle that supports a streamlined, universal approach to rapidly deploying storage solutions for all of your tenants. This agility also extends to your onboarding process. With the pool model, you don't need to provision separate storage infrastructure for each tenant that signs up for your SaaS service. You can simply provision your new tenant and use that tenant's ID as the index to access the tenant's data from the shared storage model used by all of your tenants.
•Storage monitoring and management is simpler — In the pool model, it's much more natural to put tooling and aggregated analytics into place to summarize tenant storage activity. The everyday tools you'd use to manage a single storage model can be leveraged here to build a comprehensive cross-tenant view of your system's health. With the pool model, you are in a much better position to introduce global policies that can be used to proactively respond to system events. Generally, the unification of data into a single database and shared representation simplifies many aspects of the multitenant storage deployment and management experience.
•Additional options help optimize the cost footprint of your SaaS solutions — The cost opportunities often show up in the form of performance tuning. You might, for example, have throughput optimization that is applied across all tenants as one policy (instead of managing separate policies on a tenant-by-tenant basis).
•Pool improves deployment automation and operational agility — The shared nature of the pool model generally reduces the overall complexity of your database deployment automation, which aligns nicely with the SaaS demand for continual and frequent releases of new product capabilities.
Cons
•Agility means a higher bar for managing scale and availability — Imagine the impact of a storage outage in a pooled multitenant environment. Now, instead of having one customer down, all of your customers are down. This is why organizations that adopt a pool model also tend to invest much more heavily in the automation and testing of their environments. A pooled
solution demands proactive monitoring and robust versioning, data, and schema migration. Releases must go smoothly, and tenant issues need to be captured and surfaced efficiently.
•Pool challenges management of tenant data distribution — In some instances, the size and distribution of tenant data can also become a challenge with pooled storage. Tenants tend to impose widely varying levels of load on your system, and these variations can undermine your storage performance. The pool model requires more thought about the mechanisms that you will employ to account for these variations in tenant load. The size and distribution of data can also influence how you approach data migration. These issues are typically unique to a given storage technology and need to be addressed on a case-by-case basis.
•The shared nature of the pooled environment can meet resistance in some domains — For some SaaS products, customers will demand a silo model to address their regulatory and internal data protection requirements.
Hybrid: The business compromise
For many organizations, the choice of a strategy is not as simple as selecting the silo, bridge, or pool model. Your tenants and your business are going to have a significant influence on how you approach selection of a storage strategy. In some cases, a team might identify a small collection of their tenants that require the silo or bridge model. Once they've made this determination, they assume that they have to implement all of the storage with that model. This artificially limits your ability to embrace those tenants that may be open to a pool model. In fact, it may add cost or complexity for a tier of tenants that aren't demanding the attributes of the silo or bridge model.
One possible compromise is to build a solution that fully supports pooled storage as your foundation. Then you can carve out a separate database for those tenants that demand a siloed storage solution. The following figure provides an example of this approach in action.
Hybrid silo/pool storage
Here we have two tenants (Tenant 1 and Tenant 2) that are leveraging a silo model, and the remaining tenants are running in a pooled storage model. The underlying storage is abstracted away by a data access layer that shields developers from each tenant's storage placement; a simplified sketch of such a routing layer appears below. Although this can add a level of complexity to your data access layer and management profile, it can also offer your business a way to tier your offering to represent the best of both worlds.
Data migration
Data migration is one of those areas that is often left out of the evaluation of competing SaaS storage models. However, with SaaS, consider how your architectural choices will influence your ability to continually deploy new features and capabilities. Although performance and general tenant experience are important to emphasize, it's also essential to consider how your storage solution will accommodate ongoing changes in the underlying representation of your data.
Migration and multitenancy
Each of the multitenant storage models requires its own unique approach to tackling data migration.
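To make the hybrid model a bit more concrete, the following is a minimal sketch of the kind of tenant-aware data access layer described above. It is an illustration of the routing idea, not a definitive implementation: the catalog structure, table names, and key attributes are all hypothetical, and for simplicity the siloed tenants are modeled as separate DynamoDB tables while pooled tenants share a single table keyed by tenant ID.

```python
# Hypothetical sketch of a tenant-aware data access layer for a hybrid
# silo/pool storage model. Catalog, table, and key names are illustrative only.
import boto3

# Static catalog for the sketch; a real system would load this from a
# configuration store populated during tenant onboarding.
TENANT_CATALOG = {
    "tenant-1": {"model": "silo", "table": "Tenant1_Customer"},
    "tenant-2": {"model": "silo", "table": "Tenant2_Customer"},
    # Every other tenant falls through to the shared (pooled) table.
}

POOLED_TABLE = "Customer"  # shared table, partitioned by tenant ID
dynamodb = boto3.resource("dynamodb")

def get_customer(tenant_id: str, customer_id: str) -> dict:
    """Resolve the tenant's storage placement, then fetch one customer record."""
    placement = TENANT_CATALOG.get(tenant_id, {"model": "pool", "table": POOLED_TABLE})
    table = dynamodb.Table(placement["table"])

    if placement["model"] == "silo":
        # Siloed tenants own their table, so the key is just the customer ID.
        key = {"CustomerID": customer_id}
    else:
        # Pooled tenants share a table, so the tenant ID scopes every request.
        key = {"TenantID": tenant_id, "CustomerID": customer_id}

    return table.get_item(Key=key).get("Item", {})
```

The design point this sketch highlights is that application code calls get_customer without knowing whether a tenant is siloed or pooled; the placement decision lives in one layer, which is also the natural place to centralize tenant-scoped logging, policies, and the migration hooks discussed next.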
In the silo and bridge models you can migrate data on a tenantbytenant basis Your organization may find this appealing because it allows you to carefully migrate each SaaS tenant without exposing all tenants to the possibility of a migration error However this approach can introduce more complexity into the overall orchestration of your deployment lifecycle Migrating data in the pool model can be both appealing and challenging In one respect migration in a pool model provides a single point that once migrated has all tenants successfully transitioned to your new data model On the other hand any problem introduced during a pool migration could impact all of your tenants From the outset you should be thinking about how data migration fits into your overall multitenant SaaS strategy If you bake this migration orchestration into your delivery pipeline early you tend to achieve a greater degree of agility in your release process Minimizing invasive changes As a rule of thumb you should have clear policies and tenets to follow as you consider how the data in your systems will evolve Wherever possible teams should favor data changes that have backward compatibility with earlier versions If you can find ways to minimize changes to your application’s data representation you will limit the high overhead of transforming your data into a new representation You can leverage commonly used tools and techniques to orchestrate the migration process In reality while minimizing invasive changes is often of great importance to SaaS developers it’s not unique to the SaaS domain As such it’s beyond the scope of what we’ll cover in this paper 9 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Isolation and security Security considerations Data security must be a top priority for SaaS providers When adopting a multitenant strategy your organization needs a robust security strategy to ensure that tenant data is effectively protected from unauthorized access Protecting this data and conveying that your system has employed the appropriate security measures is essential to gaining the trust of your SaaS customers The storage strategies you choose are likely to use common security patterns supported on AWS Encrypting data at rest for example is a horizontal strategy that can be applied universally across any of the models This provides a foundational level of security which ensures that—even if there is unauthorized access to data—it would be useless without the keys needed to decrypt the information Now as you look at the security profiles of the silo bridge and pool models you will notice additional variations in how security is realized with each one You’ll discover that AWS Identity and Access Management (Amazon IAM) for example has nuances in how it can scope and control access to tenant data In general the silo and bridge models have a more natural fit with IAM policies because they can be applied to limit access to entire databases or tables Once you cross over to a pool model you may not be in a position to leverage IAM to scope access to the data Instead more responsibility shifts to the authorization models of your application’s services These services must use a user’s identity to resolve the scope and control they have over data in a shared representation Isolation and security Supporting tenant isolation is fundamental for some organizations and 
domains The notion that data is separated—even in a virtualized environment—can be seen as essential to SaaS providers that have specific regulatory or security requirements As you consider each AWS storage solution think about how isolation is achieved on each of the AWS storage services As you will see achieving isolation on RDS looks very different from how it does on DynamoDB Consider these differences as you select your storage strategy and assess the security considerations of your customers 10 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Aggregating storage trends Management and monitoring The approach you adopt for multitenant storage can have a significant impact on the management and monitoring profile of your SaaS solution In fact the complexity and approach you take to aggregate and analyze system health can vary significantly for each storage model and AWS technology Aggregating storage trends To build an effective operational view of SaaS storage you need metrics and dashboards that provide you with an aggregated view of tenant activity You have to be able to proactively identify storage trends that could be influencing the experience spanning all of your tenants The mechanisms you need to create this aggregated view look very different in the silo and pool models With siloed storage you must put tooling in place to collect the data from each isolated database and surface that information in an aggregated model In contrast the pool model by its nature already has an aggregated view of tenant activity Tenantcentric views of activity Your management and monitoring storage solution should provide a way to create tenantcentric views of your storage activity If a particular tenant is experiencing a storage issue you’ll want to be able to drill into the storage metrics and profile data to identify what could be impacting that individual tenant Here the silo model aligns more naturally with constructing a tenantcentric view of storage activity A pooled storage strategy will require some tenant filtering mechanism to extract storage activity for a given tenant Policies and alarms Each AWS storage service has its own mechanisms for evaluating and tuning your application’s storage performance Because storage can often represent a key bottleneck of your system you should introduce monitoring policies and alarms that will allow you to surface and respond to changes in the health of your application’s storage The partitioning model you choose will also impact the complexity and manageability of your storage monitoring strategy The more siloed your solution the more moving parts to manage and maintain on a tenantbytenant basis In contrast the shared nature of a pooled storage strategy makes it simpler to have a more centralized crosstenant collection of policies and alarms The overall goal with these storage policies is to put in place a set of proactive rules that can help you anticipate and react to health events As you select a multitenant storage model consider how each approach might influence how you implement your system’s storage policies and alarms 11 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Tiered storage models AWS provides developers with a wide 
range of storage services each of which can be applied in combinations to address the varying cost and performance requirements of SaaS tenants The key here is not to artificially constrain your storage strategy to any one AWS service or storage technology As you profile your application’s storage needs take a more granular approach to matching the strengths of a given storage service with the specific requirements of the various components of your application DynamoDB for example might be a great fit for one application service while RDS might be a better fit for another If you use a microservice architecture for your solution where each service has its own view of storage think about which storage technology best fits each service’s profile It’s not uncommon to find a spectrum of different storage solutions in use across the set of microservices that make up your application This strategy also creates an opportunity to use storage as another way of tiering your SaaS solution Each tier could essentially leverage a separate storage strategy offering varying levels of performance and SLAs that would distinguish the value proposition of your solution’s tiers By using this approach you can better align the tenant tiers with the cost and load they are imposing on your infrastructure 12 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS The developer experience As a general architectural principle developers typically attempt to introduce layers or frameworks that centralize and abstract away horizontal aspects of their applications The goal here is to centralize and standardize policies and tenant resolution strategies You might for example introduce a data access layer that would inject tenant context into data access requests This would simplify development and limit a developer’s awareness of how tenant identity flows through the system Having this layer in place also provides you with more options for policies and strategies that might vary on a tenantbytenant basis It also creates a natural opportunity to centralize configuration and tracking of storage activity 13 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Linked account silo model Before digging into specifics of each storage service let’s look at how you can use AWS Linked Accounts to implement the silo model on top of any of the AWS storage solutions To achieve a silo with this approach your solution needs to provision a separate Linked Account for every tenant This can truly achieve a silo because the entire infrastructure for a tenant is completely isolated from other tenants The Linked Account approach relies on the Consolidated Billing feature that allows customers to associate child accounts with an overall payer account The idea here is that—even with separate linked accounts for each tenant—the billing for these tenants is still aggregated and presented as part of a single bill to the payer account The following figure shows a conceptual view of how Linked Accounts are used to implement the silo model Here you have two tenants with separate accounts each of which is associated with a payer account With this flavor of isolation you have the freedom to leverage any of the available AWS storage technologies 
to house your tenant’s data Silo model with linked accounts At first blush this can seem like a very appealing strategy for those SaaS providers that require a silo environment It certainly can simplify some aspects of management and migration of individual tenants Assembling a view of your tenant costs would also be more straightforward because you can summarize the AWS expenses at the Linked Account level Even with these advantages the Linked Account silo model has important limitations Provisioning for example is certainly more complex In addition to creating the tenant’s infrastructure you need to automate the creation of each Linked Account and adjust any limits that need it The larger challenge however is scale AWS has constraints on the number of Linked Accounts you can create and these limits aren’t likely to align with environments that will be creating a large number of new SaaS tenants 14 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Silo model Multitenancy on DynamoDB The nature of how data is scoped and managed by DynamoDB adds some new twists to how you approach multitenancy Although some storage services align nicely with the traditional data partitioning strategies DynamoDB has a slightly less direct mapping to the silo bridge and pool models With DynamoDB you have to consider some additional factors when selecting your multitenant strategy The sections that follow explore the AWS mechanisms that are commonly used to realize each of the multitenant partitioning schemes on DynamoDB Silo model Before looking at how you might implement the silo model on DynamoDB you must first consider how the service scopes and controls access to data Unlike RDS DynamoDB has no notion of a database instance Instead all tables created in DynamoDB are global to an account within a region That means every table name in that region must be unique for a given account Silo model with DynamoDB tables If you implement a silo model on DynamoDB you have to find some way to create a grouping of one or more tables that are associated with a specific tenant The approach must also create a secure controlled view of these tables to satisfy the security requirements of silo customers preventing any possibility of crosstenant data access The preceding figure shows one example of how you might achieve this tenantscoped grouping of tables Notice that two tables are created for each tenant (Account and Customer) These tables also have a tenant identifier that is prepended to the table names This addresses DynamoDB’s table naming requirements and creates the necessary binding between the tables and their associated tenants 15 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Bridge model Access to these tables is also achieved through the introduction of IAM policies Your provisioning process needs to automate the creation of a policy for each tenant and apply that policy to the tables owned by a given tenant This approach achieves the fundamental isolation goals of the silo model defining clear boundaries between each tenant’s data It also allows for tuning and optimization on a tenantbytenant basis You can tune two specific areas: •Amazon CloudWatch Metrics can be captured at the table level 
simplifying the aggregation of tenant metrics for storage activity • Table write and read capacity measured as input and output per second (IOPS) are applied at the table level allowing you to create distinct scaling policies for each tenant The disadvantages of this model tend to be more on the operational and management side Clearly with this approach your operational views of a tenant require some awareness of the tenant table naming scheme to filter and present information in a tenantcentric context The approach also adds a layer of indirection for any code that needs to interact with these tables Each interaction with a DynamoDB table requires you to insert the tenant context to map each request to the appropriate tenant table SaaS providers that adopt a microservicebased architecture also have another layer of considerations With microservices teams typically distribute storage responsibilities to individual services Each service is given the freedom to determine how it stores and manages data This can complicate your isolation story on DynamoDB requiring you to expand your population of tables to accommodate the needs of each service It also adds another dimension of scoping where each table for each service identifies its binding to a service To offset some of these challenges and better align with DynamoDB best practices consider having a single table for all of your tenant data This approach offers several efficiencies and simplifies the provisioning management and migration profile of your solution In most cases using separate DynamoDB tables and IAM policies to isolate your tenant data addresses the needs of your silo model Your only other option is to consider the Linked Account silo model (p 14) described earlier However as outlined previously the Linked Account isolation model comes with additional limitations and considerations Bridge model For DynamoDB the line between the bridge model and silo model is very blurry Essentially if your goal using the bridge model is to have a single account with oneoff schema variation for each client you can see how that can be achieved with the silo model described earlier For bridge the only question would be whether you might relax some of the isolation requirements described with the silo model You can achieve this by eliminating the introduction of any tablelevel IAM policies Assuming your tenants aren’t requiring full isolation you could argue that removing the IAM policies could simplify your provisioning scheme However even in bridge there are merits to the isolation So although dropping the IAM isolation might be appealing it’s still a good SaaS practice to leverage constructs and policies that can constrain crosstenant access Pool model Implementing the pool model on DynamoDB requires you to step back and consider how the service manages data As data is stored in DynamoDB the service must continually assess and partition the data to achieve scale And if the profile of your data is evenly distributed you could simply rely on this underlying partitioning scheme to optimize the performance and cost profile of your SaaS tenants 16 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Pool model The challenge here is that data in a multitenant SaaS environment doesn’t typically have a uniform distribution SaaS tenants come in all shapes and sizes and as such their data is anything 
but uniform It’s very common for SaaS vendors to end up with a handful of tenants that consume the largest portion of their data footprint Knowing this you can see how it creates problems for implementing the pool model on top of DynamoDB If you simply map tenant identifiers to a DynamoDB partition key you’ll quickly discover that you also create partition “hot spots” Imagine having one very large tenant who would undermine how DynamoDB effectively partitions your data These hot spots can impact the cost and performance of your solution With the suboptimal distribution of your keys you need to increase IOPS to offset the impact of your hot partitions This need for higher IOPS translates directly into higher costs for your solution To solve this problem you have to introduce some mechanism to better control the distribution of your tenant data You’ll need an approach that doesn’t rely on a single tenant identifier to partition your data These factors all lead down a single path—you must create a secondary sharding model to associate each tenant with multiple partition keys Let’s look at one example of how you might bring such a solution to life First you need a separate table which we’ll call the “tenant lookup table” to capture and manage the mapping of tenants to their corresponding DynamoDB partition keys The following figure represents an example of how you might structure your tenant lookup table Introducing a tenant lookup table This table includes mappings for two tenants The items associated with these tenants have attributes that contain sharding information for each table that is associated with a tenant Here our tenants both have sharding information for their Customer and Account tables Also notice that for each tenanttable combination there are three pieces of information that represent the current sharding profile for a table These are: •ShardCount — An indication of how many shards are currently associated with the table •ShardSize — The current size of each of the shards •ShardId — A list of partition keys mapped to a tenant (for a table) With this mechanism in place you can control how data is distributed for each table The indirection of the lookup table gives you a way to dynamically adjust a tenant’s sharding scheme based on the amount of data it is storing Tenants with a particularly large data footprint will be given more shards Because the model configures sharding on a tablebytable basis you have much more granular control over 17 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Pool model mapping a tenant’s data needs to a specific sharding configuration This allows you to better align your partitioning with the natural variations that often show up in your tenant’s data profile Although introducing a tenant lookup table provides you with a way to address tenant data distribution it does not come without a cost This model now introduces a level of indirection that you have to address in your solution’s data access layer Instead of using a tenant identifier to directly access your data first consult the shard mappings for that tenant and use the union of those identifiers to access your tenant data The following sample Customer table shows how data would be represented in this model Customer table with shard IDs In this example the ShardID is a direct mapping from the tenant lookup table That tenant lookup 
table included two separate lists of shard identifiers for the Customer table one for Tenant1 and one for 18 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Managing shard distribution Tenant2 These shard identifiers correlate directly to the values you see in this sample customer table Notice that the actual tenant identifier never appears in this Customer table Managing shard distribution The mechanics of this model aren’t particularly complex The problem gets more interesting when you think about how to implement a strategy that effectively distributes your data How do you detect when a tenant requires additional shards? Which metrics and criteria can you collect to automate this process? How do the characteristics of your data and domain influence your data profile? There is no single approach that universally resolves these questions for every solution Some SaaS organizations manually tune this based on their customer insights Others have more natural criteria that guide their approach The approach outlined here is one way you might choose to handle the distribution of your data Ultimately you’ll likely find a hybrid of the principles we describe that best aligns with the needs of your environment The key takeaway is that if you adopt the pool model be aware of how DynamoDB partitions data Moving in data blindly without considering how the data will be distributed will likely undermine the performance and cost profile of your SaaS solution Dynamically optimizing IOPS The IOPS needs of a SaaS environment can be challenging to manage The load tenants place on your system can vary significantly Setting the IOPS to some worst case maximum level undermines the desire to optimize costs based on actual load Instead consider implementing a dynamic model where the IOPS of your tables are adjusted in real time based on the load profile of your application Dynamic DynamoDB is one configurable opensource solution you can use to address this problem Supporting multiple environments As you think about the strategies outlined for DynamoDB consider how each of these models will be realized in the presence of multiple environments (QA development production etc) The need for multiple environments impacts how you further partition your experience to separate out each of your storage strategies on AWS With the bridge and pool models for example you can end up adding a qualifier to your table names to provide environment context This adds a bit of misdirection that you must factor into your provisioning and runtime resolution of table names Migration efficiencies The schemaless nature of DynamoDB offers real advantages for SaaS providers allowing you to apply updates to your application and migrate tenant data without introducing new tables or replication DynamoDB simplifies the process of migrating tenants between your SaaS versions and allows you to simultaneously host agile tenants on the latest version of your SaaS solution while allowing other tenants to continue using an earlier version Weighing the tradeoffs Each of the models has tradeoffs to consider as you determine which model best aligns with your business needs The silo pattern may seem appealing but the provisioning and management add a 19 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers 
SaaS Storage Strategies Building a multitenant storage model on AWS Weighing the tradeoffs dimension of complexity that undermines the agility of your solution Supporting separate environments and creating unique groups of tables will undoubtedly impact the complexity of your automated deployment The bridge represents a slight variation of the silo model on DynamoDB As such it mirrors most of what we find with the silo model The pool model on DynamoDB offers some significant advantages The consolidated footprint of the data simplifies the provisioning migration and management and monitoring experiences It also allows you to take a more multitenant approach to optimizing consumption and tenant experience by tuning the read and write IOPS on a crosstenant basis This allows you to react more broadly to performance issues and introduces opportunities to minimize cost These factors tend to make the pool model very appealing to SaaS organizations 20 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Silo model Multitenancy on RDS With so many early SaaS systems delivered on relational databases the developer community has established some common patterns for address multitenancy in these environments In fact RDS has a more natural mapping to the silo bridge and pool models The construct and representation of data in RDS is very much an extension of nonmanaged relational environments The basic mechanisms that are available in MySQL for example are also available to you in RDS This makes the realization of multitenancy on all of the RDS flavors relatively straightforward The following sections outline the various strategies that are commonly employed to realize the partitioning models on RDS Silo model You can achieve the silo pattern on AWS in multiple ways However the most common and simplest approach for achieving isolation is to create separate database instances for each tenant Through instances you can achieve a level of separation that typically satisfies the compliance needs of customers without the overhead of provisioning entirely separate accounts RDS instances as silos The preceding figure shows a basic silo model as it could be realized on top of RDS Here two separate instances are provisioned for each tenant The diagram depicts a master database and two read replicas for each tenant instance This is an optional concept to highlight how you can use this approach to set up and configure an optimized highly available strategy for each tenant 21 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Bridge model Bridge model Achieving the bridge model on RDS fits the same themes we see across all the storage models The basic approach is to leverage a single instance for all tenants while creating separate representations for each tenant within that database This introduces the need to have provisioning and runtime table resolution to map each table to a given tenant The bridge model offers you the opportunity to have tenants with different schemas and some flexibility when migrating tenant data You could for example have different tenants running different versions of the product at a given moment in time and gradually migrate schema changes on a tenantbytenant basis The 
following figure provides an example of one way you can implement the bridge model on RDS In this diagram you have a single RDS database instance that contains separate customer tables for Tenant1 and Tenant2 Example of a bridge model on RDS This example highlights the ability to have schema variation at the tenant level Tenant1’s schema has a Status column while that column is removed and replaced by the Gender column used by Tenant2 Another option here would be to introduce the notion of separate databases for each tenant within an instance The terminology varies for each flavor of RDS Some RDS storage containers refer to this as a database; others label it as a schema 22 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Pool model RDS bridge with separate tables/schemas The preceding figure provides an illustration of this alternate bridge model Notice that we created databases for each of the tenants and the tenants then have their own collection of tables For some SaaS organizations this scopes the management of their tenant data more naturally avoiding the need to propagate the naming to individual tables This model is appealing but it may not be the best fit for all flavors of RDS Some RDS containers limit the number of databases/schemas that you can create for an instance The SQL Server container for example allows only 30 databases per instance which is likely unacceptable for most SaaS environments Although the bridge model allows for variation from tenant to tenant it’s important to know that typically you should still adopt policies that try to limit schema changes Each time you introduce a schema change you can take on the challenge of successfully migrating your SaaS tenants to the new model without absorbing any downtime So although this model simplifies those migrations it doesn’t promote oneoff tenant schemas or regular changes to the representation of your tenant’s data Pool model The pool model for RDS relies on traditional relational indexing schemes to partition tenant data As part of moving all the tenant data into a shared infrastructure model you store the tenant data in a single RDS instance and the tenants share common tables These tables are indexed with a unique tenant identifier that is used to access and manage each tenant’s data 23 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Factoring in single instance limits RDS pool model with shared schema The preceding figure provides an example of the pool model in action Here a single RDS instance with one Customer table holds data for all of the application’s tenants RDS is an RDBMS so all tenants must use the same schema version RDS is not like DynamoDB which has a flexible schema that allows each tenant to have a unique schema within a single table Factoring in single instance limits Many of the models we described concentrate heavily on storing data in a single instance and partitioning data within that instance Depending on the size and performance needs of your SaaS environment using a single instance might not fit the profile of your tenant data RDS has limits on the amount of data that can be stored in a single instance The following is a breakdown of the limits: • MySQL MariaDB Oracle 
PostgreSQL – 6 TB • SQL Server – 4 TB • Aurora – 64 TB In addition a single instance introduces resource contention issues (CPU memory I/O) In scenarios where a single instance is impractical the natural extension is to introduce a sharding scheme where your tenant data is distributed across multiple instances With this approach you start with a small collection of sharded instances Then continually observe the profile of your tenant data and expand the number of instances to ensure that no single instance reaches limits or becomes a bottleneck 24 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Weighing the tradeoffs Weighing the tradeoffs The tradeoffs of using RDS are fairly straightforward The primary theme is often more about trading management and provisioning complexity for agility Overall the pain points of provisioning automation are likely lower with the silo model on RDS However the cost and management efficiency associated with the pool model is often compelling This is especially significant as you think about how these models will align with your continuous delivery environment 25 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Silo model Multitenancy on Amazon Redshift Amazon Redshift introduces additional twists to factor into your multitenant thinking Amazon Redshift focuses on building highperformance clusters to house largescale data warehouses Amazon Redshift also places some limits on the constructs that you can create within each cluster Consider the following limits: • 60 databases per cluster • 256 schemas per database • 500 concurrent connections per database • 50 concurrent queries • Access to a cluster enables access to all databases in the cluster You can imagine how these limits influence the scale and performance that is delivered to Amazon Redshift You can also see how these limits can impact your approach to multitenancy with Amazon Redshift If you are targeting a modest tenant count these limits might have little influence on your solution However if you’re targeting a large number of tenants you’d need to factor these limits into your overall strategy The following sections highlight the strategies that are commonly used to realize each multitenant storage model on Amazon Redshift Silo model Achieving a true silo model isolation of tenants on Amazon Redshift requires you to provision separate clusters for each tenant Via clusters you can create the welldefined boundary between tenants that is commonly required to assure customers that their data is successfully isolated from crosstenant access This approach best leverages the natural security mechanisms in Amazon RedShift so you can control and restrict tenant access to a cluster using a combination of IAM policies and database privileges IAM controls overall cluster management and the database privileges are used to control access to data within the cluster The silo model gives you the opportunity to create a tuned experience for each tenant With Amazon Redshift you can configure the number and type of nodes in your cluster so that you can create environments that target the load profile of each individual tenant You can also use this as a strategy for optimizing costs The challenge of 
this model as we’ve seen with other silo models is that each tenant’s cluster must be provisioned as part of the onboarding process Automating this process and absorbing the extra time and overhead associated with the provisioning process adds a layer of complexity to your deployment footprint It also has some impact on the speed with which a new tenant can be allocated Bridge model The bridge model does not have a natural mapping on Amazon Redshift Technically you could create separate schemas for each tenant However you would likely run into issues with the Amazon Redshift limit of 256 schemas In environments with any significant number of tenants this simply doesn’t scale Security is also a challenge for Amazon Redshift in the bridge model When you are authorized as a 26 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Pool model user of an Amazon Redshift cluster you are granted access to all the databases within that cluster This pushes the responsibility for enforcing finergrained access controls to your SaaS application Given the motives for the bridge model and these technical considerations it seems impractical for most SaaS providers to consider using this approach on Amazon Redshift Even if the limits are manageable for your solution the isolation profile is likely unacceptable to your customers Ultimately the best answer is to simply use the silo model for any tenant that requires isolation Pool model Building the pool model on Amazon Redshift looks very much like the other storage models we’ve discussed The basic idea is to store data for all tenants in a single Amazon Redshift cluster with shared databases and tables In this approach the data for tenants is partitioned via the introduction of a column that represents a unique tenant identifier This approach gives most of the goodness that we saw with the other pool models Certainly the overall management monitoring and agility are improved by housing all of the tenant data in a single Amazon Redshift cluster The limit on concurrent connections is the area that adds a degree of difficulty to implementing the pool model on Amazon Redshift With an upper limit of 500 concurrent connections many multitenant SaaS environments can quickly exceed this limit This doesn’t eliminate the pool model from contention Instead it pushes more responsibility to the SaaS developer to put an effective strategy in place to manage how and when these connections are consumed and released There are some common ways to address connection management Developers often leverage client based caching to limit their need for actual connections to Amazon Redshift Connection pooling can also be applied in this model Developers need to select a strategy that ensures that the data access patterns of their application can be met effectively without exceeding the Amazon Redshift connection limit Adopting the pool model also means keeping your eye on the typical issues that come up any time you’re operating in a shared infrastructure The security of your data for example requires some application level policies to limit crosstenant access Also you likely need to continually tune and refine the performance of your environment to prevent any one tenant from degrading the experience of others 27 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: 
https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Keeping an eye on agility The matrix of multitenant storage options can be daunting It can be challenging to identify the solution that represents the best mix of flexibility isolation and manageability Although it’s important to consider all the options it’s also essential to continually factor agility into your multitenant storage thinking The success of SaaS organizations is often heavily influenced by the amount of agility that is baked into their solution The storage technology and isolation model you select directly impacts your ability to easily deploy new features and functionality The shape of your structure and content of your data often change to support new features and this means your underlying storage model must accommodate these changes without requiring downtime Each isolation model has pros and cons when it comes to supporting this seamless migration As you consider your options give these factors the appropriate weight While the silo bridge and pool models all have an agility footprint you can apply common tenets to help you remain as nimble as possible A key tenet is the rather obvious but occasionally violated need to minimize oneoff variations for tenant data The silo and bridge models for example can lead to storage variations that can complicate your ability to push out new features to all of your SaaS customers as part of a single automated event Teams often use automation and continuous deployment to limit the amount of friction introduced by their multitenant storage strategy As you settle into a storage strategy expect and embrace the reality that your storage requirements continually evolve The needs of SaaS customers are a moving target and the storage model you pick today might not be a good fit tomorrow AWS also continues to introduce new features and services that can represent new opportunities to enhance your approach to storage 28 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers SaaS Storage Strategies Building a multitenant storage model on AWS Conclusion The storage needs of SaaS customers aren’t simple The reality of SaaS is that your business’s domain customers and legacy considerations affect how you determine which combination of multitenant storage options best meet the needs of your business Although there is no single strategy that universally fits every environment it is clear that some models do align better with the core tenets of the SaaS delivery model In general the poolbased approaches to storage—on any AWS storage technology—align well with the need for a unified approach to managing and operating a multitenant environment Having all your tenants in one shared repository and representation streamlines and unifies your approach’s operational and deployment footprint enabling crosstenant views of health and performance The silo and bridge models certainly have their place and for some SaaS providers are absolutely required The key here is that if you head down this path agility can get more complicated Some AWS storage technologies are better positioned to support isolated tenant storage schemes Building a silo model on RDS for example is less complex than it is on DynamoDB Generally whenever you rely on linked accounts as your partitioning model you will tackle more provisioning management and scaling challenges Beyond the mechanics of achieving 
multitenancy, think about how the profile of each AWS storage technology can fit with the varying needs of your multitenant application's functionality. Consider how tenants will access the data and how the shape of that data will need to evolve to meet the needs of your tenants. The more you can decompose your application into autonomous services, the better positioned you are to pick and choose separate storage strategies for each service. After exploring these services and partitioning schemes, you should have a much better sense of the patterns and inflection points that will guide your selection of a multitenant storage strategy. AWS equips SaaS providers with a rich palette of services and constructs that can be combined to address any number of multitenant storage needs.

Contributors

The following individuals and organizations contributed to this document:
• Tod Golding, Partner Solutions Architect, AWS Partner Program
• Clinton Ford, Senior Product Marketing Manager, DynamoDB
• Zach Christopherson, Database Engineer, Amazon Redshift
• Brian Welker, Principal Product Owner, RDS MySQL and MariaDB

Document revisions

To be notified about updates to this whitepaper, subscribe to the RSS feed.
• Whitepaper updated: Updated for latest technical accuracy (May 6, 2020)
• Initial publication: Whitepaper published (November 6, 2016)

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2022 Amazon Web Services, Inc. or its affiliates. All rights reserved.

AWS glossary

For the latest AWS terminology, see the AWS glossary in the AWS General Reference.
|
General
|
consultant
|
Best Practices
|
SAP_HANA_on_AWS_Operations_Overview_Guide
|
SAP HANA on AWS Operations Overview Guide December 2017 The PDF version of the paper has been archived For the latest HTML version of the paper see: https://docsawsamazoncom/sap/latest/saphana/saphanaonawsoperationshtml Archived© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the info rmation in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditio ns or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its custom ers Archived Contents Introduction 1 Administration 1 Starting and Stopping EC2 Instances Running SAP HANA Hosts 2 Tagging SAP Resources on AWS 2 Monitoring 4 Automation 4 Patching 5 Backup/Recovery 7 Creating an Image of an SAP HANA System 8 AWS Services and Components for Backup Solutions 9 Backup Destination 11 AWS Command Line Interface 12 Backup Example 13 Scheduling and Executing Backups Remotely 14 Restoring SAP HANA Backups and Snapshots 19 Networking 21 EBS Optimized Instances 22 Elastic Network Interfaces (ENIs) 22 Security Groups 23 Network Conf iguration for SAP HANA System Replication (HSR) 24 Configuration Steps for Logical Network Separation 25 SAP Support Access 26 Support Channel Setup with SAProuter on AWS 26 Support Channel Setup with SAProuter On Premises 28 Security 29 OS Hardening 29 Archived Disabling HANA Services 29 API Call Logging 29 Notifications on Access 30 High Availability and Disaster Recovery 30 Conclusion 30 Contributors 30 Appendix A – Configuring Linux to Recognize Ethernet Devices for Multiple ENIs 31 Notes 33 Archived Abstract Amazon Web Services (AWS) offers you the ability to run your SAP HANA systems of various sizes and operating systems Running SAP systems on AWS is very similar to running SAP systems in your data center To a SAP Basis or NetWeaver administrator there are minimal differences between the two environments There are a number of AWS Cloud considerations relating to security storage compute configurations management and monitoring that will help you get the most out of your SAP HANA implementatio n on AWS This whitepaper provides the best practices for deployment operations and management of SAP HANA systems on AWS The target audience for this whitepaper is SAP Basis and NetWeaver administrators who have experience running SAP HANA systems in an onpremises environment and want to run their SAP HANA systems on AWS ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 1 Introduction This guide provides best practice s for operating SAP HANA systems that have been deployed on Amazon Web Services (AWS) either using the SAP HANA Quick Start reference deployment process1 or manually following the instructions in Setting up AWS Resources and the SLES Operating System for SAP HANA Installation 2 This guide is not intended to replace any of the standard SAP documentation See the following SAP guides and notes: o SAP Library (helpsapcom) SAP HANA 
Administration Guide3 o SAP installation gui des4 (These require SAP Support Portal access ) o SAP notes5 (These require SAP Support Portal access ) This guide assumes that you have a basic kno wledge of AWS If you are new to AWS read the following guides before continuing with this guide: o Getting Started with AWS6 o What is Amazon EC2?7 In addition the following SAP on AWS guides can be found here:8 o SAP on AWS Implementation and Operations Guide provides best practices for achieving optimal performance availability and reliability and lower total cost of ownership (TCO) while running SAP solutions on AWS9 o SAP on AWS High Availability Guide explains how to configure SAP systems on Amazon Elastic Compute Cloud (Amaz on EC2 ) to protect your application from various single points of failure10 o SAP on AWS Backup and Recovery Guide explains how to back up SAP systems running on AWS in contrast to backing up SAP systems on traditional infrastructure11 Administration This section provides guidance on common administrative tasks required to operate an SAP HANA system including information about starting stopping and cloning systems ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 2 Start ing and Stopping EC2 Instances Running SAP HANA Hosts At any time you can stop one or multiple SAP HANA h osts Before stopping the EC2 instance of an SAP HANA host first stop SAP HANA on that instance When you resume the instance it will automatically start with the same IP address network and storage configuration as before You also have the option of using the EC2 Scheduler to schedule starts and stops of your EC2 instances12 The EC2 Scheduler relies on the native shutdown and start up mechanisms of the operating sy stem These native mechanisms will invoke the orderly shutdown and startup of your SAP HANA instance Here is an architectural diagram of how the EC2 S cheduler work s: Figure 1: EC2 Scheduler Tagging SAP Resources on AWS Tagging your SAP resources on AWS can significantly simplify identification security manageability and billing of those resources You can tag your resources using the AWS Management C onsole or by using the createtags functionality of the AWS Command Line Interface (AWS CLI ) This table lists some example tag name s and tag values : Tag Name Tag Value Name SAP server’s virtual (host) name ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 3 Tag Name Tag Value Environment SAP server’s landscape role such as: SBX DEV QAT STG PRD etc Application SAP solution or product such as: ECC CRM BW PI SCM SRM EP etc Owner SAP point of contact Service Level Know n uptime and downtime schedule After you have tagged your resources you can then apply specific security restrictions to them for example access control based on the tag values Here is an example of such a policy from our AWS blog :13 { "Version" : "2012 1017" "Statement" : [ { "Sid" : "LaunchEC2Instances" "Effect" : "Allow" "Action" : [ "ec2:Describe*" "ec2:RunInstances" ] "Resource" : [ "*" ] } { "Sid" : "AllowActionsIfYouAreTheOwner" "Effect" : "Allow" "Action" : [ "ec2:StopInstances" "ec2:StartInstances" "ec2:RebootInstances" "ec2:TerminateInstances" ] "Condition" : { "StringEquals" : { "ec2:ResourceTag/PrincipalId" : "${aws:userid}" } } ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 4 "Resource" : [ "*" ] } ] } The AWS Identity and Access Management ( IAM ) policy only allows specific permissions based on the tag value In this scenario the 
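To complement the tag examples and the tag-based IAM policy above, the following is a hedged AWS CLI sketch that applies the example tags from the table to an SAP HANA instance; the instance ID and every tag value are placeholders you would replace with your own naming conventions.

# Apply example SAP tags to an EC2 instance (placeholder instance ID and values)
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=Name,Value=saphana01 \
         Key=Environment,Value=PRD \
         Key=Application,Value=ECC \
         Key=Owner,Value=basis-team@example.com \
         'Key=Service Level,Value=24x7'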
current user ID must match the tag value in order to be granted permissions For more information on tagging refer to our AWS documentation and our AWS blog 14 15 Monitoring There are various AWS SAP and third party solutions that you can leverage for monitoring your SAP workloads Here are some of the core AWS monitoring services: • Amazon CloudWatch – CloudWatch is a monitoring service for AWS resources16 It’s critical for SAP workloads where it’s used to collect resource utilization logs and create alarms to automatically react to changes in AWS resources • AWS CloudTrail – CloudTrail keeps track of all API calls made within your AWS account It captures key metrics about the API calls and can be useful for automating trail creation for your SAP resources Configuring CloudWatch detailed monitoring for SAP resources is mandatory for getting AWS and SAP support You can use native AWS monitoring services in a compl ement ary fashion with the SAP Solution Manager Third party monitoring tools can be found on AWS Marketplace 17 Automation AWS offers multiple options for programmatically scripting your resources to operate or scale them in a predictable and repeatable manner You can leverage AWS CloudFormation to aut omate and operate SAP systems on AWS Here are some examples for automating your SAP environment on AWS: ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 5 Area Activities AWS Services Infrastructure Deployment Provision new SAP environment SAP system cloning AWS CloudFormation18 AWS CLI19 Capacity Management Automate scaleup/scaleout of SAP application servers AWS Lambda 20 AWS Cloud Formation Operations SAP b ackup automation (see the Backup Example ) Perform ing monitor ing and visualization Amazon CloudWatch Amazon EC2 System s Manager Patching There are two ways for you to patch your SAP HANA database with alternative s for minimizing cost and/or downtime With AWS y ou can provision additional servers as needed to minimize downtime for patching in a cost effective manner You can also minimize risks by creating on demand copies of your existing production SAP HANA databases for life like production readiness testing This table summarizes the tradeoffs of the two patching methods : Patching Method Benefits Technologies Available Patch an existing server [x] Patch existing OS and DB [x] Longest downtime to existing server and DB [] No costs for additional on demand instances [] Lowest levels of relative complexity and setup tasks involved Native OS patching tools Patch Manager21 Native SAP HANA patching tools22 Provision and patch a new server [] Leverage latest AMIs (only DB patch needed) [] Shortest downtime to existing server and DB [] Can patch and test OS and DB separately and together [x] More costs for additional on demand instances [x] More complexity and setup tasks involved Amazon Machine Image (AMI) 23 AWS CLI24 AWS Cloud Formation25 SAP HANA System Replication26 SAP HANA System Cloning27 SAP HANA backups28 SAP Notes : 198488229 Using HANA System Replication for Hardware Exchange with minimum/zero downtime ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 6 Patching Method Benefits Technologies Available 191330230 HANA: Suspend DB connections for short maintenance tasks The first method (patch an existing server) involves patching the operating system (OS) and database (DB) components of your SAP HANA server The goal of the method is to minimize any additional server costs and avoid any tasks needed to set up 
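As a companion to the Monitoring section above, here is a minimal sketch of how you might enable CloudWatch detailed monitoring for an SAP HANA instance and create a basic CPU alarm with the AWS CLI. The instance ID, SNS topic ARN, alarm name, and threshold are placeholder assumptions to adjust to your own landscape and alerting process.

# Enable detailed (1-minute) CloudWatch monitoring for the SAP HANA instance
aws ec2 monitor-instances --instance-ids i-0123456789abcdef0

# Alarm when average CPU utilization stays above 90% for two consecutive 5-minute periods
aws cloudwatch put-metric-alarm \
  --alarm-name saphana01-high-cpu \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 90 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:sap-operations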
additional systems or tests This method may be most appropriate if you have a well defined patching process and are satisfied with your current downtime and costs With this method you must use the correct OS update process and too ls for your Linux distribution S ee this SUSE blog31 and Red Hat FAQ page32 or check each vendor’s documentation for their specific processes and procedures In addition to patching tools provided by our Linux partners AWS offers a free of charge patching service33 called Patch Manager 34 At th e time of this writing Patch Manager support s Red Hat 35 Patch Manager is an automated tool that helps you simplify your OS patching process You can scan your EC2 instances for missing patches and automatically install them select the timing for patch rollouts control instance reboots and many other tasks You can also define auto approval rules for patches with an added ability to black list or white list specific patches control how the patches are deployed on the target instances (eg stop services before applying the patch) and schedule the automatic rollout through maintenance windows The second method (provision and patch a new server) involves provisioning a new EC2 instance that will receive a copy of your source system and database The goal of the method is to minimize downtime minimize risks (by having production data and executing production like testing) and hav e repeatable proc esses This method may be most appropriate if you are looking for higher degrees of automation to enable these goals and are comfortable with the trade offs This method is more complex and has a many more options to fit your requirements Certain options are not exclusive and can be used together For example your AWS CloudFormation template can include the latest Amazon Machine Images ( AMIs ) which you can then use to automate the provisioning set up and configuration of a new SAP HANA server ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 7 Here is an ex ample of a process that can be used to automate OS/HANA patching /upgrade : 1 Download the AWS CloudFormation template offered in the SAP HANA Quick Start 36 2 Update the CloudFormation template with the latest OS AMI ID and execute the updated template to provision a new SAP HANA server The latest OS AMI ID has the specific security patches that your organization needs As part of the provisioning process you need to pro vide the latest SAP HANA installation binaries to get to the required version This allow s you to provision on a new HANA system with the required OS version and security patches along with SAP HANA software versions 3 After the new SAP HANA system is available use one of the following methods to copy the data from the original SAP HANA instance to the newly created system : o SAP HANA native backup/restore o Use SAP HANA System Replication (HSR) technology to replicate the data and then perform an HSR take over o Take snapshots of the old system’s Amazon Elastic Block Store (Amazon EBS ) volumes and create new EBS volumes from it Mount them in the new environment (M ake sure that the HANA SID stays the same for minimal post processing ) o Use new SAP HANA 20 functionality such as SAP HANA Cloning 37 The new system will become a clone of the original system At the end of this process you will have a new SAP HANA system that is ready to test SAP Note 198488238 (Using HANA System Replication for Hardware Exchange with Minimum/Z ero Downtime ) has specifi c recommendations and guidelines on the 
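For step 2 of the process above (updating the CloudFormation template with the latest OS AMI ID), the following is one possible way to look up the most recent SUSE Linux Enterprise Server image in a Region with the AWS CLI. The owner alias and name filter are assumptions that you should adapt to the operating system and image naming convention you actually use.

# Return the AMI ID of the newest matching SLES 12 image in the chosen Region (filter values are illustrative)
aws ec2 describe-images \
  --region us-east-1 \
  --owners amazon \
  --filters "Name=name,Values=suse-sles-12-sp3-v*" "Name=state,Values=available" \
  --query 'sort_by(Images,&CreationDate)[-1].ImageId' \
  --output text

The returned AMI ID can then be passed as a parameter to the updated CloudFormation template when you provision the new SAP HANA server.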
process for promoting to production Backup and Recovery This section provides an overview of the AWS services used in the backup and recovery of SAP HANA systems and provides an example backup and recovery scenario This guide does not include detailed instructions on how to execute ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 8 database backups using native HANA backup and recovery features or third party backup tools Please refer to the standard OS SAP and SAP HANA documentation or the documentation provided by backup software vendor s In addition backup schedules frequency and retention periods m ight vary with your system type and business requirements See the following standard SAP documentation f or guidance on these topics (SAP notes require SAP Support Portal access ) Note : Both general and advanced backup and recovery concepts for SAP systems on AWS can be found in detail in the SAP on AWS Backup and Recovery Guide 39 SAP Note Description 164214840 FAQ: SAP HANA Database Backup & Recovery 182120741 Determining required recovery files 186911942 Checking backups using hdbbackupcheck 187324743 Checking recoverability with hdbbackupdiag check 165105544 Scheduling SAP HANA Database Backups in Linux 248417745 Sche duling backups for multi tenant SAP HANA Cockpit 20 Creating an Image of an SAP HANA System You can use the AWS Management Console or the command line to create your own AMI based on an existing instance46 For more information see the AWS documentation 47 You can use an AMI of your SAP HANA instance for the following purposes: o To c reate a full offline system backup (of the OS / usr/sap HANA shared backup data and log files ) – AMIs are automatically saved in multiple Availability Zones within the same Region o To move a HANA system from one R egion to another – You can create an image of an existing EC2 instance and move it to another Region by following the instructions in the AWS documentation 48 Once the AMI has been copied to the target R egion the new instance can be launched there ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 9 o To c lone an SAP HANA system – You can creat e an AMI of an existing SAP HANA system to create an exact clone of the system See the following section for additional information Note – See the restore section later in this whitepaper to view the recommended restore steps for production environments Tip: The SAP HANA system should be in a consistent state before you creat e an AMI To do this stop the SAP HANA instance before creating the AMI or by following the instructions in SAP Note 1703435 (requires SAP Support Portal access) 49 AWS Services and Components for Backup Solutions AWS provides a number of services and options for storage and backup including Amazon Simple Storage Service ( Amazon S3) AWS Identity and Access Management (IAM) and Amazon Glacier Amazon S3 Amazon S3 is the center of any SAP backup and recovery solution on AWS50 It provides a highly durable storage infrastructure designed for mission critical and primary data storage It is designed to provide 99999999999% durability and 9999% availability over a given year See the Amazon S3 documentation for detailed instructions on how to create and configure an S3 bucket to store your SAP HANA backup files51 AWS IAM With IAM you can securely control access to AWS services and resources for your users52 You can create and manage AWS users and groups and use permissions to grant user access to AWS resources You can 
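To make the "Creating an Image of an SAP HANA System" discussion concrete, here is a hedged AWS CLI sketch. The instance ID, image name, and Regions are placeholders, and the commands assume you have already stopped SAP HANA (or the whole instance) as recommended above so that the image is consistent.

# Create an AMI of the SAP HANA host; --no-reboot assumes the database was already stopped cleanly
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "saphana01-full-backup-2017-12-01" \
  --description "Offline image of SAP HANA host saphana01" \
  --no-reboot

# Optionally copy the resulting AMI to another Region, for example to move or clone the system
aws ec2 copy-image \
  --source-region us-east-1 \
  --source-image-id ami-0123456789abcdef0 \
  --region us-west-2 \
  --name "saphana01-full-backup-2017-12-01"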
create roles in IAM and manage permissions to control which operations can be performed by the entity or AWS service that assumes the role You can also define which entity is allowed to assume the role During the deployment process CloudFormation creates a n IAM role that allow s access to get objects from and/or put objects in to Amazon S3 That role is ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 10 subsequently assigned to each EC2 instance that is hosting SAP HANA master and worker nodes at launch time as they are deployed Figure 2 : IAM r ole example To ensure security that applies the principle of least privilege permissions for this role are limited only to actions that are required for backup and recovery {"Statement":[ {"Resource":"arn:aws:s3::: <yours3bucketname>/*" "Action":["s3:GetObject""s3:PutObject""s3:DeleteObject" "s3:ListBucket""s3:Get*""s3:List*"] "Effect":"Allow"} {"Resource":"*""Action":["s3:List*""ec2:Describe*""ec2:Attach NetworkInterface" "ec2:AttachVolume""ec2:CreateTags""ec2:CreateVolume""ec2:RunI nstances" "ec2:StartInstances"]"Effect":"Allow"}]} To add functions later you can use the AWS Management Console to modify the IAM role Amazon Glacier Amazon Glacier is an extremely low cost service that provides secure and durable storage for data archiving and backup53 Amazon Glacier is optimized for data that is infrequently accessed and provides multiple options like expedited standard and bulk methods for data retrieval With standard and bulk retrievals data is available in 3 5 hours or 5 12 hours respectively ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 11 However with expedited retrieval Amazon Glacier provides you with an option to retrieve data in 3 5 minutes which can be ideal for occasional urgen t requests With Amazon Glacier you can reliably store large or small amounts of data for as little as $001 per gigabyte per month a significant savings compared to on premises solutions You can use lifecycle policies as explained in the Amazon S3 Developer Guide to push SAP HANA backups to Amazon Glacier for long term archiv ing54 Backup Destination The primary difference between backing up SAP systems on AWS compared with traditional on premises infrastructure is the backup destination Tape is the typical backup destination used with on premises infrastructure On AWS backups are stored in Amazon S3 Amazon S3 has many benefits over tape including the ability to automatically store b ackups “offsite” from the source system since data in Amazon S3 is replicated across multiple facilities within the AWS R egion SAP HANA systems provisioned using the SAP HANA Quick Start reference deploy ment are configured with a set of EBS volumes to be used as an initial local backup destination HANA backups are first stored on these local EBS volumes and then copied to Amazon S3 for long term storage You can use SAP HANA S tudio SQL commands or the DBA Cockpit to start or schedule SAP HANA d ata backups L og backups are written automatically unless disabled The /backup file system is configured as part of the deployment process Figure 3 : SAP HANA file system l ayout ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 12 The SAP HANA globalini configuration file has been customized by the SAP HANA Quick Start reference deployment process as follows : database backups go directly to /backup/data/<SID> while automatic log archival files go to /backup/log/<SID> [persistence] basepath_shared = no 
savepoint_intervals = 300 basepath_datavolumes = /hana/data/<SID> basepath_logvolumes = /hana/log/<SID> basepath_databackup = /backup/data/<SID> basepath_logbackup = /backup/log/<SID> Some third party backup tools like Commvault NetBackup and TSM are integrated with Amazon S3 capabilities and can be used to trigger and save SAP HANA backups directly into Amazon S3 without needing to store th e backups on EBS volumes first AWS Command Line I nterface The AWS CLI which is a unified tool to manage AWS services is instal led as part of the base image55 Using various commands you can control multiple AWS services from the command line directly and aut omate t hem through scripts Access to your S3 bucket is available through the IAM role assigned to the instance (discussed earlier ) Using the AWS CLI commands for A mazon S3 you can list the contents of the previously created bucket back up files and restore files as explained in the AWS CLI documentation56 imdbmaster:/backup # aws s3 ls region=us east1 s3://node2 hanas3bucket gcynh5v2nqs3 Bucket: node2 hanas3bucket gcynh5v2nqs3 Prefix: LastWriteTime Length Name ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 13 Backup Example Here are the steps you might take for a typical backup task: 1 In the SAP HANA Backup E ditor choose Open Backup Wizard You can also open the B ackup Wizard by r ightclicking the system that you want to back up and choo sing Back Up a Select destination type File This will back up the database to files in the specified file system b Specify the backup destination ( /backup/data/<SID>) and the backup prefix Figure 4 : SAP HANA backup example c Choose Next and then Finish A confirmation message will appear when the backup is complete d Verify that the backup files are available at the OS level The next step is to push or synchronize the backup files from the /backup file system to Amazon S3 by using the aws s3 sync command57 imdbmaster:/ # aws s3 sync backup s3://node2 hanas3bucket gcynh5v2nqs3 region=us east1 ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 14 2 Use the AWS Management Console to v erify that the files have been pushed to Amazon S3 You can also use the aws s3 ls comma nd shown previously in the AWS Command Line Interface section 58 Figure 5 : Amazon S3 bucket contents after backup Tip: The aws s3 sync command will only upload new files that don’t exist in Amazon S3 Use a periodic ally scheduled cron job to sync and then delete files that have been uploaded See SAP Note 1651055 for scheduling periodic backup jobs in Linux and extend the supplied scripts with aws s3 sync commands59 Scheduling and Executing Backups Remotely The Amazon EC2 System s Manager Run Command along with Amazon CloudWatch Events can be leveraged to schedule backups for your HANA SAP system remotely with the need to log in to the EC2 instances You can also leverage cron or any other instance level scheduling mechanism The Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances A managed instance is any EC2 instance or on premises machine in your hybrid environment that has been configured for Systems Manager The Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 15 scale You can use the Run Command from the Amazon EC2 console the AWS CLI Windows PowerShell or the AWS SDKs Systems Manager 
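Building on the earlier note about using Amazon S3 lifecycle policies to push SAP HANA backups to Amazon Glacier for long-term archiving, the following is a minimal sketch of such a rule applied with the AWS CLI. The bucket name, key prefix, and transition and expiration periods are placeholder assumptions to adapt to your retention requirements.

# lifecycle.json - transition backup objects to Glacier after 30 days and expire them after 365 days
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "ArchiveHanaBackups",
      "Filter": { "Prefix": "bkps/" },
      "Status": "Enabled",
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket <your-s3-bucket-name> \
  --lifecycle-configuration file://lifecycle.json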
Systems Manager Prerequisites

Systems Manager has the following prerequisites.

Supported operating system (Linux): Instances must run a supported version of Linux.

64-bit and 32-bit systems:
• Amazon Linux 2014.09, 2014.03, or later
• Ubuntu Server 16.04 LTS, 14.04 LTS, or 12.04 LTS
• Red Hat Enterprise Linux (RHEL) 6.5 or later
• CentOS 6.3 or later

64-bit systems only:
• Amazon Linux 2015.09, 2015.03, or later
• Red Hat Enterprise Linux (RHEL) 7.x or later
• CentOS 7.1 or later
• SUSE Linux Enterprise Server (SLES) 12 or higher

Roles for Systems Manager: Systems Manager requires an IAM role for instances that will process commands and a separate role for users executing commands. Both roles require permission policies that enable them to communicate with the Systems Manager API. You can choose to use Systems Manager managed policies, or you can create your own roles and specify permissions. For more information, see Configuring Security Roles for Systems Manager.60 If you are configuring on-premises servers or virtual machines (VMs) that you want to configure using Systems Manager, you must also configure an IAM service role. For more information, see Create an IAM Service Role.61

SSM Agent (EC2 Linux instances): SSM Agent processes Systems Manager requests and configures your machine as specified in the request. You must download and install SSM Agent on your EC2 Linux instances. For more information, see Installing SSM Agent on Linux.

To schedule remote backups, here are the high-level steps:

1. Install and configure the Systems Manager agent on the EC2 instance. For detailed installation steps, see http://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html#sysman-install-ssm-agent

2. Provide SSM access to the EC2 instance role that is assigned to the SAP HANA instance. For detailed information on how to assign SSM access to a role, see http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-access.html

3. Create an SAP HANA backup script. A sample script is shown below. You can use this as a starting point and then modify it to meet your requirements.

#!/bin/sh
set -x
S3Bucket_Name=<<Name of the S3 bucket where backup files will be copied>>
TIMESTAMP=$(date +\%F\_%H\%M)
exec 1>/backup/data/${SAPSYSTEMNAME}/${TIMESTAMP}_backup_log.out 2>&1
echo "Starting to take backup of the HANA database and upload the backup files to S3"
echo "Backup timestamp for $SAPSYSTEMNAME is $TIMESTAMP"
BACKUP_PREFIX=${SAPSYSTEMNAME}_${TIMESTAMP}
echo $BACKUP_PREFIX
# source the HANA environment
source $DIR_INSTANCE/hdbenv.sh
# execute the backup using the hdbuserstore key named BACKUP
hdbsql -U BACKUP "backup data using file ('$BACKUP_PREFIX')"
echo "HANA backup is completed"
echo "Continue with copying the backup files into S3"
echo $BACKUP_PREFIX
sudo -u root /usr/local/bin/aws s3 cp --recursive /backup/data/${SAPSYSTEMNAME}/ s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}/data/ --exclude "*" --include "${BACKUP_PREFIX}*"
echo "Copying HANA database log backups into S3"
sudo -u root /usr/local/bin/aws s3 sync /backup/log/${SAPSYSTEMNAME}/ s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}/log/ --exclude "*" --include "log_backup*"
sudo -u root /usr/local/bin/aws s3 cp /backup/data/${SAPSYSTEMNAME}/${TIMESTAMP}_backup_log.out s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}

Note: This script assumes that hdbuserstore has a key named BACKUP.
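The BACKUP key referenced in the note above can be created ahead of time with the SAP HANA secure user store. The following is an illustrative sketch only: the host, SQL port (30015 here, assuming instance number 00), and backup user are assumptions that depend on your installation, and the commands must be run as the <sid>adm user.

# Create the secure user store key referenced by the backup script (values are examples)
hdbuserstore SET BACKUP localhost:30015 BACKUP_OPERATOR <password>

# Verify the key; the password is not displayed
hdbuserstore LIST BACKUP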
4. At this point you can test a one-time backup by executing an SSM command directly:

aws ssm send-command \
  --instance-ids <<HANA Master Instance ID>> \
  --document-name "AWS-RunShellScript" \
  --parameters commands="sudo -u <HANA_SID>adm TIMESTAMP=$(date +\%F\_%H\%M) SAPSYSTEMNAME=<HANA_SID> DIR_INSTANCE=/hana/shared/${SAPSYSTEMNAME}/HDB00 -i /usr/sap/HDB/HDB00/hana_backup.sh"

Note: For this command to execute successfully, you will have to enable <sid>adm login using sudo.

5. Using CloudWatch Events, you can schedule backups remotely at any desired frequency. Navigate to the CloudWatch Events page and create a rule.

Figure 6: Amazon CloudWatch event rule creation

When configuring the rule:
• Choose Schedule
• Select SSM Run Command as the Target
• Select AWS-RunShellScript (Linux) as the Document type
• Choose InstanceIds or Tags as Target Keys
• Choose Constant under Configure Parameters and type the run command

Restoring SAP HANA Backups and Snapshots

Restoring SAP Backups

To restore your SAP HANA database from a backup, perform the following steps:

1. If the backup files are not already available in the /backup file system but are in Amazon S3, restore the files from Amazon S3 by using the aws s3 cp command.62 This command has the following syntax:

aws --region <region> s3 cp <s3-bucket/path> <local-path> --recursive --include "<backup-prefix>*"

For example:

imdbmaster:/backup/data/YYZ # aws --region us-east-1 s3 cp s3://node2-hanas3bucket-gcynh5v2nqs3/data/YYZ . --recursive --include "COMPLETE*"

2. Recover the SAP HANA database by using the Recovery Wizard as outlined in the SAP HANA Administration Guide.63 Specify File as the Destination Type and enter the correct Backup Prefix.

Figure 7: Restore example

3. When the recovery is complete, you can resume normal operations and clean up backup files from the /backup/<SID>/* directories.

Restoring EBS/AMI Snapshots

To restore EBS snapshots, perform the following steps:

1. Create a new volume from the snapshot:

aws ec2 create-volume --region us-west-2 --availability-zone us-west-2a --snapshot-id snap-1234abc123a12345a --volume-type gp2

2. Attach the newly created volume to your EC2 host:

aws ec2 attach-volume --region us-west-2 --volume-id vol-4567c123e45678dd9 --instance-id i-03add123456789012 --device /dev/sdf

3. Mount the logical volume associated with SAP HANA data on the host:

mount /dev/sdf /hana/data

4. Start your SAP HANA instance.

Note: For large, mission-critical systems, we highly recommend that you execute the volume initialization command on the database data and log volumes after the AMI restore but before starting the database. Executing the volume initialization command will help you avoid extensive wait times before the database is available. Here is the sample fio command that you can use:

sudo fio --filename=/dev/xvdf --rw=read --bs=128K --iodepth=32 --ioengine=libaio --direct=1 --name=volume-initialize

For more information about initializing Amazon EBS volumes, see the AWS documentation.64

Restoring AMI Snapshots

You can restore your SAP HANA AMI snapshots through the AWS Management Console. On the EC2 Dashboard, select AMIs in the left-hand navigation. Choose the AMI that you want to restore, expand Actions, and select Launch.

Figure 8: Restore AMI snapshot

Networking

SAP HANA components communicate over the following logical network zones:
• Client zone – to communicate
with different clients such as SQL clients SAP Application Server SAP HANA Extended Application Services ( XS) SAP HAN A Studio etc • Internal zone – t o communicate with hosts in a distributed SAP HANA system as well as for SAP HSR • Storage zone – t o persist SAP HANA data in the storage infrastructure for resumption after start or recovery after failure Separating network zones for SAP HANA is considered both an AWS and an SAP best practice because it enables you to isolate the traffic required for each communication channe l ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 22 In a traditional bare metal setup these different network zones are set up by having multiple physical network cards or virtual LANs ( VLANs ) Conversely on the AWS Cloud this network isolation can be achieved simply through the use of elastic networ k inter faces (ENI s) combined with s ecurity groups Amazon EBS optimized instances can also be used for further i solation for storage I/O EBSOptimized Instances Many newer Amazon EC2 instance types such as the X1 use an optimized configuration stack and provide additional dedicated capacity for Amazon EBS I/O These are called EBS optimized instances 65 This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance Figure 9 : EBS optimized instances Elastic Network Interfaces (ENI s) An ENI is a virtual network interface that you can attach to an EC2 instance in an Amazon Virtual Private Cloud (Amazon VPC) With ENI s you can create different logical network s by specifying multiple private IP addresses for your instances For more information about ENIs see the AWS documentation 66 In the following example two ENIs are attached to each SAP HANA node as well as in separate communication channel for storage ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 23 Figure 10 : ENIs a ttached to SAP HANA nodes Security Groups A security group acts as a virtual firewall that controls the traffic for one or more instances When you launch an instance you associate one or more security groups with the instance You add rules to each security group that allow traffic to or from its associated instances Y ou can modify the rules for a security group at any time The new rules are automatically applied to all instances that are associated with the security group To learn more about security groups see the AWS documentation 67 In the following example EN I1 of each instance shown is a member of the same security group that controls inbound and outbound network traffic for the client network ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 24 Figure 11: ENIs and se curity groups Network Configuration for SAP H ANA System Replication (HSR) You can configure a dditional ENIs and security groups to further isolate inter node communication as well as SAP HSR network traffic In Figure 10 ENI 2 is dedicated for inter node communication with its own security group (not shown) to secure client traffic from inter node communication ENI 3 is configured to secure SAP HSR traffic to another A vailability Zone within the same Region In this exam ple the target SAP HANA cluster would be configured with additional ENIs similar to the source environment and ENI 3 would share a common security group ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 25 Figure 12 : Further isolation with a dditional ENIs and 
s ecurity groups Configuration Steps for L ogical Network Separation To configure your logical network for SAP HANA follow these steps : 1 Create new security groups to allow for isolation of client internal communication and if applicable SAP HSR network traffic See Ports and Connections in the SAP HANA documentation to learn about the list of ports used for different network zones68 For more information about how to create and configure security groups see the AWS documentation 69 2 Use Secure Shell ( SSH ) to connect to your EC2 instance at the OS level Follow the steps described in Appendix A to configure the OS to properly recognize and name the Ethernet devices associated with the new elastic network interfaces (ENIs ) you will be creating 3 Create new ENI s from the AWS M anage ment Console or through the AWS CLI Make sure that the new ENIs are created in the subnet where your SAP HANA instance is deployed As you create each new ENI associate it with the appropriate security group you created in step 1 For more information ab out how to create a new ENI see the AWS documentation 70 4 Attach the ENIs you created to your EC2 instance where SAP HANA is installed For more information about how to attach an ENI to an EC2 instance see the AWS documentation71 ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 26 5 Create virtual host names and map them to the IP addresses associated with client internal and replication network interfaces Ensure that host nam etoIPaddress resolution is working by creating entries in all applicable host files or in the Domain Name System (DNS) When complete test that the virtual host names can be resolved from all SAP HANA nodes clients etc 6 For scale out deployments configure SAP HANA i nter service communication to let SAP HANA communicate over the internal network To learn more about this step s ee Configuring SAP HANA Inter Service Communication in the SAP HANA documentation72 7 Configure SAP HANA hostname resolution to let SAP HANA communicate over the replication network for SAP HSR To learn more about this step s ee Configuring Hostname Resolution for SAP HANA System Replication in the SAP HANA documentation 73 SAP Support Access In some situations it may be necessary to allow an SAP support engineer to access your SAP HANA s ystems on AWS The following information serves only as a supplement to the information contained in the “Getting Support” section of the SAP HANA Administration Guide 74 A few steps are required to configure proper connectivity to SAP These steps differ depending on whether you w ant to use an existing remote network connection to SAP or you are setting up a new connection directly with SAP from systems on AWS Support Channel Setup with SAProuter on AWS When setting up a direct support connection to SAP from AWS consider the following steps: 1 For the SAProuter instance c reate and configure a specific SAProuter security group which only allows the required inbound and outbound access to the SAP s upport network This should be limited to a specific IP address that SAP gives you to connect to along with TCP port 3299 See the Amazon EC2 security group documentation for additional details about creating and configuring s ecurity groups75 ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 27 2 Launch t he instance that the SAProuter software will be installed on into a public subnet of the Amazon VPC and assign it an Elastic IP a ddress (EIP) 3 Install the SAProuter software and 
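As an illustration of steps 1, 3, and 4 of the logical network separation procedure described earlier in this section, the following AWS CLI sketch creates a security group and an additional ENI and attaches it to the SAP HANA instance. The VPC, subnet, CIDR range, private IP address, and resource IDs are all placeholders, and the port range shown stands in for whichever internal-zone ports you actually need to allow.

# Step 1: security group for the internal (inter-node) zone; the port range is illustrative
aws ec2 create-security-group \
  --group-name saphana-internal-zone \
  --description "SAP HANA internal zone" \
  --vpc-id vpc-0123456789abcdef0

# Use the group ID returned by the previous command
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 30000-30010 \
  --cidr 10.0.1.0/24

# Steps 3 and 4: create an ENI in the SAP HANA subnet and attach it as the second interface
aws ec2 create-network-interface \
  --subnet-id subnet-0123456789abcdef0 \
  --groups sg-0123456789abcdef0 \
  --private-ip-address 10.0.1.50 \
  --description "SAP HANA internal zone ENI"

aws ec2 attach-network-interface \
  --network-interface-id eni-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device-index 1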
create a saprouttab file that allows access from SAP to your SAP HANA system on AWS 4 Set up the connection with SAP For your internet connection use Secure Network Communication (SNC) For more information see the SAP Remote Support – Help page76 5 Modify the ex isting SAP HANA security groups to trust the new SAProuter security group you have created Tip: For added security shut down the EC2 instance that hosts the SAProuter service when it is not needed for support purposes Figure 13 : Support connectivity with SAProuter on AWS ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 28 Support Channel Setup with SAProuter OnPremises In many cases you may already have a support connection configured between your data center and SAP This can easily be extended to support SAP systems on AWS This scenario assumes that connectivity between your data center and AWS has already been established either by way of a secure VPN tunnel over the internet or by using AWS Direct Connect 77 You can extend this connectivity as follows : 1 Ensure that the proper saprouttab entries exist to allow access from SAP to resources in the Amazon VPC 2 Modify the SAP HANA s ecurity groups to allow access from the on premises SAProuter IP address 3 Ensure that the proper firewall ports are o pen on your gateway to allow traffic to pass over TCP port 3299 Figure 14 : Support connectivity with SAProuter onp remises ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 29 Security This section discusses additional security topics you may want to consider that are not covered in the SAP HANA Quick Start reference deployment guide Here are additional AWS security resources to help you achieve the level of security you require for your SAP HANA environment on AWS: • AWS Cloud Security C enter78 • CIS AWS Foundation whitepaper79 • AWS Cloud Security whitepaper80 • AWS Cloud Security Best Practices whitepaper81 OS Hardening You may want to lock down the OS configurat ion further for example to avoid providing a DB admin istrator with root credentials when logging into an instance You can also refer to the followin g SAP notes: • 1730999 : Configuration changes in HANA appliance82 • 1731000 : Unrecommended configuration changes83 Disabling HANA Services HANA services such as HANA XS are optional and should be deactivated i f they are not needed For instructions see SAP N ote 1697613 : Remove XS Engine out of SAP HANA d atabase 84 In case of service deactivation you should also remove the TCP ports from the SAP HANA AW S security groups for complete security API C all Logging AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you85 The recorded information includes the identity of the API caller the time of the API call the source IP address of the API caller the request parameters and the response elements returned by the AWS service ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 30 With CloudTrail you can get a history of A WS API calls for your account including API calls made via the AWS Management Console AWS SDKs command line tools and higher level AWS services (such as CloudFormation) The AWS API call history produced by CloudTrail enables security analysis resourc e change tracking and compliance auditing Notifications on Access You can use Amazon Simple Notification Service ( Amazon SNS) or third party applications to set up n otifications on SSH l ogin to your email addre ss or mobile phone86 
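To sketch the API call logging and access notification ideas above, the following AWS CLI commands create a CloudTrail trail and an SNS topic with an email subscription. The trail name, bucket, topic name, account ID, and email address are placeholders, and the S3 bucket must already exist with a bucket policy that allows CloudTrail to write to it; a login-notification script or third-party tool would then publish to the topic on SSH access.

# Record AWS API activity for the account in an existing, appropriately configured S3 bucket
aws cloudtrail create-trail --name sap-audit-trail --s3-bucket-name <your-cloudtrail-bucket>
aws cloudtrail start-logging --name sap-audit-trail

# SNS topic and email subscription for access notifications
aws sns create-topic --name sap-access-alerts
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:111122223333:sap-access-alerts \
  --protocol email \
  --notification-endpoint basis-team@example.com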
High Availability and Disaster Recovery For details and best practices for h igh availability and disaster recovery of SAP HANA systems running on AWS see High Availability and Disaster Recovery Options for SAP HANA on AWS 87 Conclusion This whitepaper discusse s best practices for the operation of SAP HANA systems on the AWS cloud The best practices provided in this paper will help you efficiently manage and achieve maximum benefit s from running your SAP HANA systems on the AWS C loud For feedback or questions please contact us at saponaws@amazoncom Contributors The following individuals and organizations contributed to this document: • Rahul Kabra Partner Solutions Architect AWS • Somckit Khemmanivanh Partner Solution s Architect AWS • Naresh Pasumarthy Partner Solutions Architect AWS ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 31 Appendix A – Configuring Linux to Recognize Ethernet Devices for M ultiple ENIs Follow these steps to configure the Linux operating system to recognize and name the Ethernet devices associated with the new elastic network interfaces (ENI s) created for logical network separation which was discussed earlier in this paper 1 Use SSH to connect to your SAP HANA host as ec2user and sudo to root 2 Remove the existing udev rule ; for example : hanamaster:# rm f /etc/udev/rulesd/70 persistent netrules Create a new udev rule that writes rules based on MAC address rather than other device attributes This will ensur e that on reboot eth0 is still eth0 eth1 is eth1 and so on For example: hanamaster:# cat <<EOF > /etc/udev/rulesd/75 persistent net generatorrules # Copyright (C) 2012 Amazoncom Inc or its affiliates # All Rights Reserved # # Licensed under the Apache License Version 20 (the "License") # You may not use this file except in compliance with the License # A copy of the License is located at # # http://awsamazoncom/apache20/ # # or in the "license" file accompanying this file This file is # distributed on an "AS IS" BASIS WITHOUT WARRANTIES OR CONDITIONS # OF ANY KIND either express or implied See the License for the ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 32 # specific language governing permissions and limitations under the # License # these rules generate rules for persistent network device naming SUBSYSTEM!="net" GOTO="persistent_net_generator_end" KERNEL!="eth*" GOTO="persistent_net_generator_end" ACTION!="add" GOTO="persistent_net_generator_end" NAME=="?*" GOTO="persistent_net_generator_end" # do not create rule for eth0 ENV{INTERFACE}=="eth0" GOTO="persistent_net_generator_end" # read MAC address ENV{MATCHADDR}="\ $attr{address}" # do not use empty address ENV{MATCHADDR}=="00:00:00:00:00:00" GOTO="persistent_net_generator_end" # discard any interface name not generated by our rules ENV{INTERFACE_NAME}=="?*" ENV{INTERFACE_NAME}="" # default comment ENV{COMMENT}="elastic network interface" # write rule IMPORT{program}="write_net_rules" # rename interface if needed ENV{INTERFACE_NEW}=="?*" NAME="\ $env{INTERFACE_NEW}" LABEL="persistent_net_generator_end" EOF 3 Ensure proper interface properties For example: hanamaster:# cd /etc/sysconfig/network/ hanamaster:# cat <<EOF > /etc/sysconfig/network/ifcfg ethN BOOTPROTO='dhcp4' MTU="9000" REMOTE_IPADDR='' STARTMODE='onboot' LINK_REQUIRED=no LNIK_READY_WAIT=5 EOF ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 33 4 Ensure that you can accommodate up to seven more Ethernet devices/ENIs and restart wicked For example: 
hanamaster:# for dev in eth{17} ; do ln s f ifcfg ethN /etc/sysconfig/network/ifcfg ${dev} done hanamaster:# systemctl restart wicked 5 Create and attach a new ENI to the instance 6 Reboot 7 After reboot modify /etc/iproute2/rt_tables Important: Repeat the following for each ENI that you attach to your instance For example: hanamaster:# cd /etc/iproute2 hanamaster:/etc/iproute2 # echo "2 eth1_rt" >> rt_tables hanamaster:/etc/iproute2 # ip route add default via 172161122 dev eth1 table eth1_rt hanamaster:/etc/iproute2 # ip rule 0: from all lookup local 32766: from all lookup main 32767: from all lookup default hanamaster:/etc/iproute2 # ip rule add from <ENI IP Address> lookup eth1_rt prio 1000 hanamaster:/etc/iproute2 # ip rule 0: from all lookup local 1000: from <ENI IP address> lookup eth1_rt 32766: from all lookup main 32767: from all lookup default Notes ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 34 1 http://docsawsamazoncom/quickstart/latest/sap hana/ or https://s3amazonawscom/quickstart reference/sap/hana/latest/doc/SAP+HANA+Quick+Startpdf 2 http://d0awsstaticcom/enterprise marketing/SAP/SAP HANA onAWS Manual Setup Guidepdf 3 https://helpsapcom/hana/SAP_HANA_Administration_Guide_enpdf 4 http://servicesapcom/instguides 5 http://servicesapcom/notes 6 http://docsawsamazoncom/gettingstarted/latest/awsgsg intro/introhtml 7 http://docsawsamazoncom/AWSEC2/latest/UserGuide/conceptshtml 8 http://awsamazoncom/sap/whitepapers/ 9 http: //d0awsstaticcom/enterprise marketing/SAP/SAP_on_AWS_Implementation_Guidepdf 10 http://d0awsstaticcom/enterprise marketing/SAP/SAP_on_AWS_High _Availability_Guide_v32pdf 11 http://d0awsstaticcom/enterprise marketing/SAP/sap onawsbackup and recovery guide v22pdf 12 https://awsamazoncom/answers/infrastructure management/ec2 scheduler/ 13 https://awsamazoncom/blogs/security/how toautomatically tagamazon ec2resources inresponse toapievents/ 14 http://docsawsamazoncom/AWSEC2/latest/UserGuide/Using_Tagshtml 15 https://awsamazoncom/blogs/aws/new awsresource tagging api/ 16 https://awsamazoncom/cloudwatch/ 17 https://awsamazoncom/marketplace 18 http://docsawsamazoncom/AWSCloudFormation/lat est/UserGuide/Gettin gStartedhtml 19 http://docsawsamazoncom/cli/latest/userguide/cli chap welcomehtml 20 http://docsawsamazoncom/lambda/latest/dg/getting startedhtml 21 https://awsamazoncom/ec2/systems manager/patch manager/ ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 35 22 https://helpsapcom/viewer/2c1988d620e04368aa4103bf26f17727/2000/e nUS/9731208b85fa4c2fa68c529404ffa75ahtml 23 http://docsawsamazoncom/AWSEC2/latest/UserGuide/AMIshtml 24 http://docsawsamazoncom/cli/latest/userguide/cli ec2launchhtml 25 https://awsamazoncom/cloudformation/ 26 https://helpsapcom/viewer/6b944 45c94ae495c83a19646e7c3fd56/2000/e nUS/38ad53e538ad41db9d12d22a6c8f2503html 27 https://helpsapcom/viewer/6b94445c94ae495c83 a19646e7c3fd56/2000/e nUS/c622d640e47e4c0ebca8cbe74ff9550ahtml 28 https://helpsapcom/viewer/6b94445c94ae495c83a19646e7c3fd5 6/2000/e nUS/ea70213a0e114ec29724e4a10b6bb176html 29 https://launchpadsupportsapcom/#/notes/1984882/E 30 https://launchpadsupportsapcom/#/notes/1913302/E 31 https://wwwsusecom/communities/blog/upgrading running demand instances public cloud/ 32 https://awsamazoncom/partners/redhat/faqs/ 33 https://awsamazoncom/about aws/whats new/2016/12/amazon ec2 systems manager now offers patch management/ 34 https://awsamazoncom/ec2/systems manager/patch manager/ 35 http://docsawsamazoncom/systems 
manager/latest/userguide/systems manager patchhtml 36 https://docsawsamaz oncom/quickstart/latest/sap hana/welcomehtml 37 https://helpsapcom/doc/6b94445c94ae495c83a19646e7c3fd56/2001/en US/c622d640e47e4c0ebca8cbe74ff9550ahtml 38 https://launchpadsupportsapcom/#/notes/1984882/E 39 http://d0awsstaticcom/enterprise marketing/SAP/sap onawsbackup and recovery guide v22pdf 40 http://servicesapcom/sap/support/notes/1642148 ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 36 41 http://servicesapcom/sap/support/notes/1821207 42 http://servicesapcom/sap/support/notes/1869119 43 http://servicesapcom/sap/support/notes/1873247 44 http://servicesapcom/sap/support/notes/1651055 45 http://servicesapcom/sap/support/notes/2484177 46 http://docsawsamazoncom/AWSEC2/latest/UserGuide/AMIshtml 47 http://docsawsamazoncom/AWSEC2/latest/UserGuide/creating anami ebshtml 48 http://docsawsamazoncom/AWSEC2/latest/UserGuide/CopyingAMIshtml 49 https://servicesapcom/notes/1703435 50 http://awsamazoncom/s3 / 51 http://awsamazoncom/documentation/s3/ 52 http://awsamazoncom/iam/ 53 http://awsamazoncom/glacier/ 54 http://docsawsamazoncom/AmazonS3/latest/dev/object archivalhtml 55 http://awsamazoncom/cli/ 56 http://docsawsamazoncom/cli/latest/reference/s3/ 57 http://docsawsamazoncom/cli/latest/reference/s3/synchtml 58 http://docsawsamazoncom/cli/latest/reference/s3/lshtml 59 http://se rvicesapcom/sap/support/notes/1651055 60 http://docsawsamazoncom/systems manager/latest/userguide/systems manager accesshtml 61 http://docsawsamazoncom/systems manager/latest/userguide/systems manager managedinstanceshtml#sysman service role 62 http://docsawsamazoncom/cli/latest/reference/s3/cphtml 63 https://helpsapcom/hana/SAP_HANA_Adminis tration_Guide_enpdf 64 http://docsawsamazoncom/AWSEC2/latest/UserGuide/ebs initializehtml ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 37 65 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSOptimizedhtm l 66 https://docsawsamazoncom/AWSEC2/latest/UserGuide/using enihtml 67 http://docsawsamazoncom/AmazonVPC/latest/UserGuide/VPC_SecurityG roupshtml 68 https://helpsapcom/saphelp_hanaplatform/helpdata/en/a9/326f20b39342 a7bc3d08acb8ffc68a/framesethtm 69 http://docsawsamazoncom/AWSEC2/latest/UserGuide/using network securityhtml#creating security group 70 http://docsawsamazoncom/AWSEC2/latest/UserGuide/using enihtml#create_eni 71 http://docsawsamazoncom/AWSEC2/latest/UserGuide/using enihtml#attach_eni_running_stopped 72 https://helpsapcom/saphelp_hanaplatform/helpdata/en/bb/cb76c7fa7f45b 4adb99e60ad6c85ba/framesethtm 73 http://helpsapcom/saphelp_hanaplatform/helpdata/en/9a/cd6482a5154b7 e95ce72e83b04f94d/framesethtm 74 https://helpsapcom/hana/SAP_HANA_Administration_Guide_enpdf 75 http://docsawsamazoncom/AWSEC2/latest/UserGuide/using network securityhtml 76 https://supportsapcom/remote support/helphtml 77 http://awsamazoncom/directconnect/ 78 http://awsamazoncom/security/ 79 https://d0awsstaticcom/whitepapers/compliance/AWS_CIS_Foundations_ Benchmarkpdf 80 http://d0awsstaticcom/whitepapers/Security/AWS%20Security%20Whitep aperpdf ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 38 81 http://d0awsstaticcom/whitepapers/aws security best practicespdf 82 https://servicesapcom/sap/support/notes/1730999 83 https://servicesapcom/sap/support/notes/1731000 84 https://servicesapcom/sap/support/notes/1697613 85 http s://awsamazoncom/cloudtrail/ 86 https://awsamazoncom/sns/ 87 http://d0awsstaticcom/enterprise marketing/SAP/ saphana 
on-aws-high-availability-disaster-recovery-guide.pdf
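As a companion to the per-ENI routing steps in Appendix A, the following Bash sketch shows one way to script step 7 so it can be repeated for each additional interface. It is a minimal illustration rather than part of the original guide: the interface name, routing-table ID, and the GATEWAY_IP and ENI_IP values are placeholder assumptions that you would replace with the values for your own subnet and elastic network interface.

#!/usr/bin/env bash
# Sketch: create a dedicated routing table and a source-based rule for one ENI.
# Assumes the udev and ifcfg-ethN setup from Appendix A is already in place.
# All values below (interface, table ID, addresses) are placeholders.
set -euo pipefail

DEV="eth1"                 # secondary interface backed by the extra ENI
TABLE_ID="2"               # unused table ID in /etc/iproute2/rt_tables
TABLE_NAME="${DEV}_rt"
GATEWAY_IP="172.16.1.1"    # default gateway of the ENI's subnet (placeholder)
ENI_IP="172.16.1.50"       # private IP address of the ENI (placeholder)

# Register the routing table once.
grep -q "${TABLE_NAME}" /etc/iproute2/rt_tables || \
  echo "${TABLE_ID} ${TABLE_NAME}" >> /etc/iproute2/rt_tables

# Default route for the table, plus a rule that sends traffic sourced
# from the ENI's address through it.
ip route replace default via "${GATEWAY_IP}" dev "${DEV}" table "${TABLE_NAME}"
ip rule add from "${ENI_IP}" lookup "${TABLE_NAME}" prio 1000

# Verify the result.
ip rule show
ip route show table "${TABLE_NAME}"

Because ip rule and ip route entries are not persistent across reboots, you would typically re-run a script like this at boot (for example, from a systemd unit or an interface post-up hook), mirroring the manual procedure described above.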
|
General
|
consultant
|
Best Practices
|
Secure_Content_Delivery_with_CloudFront
|
Secure Content Delivery with Amazon CloudFront Improve the Security and Performance of Your Applications While Lowering Your Content Delivery Costs November 2016 This paper has been archived For the latest technical content about secure content delivery with Amazon CloudFront see https://docsawsamazoncom/whitepapers/latest/secure contentdeliveryamazoncloudfront/securecontentdelivery withamazoncloudfronthtml Archived © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Enabling Easy SSL/TLS Adoption 2 Using Custom SSL Certificates with SNI Custom SSL 3 Meeting Requirements for PCI Compliance and Industry Standard Apple iOS ATS 4 Improving Performance of SSL/TLS Connections 5 Terminating SSL Connections at the Edge 6 Supporting Session Tickets and OCSP Stapling 6 Balancing Security and Performance with Half Bridge and Full Bridge TLS Termination 7 Ensuring Asset Availability 8 Making SSL/TLS Adoption Economical 8 Conclusion 9 Further Reading 9 Notes 11 Archived Abstract As companies respond to cybercrime compliance requirements and a commitment to securing customer data their adoption of Secure Sockets Layer/Transport Layer Security (SSL/TLS) protocols increases This whitepaper explains how Amazon CloudFront improves the security and performance of your APIs and applications while helping you lower your content delivery costs It focuses on three specific benefits of using CloudFront: easy SSL adoption with AWS Certificate Manager (ACM) and Server Name Indication (SNI) Custom SSL support improved SSL performance with SSL termination available at all CloudFront edge locations globally and economical adoption of SSL thanks to free custom SSL certificates with ACM and SNI support at no additional charge ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 1 of 11 Introduction The adoption of Secure Sockets Layer/Transport Layer Security (SSL/TLS) protocols to encrypt Internet traffic has increased in response to more cybercrime compliance requirements (PCI v32) and a commitment to secure customer data A survey of the top 140000 websites revealed that more than 40 percent were secured by SSL 1 As measured by Alexa (an amazoncom company) 32 percent of the top million URLs were encrypted using HTTPS (also called HTTP over TLS HTTP over SSL and HTTP Secure) in September 20162 an increase of 45 percent from the same month in 2015 Amazon CloudFront is moving in this direction with a rapidly increasing share of global content traffic on CloudFront delivered over SSL/TLS CloudFront integrates with AWS Certificate Manager (ACM) for SSL/TLSlevel support to ensure secure data transmission using the most modern ciphers and handshakes Figure 1 shows 
how this secure content delivery works Figure 1: Secure content delivery with CloudFront and the AWS Certificate Manager SSL/TLS on CloudFront offers these key benefits (summarized in Table 1) : Ease of use Improved performance ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 2 of 11 Lower costs The integration of CloudFront with ACM reduces the time to s et up and deploy SSL/TLS certificates and translates to improved HTTPS availability and performance Finally certificates and encrypted data rates are offered at very low charge These benefits are discussed in detail in the following sections Table 1: Summary of the key benefits of SSL/TLS on CloudFront Ease of Use Improved Performance Lower Costs Integrated with ACM Procurement of new certificate directly from CloudFront console Automatic certificate distribution globally Automatic certificate renewal Revocation management SNI Custom SSL support Support for standards (eg Apple iOS ATS and PCI) SSL management in AWS environment HTTPS capability at all global edge locations SSL/TLS termination close to viewers Latency reduction with Session Tickets and OCSP stapling Free custom SSL/TLS certificate with ACM SNI Custom SSL/TLS at no additional charge No setup fees no hosting fees and no extra charges for the HTTPS bytes transferred Standard (or discounted with a signed contract) CloudFront rates for data transfer and HTTPS requests Enabling Easy SSL/TLS Adoption All browsers have the capability to interact with secured web servers using the SSL/TLS protocol However both browser and server need an SSL certificate to establish a secure connection Support for SSL certificate management requires working with a Certificate Authority (CA) which is a thirdparty that is trusted by both the subject of the certificate (eg the content owner) and the party that relies on the certificate (eg the content viewer) The entire manual process of purchasing uploading and renewing valid certificates through thirdparty CAs can be quite lengthy AWS provides seamless integration between CloudFront and ACM to reduce the creation and deployment time of a new free custom SSL certificate and make certificate management a simpler more automatic process as shown in Figure 2 ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 3 of 11 Custom SSL certificates allow you to deliver secure content using your own domain name (eg www examplecom) Although it typically takes a couple of minutes for a certificate to be issued after receiving approval it could take longer3 Once a certificate is issued or imported into ACM it is immediately available for use via the CloudFront console and automatically propagated to the global network of CloudFront edge locations when it is associated with distributions ACM automatically handles certificate renewal which makes configuring and maintaining SSL/TLS for your secure website or application easier and less error prone than by using a manual process In turn this help s you avoid downtime due to misconfigured revoked or expired certificates ACMprovided certificates are valid for 13 months and renewal starts 60 days prior to expiration If a certificate is compromised it can be revoked and replaced via ACM at no additional charge AWS ensures that private keys are never exported which removes the need to secure and track them Figure 2: CloudFront integration with ACM Using SSL Certificates with SNI Custom SSL You can use your own SSL certificates with CloudFront at no additional 
charge with Server Name Indication (SNI) Custom SSL SNI is an extension of the TLS protocol that provides an efficient way to deliver content over HTTPS using your ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 4 of 11 own domain and SSL certificate SNI identifies the domain without the server having to examine the request body so it can offer the correct certificate during the TLS handshake SNI is supported by most modern browsers including Chrome 60 and later Safari 30 and later Firefox 20 and later and Internet Explorer 7 and later4 (If you need to support older browsers and operating systems you can use the CloudFront dedicated IPbased custom SSL for an additional charge) Meeting Requirements for PCI Compliance and Industry Standard Apple iOS ATS You can leverage the combination of ACM SNI and CloudFront security features to help meet the requirements of many compliance and regulatory standards such as PCI Additionally CloudFront has “out ofthe box” support f or the industry standard Apple iOS App Transport Security (ATS) For more information on CloudFront security capabilities see Table 2 and Table 3 Table 2: Overview of CloudFront security capabilities Vulnerability CloudFront Security Capabilit ies Cryptographic attacks CloudFront frequently reviews the latest security standards and supports only viewer requests using SSL v3 and TLS v10 11 and 12 When available TLS v13 will also be supported CloudFront supports the strongest ciphers (ECDHE RSA AES128 GCM SHA256) and offers them to the clie nt in preferential sequence Export ciphers are not supported Patching Dedicated teams are responsible for monitoring the threat landscape handling security events and patching software Under t he shared security model AWS will take the necessary meas ures to remediate vulnerabilities with methods such as patching deprecation and revocation DDoS attacks CloudFront has extensive mitigation techniques for standard flood type attacks against SSL To thwart SSL renegotiation type attacks CloudFront dis ables renegotiation Table 3 : Amazon CloudFront support of Apple iOS ATS requirements Apple iOS ATS Requirement CloudFront Support TLS/SSL version must be TLS 12 CloudFront supports TLS 12 ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 5 of 11 Apple iOS ATS Requirement CloudFront Support TLS Cipher Suite must be from the following with Perfect Forward Secrecy : CloudFront supports Perfect Forward Secr ecy with the following ciphers: ECDSA Certificates: RSA Certificates: TLS_ECDHE_ECDSA_WITH_AES_ 256_GCM_ SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDH E_ECDSA_WITH_AES_128_GCM_ SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDH E_ECDSA_WITH_AES_256_CBC_SHA384 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 TLS_E CDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDH E_ECDSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES _128_CBC_SHA TLS_E CDHE_ECDSA_WITH_AES_128_CBC_SHA RSA Certificates: TLS_ECDHE_RSA_WITH_AES_256_G CM_SHA384 TLS_EC DHE_RSA_WITH_AES_128_GCM_SHA256 TLS_EC DHE_RSA_WITH_AES_256_CBC_SHA384 TLS_EC DHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA Leaf server certs must be signed with the following : Server certificates signed with the following type of key: Rivest Shamir Adleman (RSA) key with a length of at least 2048 bits Rivest Shamir Adleman (RSA) key with a length of 2048 bits Elliptic Curve Cryptography (ECC) key with a size of at least 256 bits Improving Performance of SSL/TLS 
Connections You may see a degradation in the performance of your API or application when clients connect directly to your origin servers using SSL Setting up an SSL/TLS connection adds up to three round trips between the client and server introducing additional latency in the connection setup Once the connection is established additional CPU resources are required to encrypt the data that is transmitted ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 6 of 11 Terminating SSL Connections at the Edge When you enable SSL with CloudFront all global edge locations are used for handling your SSL traffic Clients terminate SSL connections at a nearby CloudFront edge location thus reducing network latency in setting up an SSL connection In addition moving the SSL termination to CloudFront helps you offload encryption to CloudFront servers that are specifically designed to be highly scalable and performance optimized These factors boost the performance of not only static content but also dynamic content For example Slack improved its performance when it migrated the delivery of its dynamic content to HTTPS with CloudFront The worldwide average response time to slackcom dropped from 488 milliseconds to 199 milliseconds (see Figure 3) A large portion of these performance benefits came from the decreased SSL negotiation time as the worldwide average for SSL connection times decreased from 215 milliseconds to 52 milliseconds Figure 3: Slack improved its performance by delivering its dynamic content via HTTPS with CloudFront Supporting Session Tickets and OCSP Stapling CloudFront further improves the performance of SSL connections with the support of Session Tickets and Online Certificate Status Protocol (OCSP) stapling (see Figure 4) Session Tickets help decrease the time spent restarting or resuming an SSL session CloudFront encrypts SSL session information and ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 7 of 11 stores it in a ticket that the client can use to resume a secure connection instead of repeating the SSL handshake process OCSP stapling improves the time taken for individual SSL handshakes by moving the OSCP check (a call used to obtain the revocation status of an SSL certificate) from the client to a periodic secure check by the CloudFront servers With OCSP stapling the CloudFront engineering team measured up to a 30 percent performance improvement in the initial connection between the client and the server Figure 4: Session Tickets decrease the time spent restarting or resuming an SSL session Balancing Security and Performance with Half Bridge and Full Bridge TLS Termination With CloudFront you can strike a balance between security and performance by choosing between half bridge and full bridge TLS termination (see Figure 5) By defining different cache behaviors in the same distribution you can define which connections to the origin use HTTPS and which use HTTP You can configure objects that need secure connections to the origin to use HTTPS (eg login pages sensitive data) and configure objects that do not need secure connections to use HTTP (eg logos images) Thus everything can be securely transmitted to the client and origin fetches can be optimized to use HTTP to reduce the overall latency of the transaction ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 8 of 11 Figure 5: Balancing security and performance on the same distribution For full secure delivery you can configure CloudFront 
to require HTTPS for communication between viewers and CloudFront and optionally between CloudFront and your origin5 Also you can configure CloudFront to require viewers to interact with your content over an HTTPS connection using the HTTP to HTTPS Redirect feature When you enable HTTP to HTTPS Redirect CloudFront will respond to an HTTP request with a 301 redirect response that requires the viewer to resend the request over HTTPS Ensuring Asset Availability CloudFront puts significant focus on and dedication to maintaining the availability of your assets Availability is calculated based on how often an attempt was made to download a single object and how often the download failed As shown in Table 4 CloudFront SSL availability (as measured from real clients) across multiple regions is consistently high when compared to other top CDNs6 Table 4 : SSL /TLS traffic – availability by geography for July 2016 to August 2016 # CDN United States Europe Japan Korea 1 CloudFront SSL 9914 9935 9935 9922 2 CDN A 9870 9753 9864 9898 3 CDA B 9677 9444 9167 9819 Making SSL/TLS Adoption Economical CloudFront enables you to generate custom SSL/TLS certificates with ACM and support them with SNI at no additional charge These features are offered with ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 9 of 11 no setup fees no hosting fees and no extra charges for the HTTPS bytes transferred You simply pay standard (or discounted with a signed contract) CloudFront rates for data transfer and HTTPS requests For more information see the Amazon C loudFront pricing page 7 For dedicated IP custom SSL there is an additional charge per month This additional charge is associated with dedicating multiple IP v4 addresses (a finite resource) for each SSL certificate at each CloudFront edge location Conclusion You can deliver your secure APIs or applications via SSL/TLS with Amazon CloudFront in an easy way at no additional charge and with improved SSL performance You can create free custom SSL/TLS certificates with AWS ACM in minutes and immediately add them to your CloudFront distributions at no additional charge with automatic SNI support You don’t have to manage certificate renewal because ACM takes care of it automatically and if any certificate is compromised you can revoke it and replace it via ACM You can do all of this while benefiting from improved SSL/TLS performance because of SSL/TLS terminations near your end user and CloudFront support of Session Tickets and OCSP stapling This also applies if you want to deliver dynamic content as CloudFront provides a way to increase performance and security at no additional charge Further Reading There is a wealth of information available in the following whitepapers blog posts user guides presentations and slides to help customers get a deeper understanding of CloudFront ACM and how SSL is used Amazon CloudFront Custom SSL Amazon CloudFront Custom SSL List of browsers supported by SNI Custom SSL AWS Certificate Manager ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 10 of 11 Getting started Managed certificate renewal FAQs Blogs Amazon CloudFront What’s New HTTP and TLS v11 v12 to the origin AWS Certificate Manager – Deploy SSL/TLSBased Apps on AWS Developers Guide Introduction to Amazon CloudFront Using an HTTPS Connection to Access Your Objects Slack Performance Improvement with Amazon CloudFront Video Slides re:Invent Presentations SSL with Amazon Web Services (SEC316) 11/2014 Using Amazon CloudFront 
For Your Websites & Apps STG206 10/2015 Secure Content delivery Using Amazon CloudFront STG205 10/2015 re:Invent Slides Secure Content Delivery Using Amazon CloudFront and AWS WAF ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 11 of 11 Notes 1 https://wwwtrustworthyinternetorg/sslpulse/ 2 http://httparchiveorg/trendsphp#perHttps 3 https://awsamazoncom/certificatemanager/faqs/ 4 https://enwikipediaorg/wiki/Server_Name_Indication 5 http://docsawsamazoncom/AmazonCloudFront/latest/DeveloperGuide/Secu reConnectionshtml#SecureConnectionsHowToRequireCustomProcedure 6 http://wwwcedexiscom/getthedata/country report/?report=secure_object_delivery_response_time 7 https://awsamazoncom/cloudfront/pricing/ Archived
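To complement the ACM integration described in this paper, the following AWS CLI sketch shows one way to request a free public certificate and confirm that it has been issued. It is an illustration only: www.example.com is a placeholder domain, and the commands assume the AWS CLI is configured with permissions for ACM. Certificates intended for use with CloudFront must be requested in (or imported into) the US East (N. Virginia) Region, us-east-1.

# Request a public certificate for a placeholder domain, using DNS validation.
aws acm request-certificate \
    --domain-name www.example.com \
    --validation-method DNS \
    --region us-east-1

# After creating the CNAME record that ACM returns for validation,
# confirm that the certificate has been issued.
aws acm list-certificates \
    --certificate-statuses ISSUED \
    --region us-east-1

Once issued, the certificate's ARN can be selected for a distribution in the CloudFront console or referenced in the distribution's ViewerCertificate settings, with SNI Custom SSL enabled by choosing the sni-only SSL support method.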
|
General
|
consultant
|
Best Practices
|
Securely_Access_Services_Over_AWS_PrivateLink
|
This paper has been archived For the latest technical content refer t o HTML version: https://docsawsamazoncom/whitepapers/latest/aws privatelink/awsprivatelinkhtml Securely Access Services Over AWS PrivateLink First published January 2019 Updated June 3 2021 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 2 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 3 Contents Introduction 5 What Is AWS PrivateLink? 6 Why use AWS PrivateLink? 6 What are VPC Endpoints? 7 Interface endpoints 8 Gateway endpoi nts 8 How does AWS PrivateLink work? 9 Creating Highly Available Endpoint Services 10 Endpoint Specific Regional DNS Hostname 10 Zonal specific DNS Hostname 11 Private DNS Hostname 11 Private IP Address of the Endpoint Network Interface 11 Deploying AWS PrivateLink 12 AWS PrivateLink Considerations 12 AWS PrivateLink Configuration 15 UseCase Examples 15 Private Access to SaaS Applications 15 Shared Services 16 Hybrid Services 18 Presenting Microservices 19 InterRegion Endpoint Services 21 InterRegion Access to Endpoint Services 23 Conclusion 24 Contributors 24 Further Reading 25 Document Revisions 25 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 4 Abstract Amazon Virtual Private Cloud (Amazon VPC) gives AWS customers the ability to define a virtual private network within the AWS Cloud Customers can build services securely within an Amazon VPC and provide access to these services internally and externally using traditional methods such as an internet gateway VPC peering network address translation (NAT) a virtual private network (VPN) and AWS Direct Connect This whitepaper presents how AWS PrivateLink keeps network traffic private and allows connectivity fro m Amazon VPCs to services and data hosted on AWS in a secure and scalable manner This paper is intended for IT professionals who are familiar with the basic concepts of networking and AWS Each section has links to relevant AWS documentation This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 5 Introduct ion The introduction of Amazon Virtual Private Cloud (Amazon VPC) in 2009 made it possible for customers to provision a logically isolated section of the AWS 
cloud and launch AWS resources in a virtual network that they define Traditional methods to acces s third party applications or public AWS services from an Amazon VPC include using an internet gateway virtual private network (VPN) AWS Direct Connect with a virtual private gateway and VPC peering Figure 1 illustrates an example Amazon VPC and its associated components: Figure 1: Traditional access from an Amazon VPC This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 6 What is AWS PrivateLink? AWS PrivateLink provides secure private connectivity between Amazon VPCs AWS services and onpremises applications on the AWS network As a result customers can simply and securely access services on AWS using Amazon’s private network powering connectivity to AWS services through interface Amazon VPC endpoints Refer to Figure 2 for Amazon VPCtoVPC connectivity using AWS PrivateLink Figure 2: Amazon VPCtoVPC connectivity with AWS PrivateLink AWS PrivateLink also allows customers to create an application in their Amazon VPC referred to as a service provider VPC and offers that application as an AWS PrivateLink enabled service or VPC endpoint service A VPC endpoint service lets customers host a service and have it acces sed by other consumers using AWS PrivateLink Why use AWS PrivateLink? Prior to the availability of AWS PrivateLink services residing in a single Amazon VPC were connected to multiple Amazon VPCs either (1) through public IP addresses using each VPC’s int ernet gateway or (2) by private IP addresses using VPC peering With AWS PrivateLink service connectivity over Transmission Control Protocol (TCP) can be established from the service provider’s VPC to the service consumers’ VPCs in a secure and scalable manner AWS PrivateLink provides the following three main benefits: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 7 Use Private IP Addresses for Traffic AWS PrivateLink provides Amazon VPCs with a secure and scalable way to privately connect to AWS hosted services AWS PrivateLink traffic does not use public internet protocols (IP) addresses nor traverse the internet AWS PrivateLink uses private IP addresses and security groups within an Amazon VPC so that services function as though they were hosted directly within an Amazon VPC Simplify Network Management AWS PrivateLink helps avoid both (1) security policies that limit benefits of internet gateways and (2) complex networking across a large number of Amazon VPCs AWS PrivateLink is easy to use and manage because it re moves the need to whitelist public IPs and manage internet connectivity with internet gateways NAT gateways or firewall proxies AWS PrivateLink allows for connectivity to services across different accounts and Amazon VPCs with no need for route table mo difications There is no longer a need to configure an internet gateway VPC peering connection or Transit VPC to enable connectivity A Transit VPC connects multiple Amazon Virtual Private Clouds that might be geographically disparate or running in separ ate AWS accounts to a common Amazon VPC that serves as a global network transit center This network topology simplifies network management and minimizes the number of connections that you need to set up and manage It 
is implemented virtually and does no t require any physical network gear or a physical presence in a colocation transit hub Facilitate Your Cloud Migration AWS PrivateLink gives on premises networks private access to AWS services via AWS Direct Connect Customers can more easily migrate traditional on premises applications to services hosted in the cloud and use cloud services with the confidence that traffic remains private What are VPC Endpoints? A VPC endpoint enables customers to privately connect to supported AWS services and VPC endpoint services powered by AWS PrivateLink Amazon VPC instances do not require public IP addresses to communicate with resources of the service Traffic between an Amazon VPC and a service does not leave the Amazon network This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 8 VPC endpoints are virtual devices They are horizontally scaled redundant and highly available Amazon VPC components that allow communication between instances in an Amazon VPC and services without imposing availability risks or bandwidth c onstraints on network traffic There are two types of VPC endpoints: (1) interface endpoints and (2) gateway endpoints Interface endpoints Interface endpoints enable connectivity to services over AWS PrivateLink These services include some AWS managed services services hosted by other AWS customers and partners in their own Amazon VPCs (referred to as endpoint services) and supported AWS Marketplace partner services The owner of a service is a service provider The principal creating the inte rface endpoint and using that service is a service consumer An interface endpoint is a collection of one or more elastic network interfaces with a private IP address that serves as an entry point for traffic destined to a supported service Interface endp oints currently support over 17 AWS managed services Check the AWS documentation for VPC endpoints for a list of AWS services that are available over AWS PrivateLink Gateway endpoints A gateway endpoint targets specific IP routes in an Amazon VPC route table in the form of a prefix list used for traffic destined to Amazon DynamoDB or Amazon Simple Storage Service (Amazon S3) Gateway endpoints do not enable AWS PrivateLink More information about gateway endpoints is in the Amazon VPC User Guide Instances in an Amazon VPC do not require public IP addresses to communicate with VPC endpoints as interface endpoints use local IP addresses within the consumer Amazon VPC Gateway endpoints are destinations that are reachable from within an Amazon VPC through prefix lists within the Amazon VPC’s route table Refer to Figure 3 showing connectivity to AWS services using VPC endpoints This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers 9 Figure 3: Connectivity to AWS services using VPC endpoints How does AWS PrivateLink work? 
AWS PrivateLink uses Network Load Balancers to connect interface endpoints to services A N etwork Load Balancer functions at the network transport layer (layer 4) and can handle millions of requests per second In the case of AWS PrivateLink it is represented inside the consumer Amazon VPC as an endpoint network interface Customers can specify multiple subnets in different Availability Zones to ensure that their service is resilient to an Availability Zone service disruption To achieve this they can create endpoint network interfaces in multiple subnets mapping to multiple Availability Zones An endpoint network interface can be viewed in the account but customers cannot manage it themselves For more information see Elastic Network Interfaces This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 10 Creat ing Highly Available Endpoint Services The creation of VPC endpoint services goes through four stages which we develop here The generation of a DNS hostname the use of private IP address the deployment of the endpoint and its configuration In Figure 4 the account owner of VPC B is a service provider and has a service running on instances in subnet B The owner of VPC B has a service endpoint (vpce svc1234) with an associated Network Load Balancer that points to the instances in subnet B as targets Instances in subnet A of VPC A use an interface endpoint to access the services in subnet B Figure 4: Detailed Amazon VPC toVPC connectivity with AWS PrivateLink When an interface endpoint is created endpoint specific Domain Name System (DNS) hostnames are generated that can be used to communicate with the service After creating the endpoint requests can be submitted to the provider’s service through one of the following methods: Endpoint Specific Regional DNS Hostname Customers generate an e ndpoint specific DNS hostname which includes all zonal DNS hostnames generated for the interface endpoint The hostname includes a unique This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 11 endpoint identifier service identifier the region and vpceamazonawscom in its name; for example : vpce0fe5b17a070 7d6abc29p5708sec2us east1vpceamazonawscom Zonal specific DNS Hostname Customers generate a zonal specific DNS hostname for each Availability Zone in which the endpoint is available The hostname includes the Availability Zone in its name; for exampl e: vpce0fe5b17a0707d6abc 29p5708s useast1aec2us east 1vpceamazonawsco Private DNS Hostname If enabled customers can use a private DNS hostname to alias the automatically created zonal specific or regional specific DNS hostnames into a friendly h ostname such as: myserviceexamplecom Private IP Address of the Endpoint Network Interface The private IP address of the endpoint network interface in the VPC is directly reachable to access the service in and across Availability Zones in the same way the zonal specific DNS hostname is Service providers that use zonal DNS hostnames to access the service can help achieve high availability by enabling cross zone load balancing Cross zone load balancing enables the load balancer to distribute traffic across the registered targets in all enabled Availability Zones Regional data transfer charges may apply to a service 
provider’s account when they enable cross zone load balancing as data could potentially transfer between Availability Zones In Figure 5 the owner of VPC B is the service provider and has configured a Network Load Balancer with targets in two different Availability Zones The service consumer (VPC A) has created interface endpoints in the same two Availability Zones in their Amazon VPC Requests to the service from instances in VPC A can use either interface This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 12 endpoint The DNS name resolution of the Endpoint Specific Regional DNS Hostname will alternate between the two IP addresses Figure 5: Round robin DNS load balancing Deploying AWS PrivateLink AWS PrivateLink Considerations When deploying an endpoint customers should consider the following: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 13 • Traffic will be sourced from the Network Load Balancer inside the service provider Amazon VPC When service consumers send traffic to a service through an interface endpoint the source IP addresses provided to the application are the private IP addresses of the Network Load Balancer nodes and not the IP addresses of the service consumers • Proxy Protocol v2 can be enabled to gain insight into the network traffic Network Load Balancers use Proxy Protocol v2 to send additional connection inform ation such as the source and destination This may require changes to the application • Proxy Protocol v2 can be enabled on the load balancer and the client IP addresses can be obtained from the Proxy Protocol header when IP addresses of the service consume rs and their corresponding interface endpoint IDs are needed • Customers can create an Amazon Simple Notification Service (SNS) to receive alerts for specific events that occur on the endpoints that are attached or when they attempt to attach to their endpo int service For example one can receive an email when an endpoint request is accepted or rejected for the endpoint service • The Amazon SNS topic that a customer can use for notifications must have a topic policy that allows the VPC endpoint service to publish notifications on your behalf Include the following statement in the topic policy: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 14 { "Version" : "20121017" "Statement" : [ { "Effect" : "Allow" "Principal" : { "Service" : "vpceamazonawscom" } "Action" : "SNS:Publish" "Resource" : "arn:aws:sns: region:account:topic name" } ] } For more information see the documentation on Authentication and Access Control for Amazon SNS • Endpoint services cannot be tagged • The private DNS of the endpoint does not resolve outside of the Amazon VPC For more information read accessing a service through an interface endpoint Note that private DNS hostnames can be configured to point to endpoint network interface IP addresses directly Endpoint services are available in the AWS Region in which they are created and can be accessed in remote AWS Regions using InterRegion VPC Peering • If an endpoint service is asso ciated with multiple Network Load 
Balancers then for a specific Availability Zone an interface endpoint will establish a connection with one load balancer only • Availability Zone names in a customer account might not map to the same locations as Availabi lity Zone names in another account For example the Availability Zone US EAST 1A might not be the same Availability Zone as US EAST 1A for another account An endpoint service gets configured in Availability Zones according to their mapping in a customer’s account • For low latency and fault tolerance we recommend creating a Network Load Balancer with targets in each available Availability Zone of the AWS Region This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 15 AWS PrivateLink Configuration Full details on how to configure AWS PrivateLink can be found from the documentation on interface VPC endpoints UseCase Examples This section showcases some of the most common use cases for consuming and providing AWS PrivateLink endpoint services Private Access to SaaS Applications AWS PrivateLink enables Software asaService (SaaS) providers to build highly scalable and secure services on AWS Service providers can privately expose their service to thousands of customers on AWS with ease A SaaS (or service) provider can use a Network Load Balancer to target instances in their Amazon VPC which will represent their endpoint service Customers in AWS can then be granted access to the endpoint service and create an interface VPC endpoint in their own Amazon VPC that is associated with the endpo int service This allows customers to access the SaaS provider’s service privately from within their own Amazon VPC Follow the best practice of creating an AWS PrivateLink endpoint in each Availability Zone within the region that the service is deployed i nto This provides a highly available and lowlatency experience for service consumers Service consumers who are not already on AWS and want to access a SaaS service hosted on AWS can utilize AWS Direct Connect for private connectivity to the service provider Customers can use an AWS Direct Connect connection to access service provider services hosted in AWS For example a customer is interested in understanding their log data and selects a logging analytics SaaS offering hosted on AWS to ingest their lo gs in order to create visual dashboards One way of transferring the logs into the SaaS provider’s service is to send them to the public facing AWS endpoints of the SaaS service for ingestion With AWS PrivateLink the service provider can create an endpoi nt service by placing their service instances behind a Network Load Balancer enabling customers to create an interface VPC endpoint in their Amazon VPC that is associated with their endpoint service As a result customers can privately and securely transfer log data to an This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS P rivateLink 16 interface VPC endpoint in their Amazon VPC and not over public facing AWS endpoints See the following figure for an illustration Figure 6: Private connectivity to cloud based SaaS services Shared Services As customers d eploy their workloads on AWS common service dependencies will often begin to emerge among the workloads These shared services include security services 
logging monitoring Dev Ops tools and authentication to name a few These common services can be abstracted into their own Amazon VPC and shared among the workloads that exist in their own separate Amazon VPCs The Amazon VPC that contains and shares the common services is often referred to as a Shared Services VPC Traditionally workloads inside Ama zon VPCs use VPC peering to access the common services in the Shared Services VPC Customers can implement VPC peering effectively however there are caveats VPC peering allows instances from one This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 17 Amazon VPC to talk to any instance in the peered VPC Cust omers are responsible for implementing fine grained network access controls to ensure that only the specific resources intended to be consumed from within the Shared Services VPC are accessible from the peered VPCs In some cases a customer running at sca le can have hundreds of Amazon VPCs and VPC peering has a limit of 125 peering connections to a single Amazon VPC AWS PrivateLink provides a secure and scalable mechanism that allows common services in the Shared Services VPC to be exposed as an endpoin t service and consumed by workloads in separate Amazon VPCs The actor exposing an endpoint service is called a service provider AWS PrivateLink endpoint services are scalable and can be consumed by thousands of Amazon VPCs The service provider creates an AWS PrivateLink endpoint service using a Network Load Balancer that then only targets specific ports on specific instances in the Shared Services VPC For high availability and low latency we recommend using a Network Load Balancer with targets in at least two Availability Zones within a region A service consumer is the actor consuming the AWS PrivateLink endpoint service from the service provider When a service consumer has been granted permission to consume the endpoint service they create an interface endpoint in their VPC that connects to the endpoint service from the Shared Services VPC As an architectural best practice to achieve low latency and high availability we recommend creating an Interface VPC endpoint in each available Availab ility Zones supported by the endpoint service Service consumer VPC instances can use a VPC’s available endpoints to access the endpoint service via one of the following ways: (1) the private endpoint specific DNS hostnames that are generated for the inte rface VPC endpoints or (2) the Interface VPC endpoint’s IP addresses Onpremises resources can also access AWS PrivateLink endpoint services over AWS Direct Connect Create an Amazon VPC with up to 20 interface VPC endpoints and associate with the endpoin t services from the Shared Services VPC Terminate the AWS Direct Connect connection’s private virtual interface to a virtual private gateway Next attach the virtual private gateway to the newly created Amazon VPC Resources onpremises are then able to access and consume AWS PrivateLink endpoint services over the AWS Direct connection The following figure illustrates a shared services Amazon VPC using AWS PrivateLink This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 18 Figure 7: Shared Services VPC using AWS PrivateLink Hybrid Services As customers start 
their migration to the cloud a common architecture pattern used is a hybrid cloud environment This means that customers will begin to migrate their workloads into AWS over time but they will also start to use native AWS services to serve their clients In a Shared Services VPC the instances behind the endpoint service exist on the AWS cloud AWS PrivateLink allows you to extend resource targets for the AWS PrivateLink endpoint service to resources in an onpremises data center The Network Load Balancer for the AWS PrivateLink endpoint service can use resources in an on premises data center as well as instances in AWS Service consumers on AWS still access the AWS PrivateLink endpoint service by creating an interface VPC endpoint that is associated with the endpoint service in their VPC but the requests they make over the interface VPC endpoint will be forwarded to resources in the onpremises data center This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Servi ces Over AWS PrivateLink 19 The Network Load Balancer enables the extension of a service architecture to l oad balance workloads across resources in AWS and on premises resources and makes it easy to migrate tocloud burst tocloud or failover tocloud As customers complete the migration to the cloud on premises targets would be replaced by target instance s in AWS and the hybrid scenario would convert to a Shared Services VPC solution See the following figure for a diagram on hybrid connectivity to services over AWS Direct Connect Figure 8: Hybrid connectivity to services over AWS Direct Connect Presenting Microservices Customers are continuing to adopt modern scalable architecture patterns for their workloads A microservice is a variant of the service oriented architecture (SOA) that structures an application as a collection of loosely coupled services that do one specialized job and do it well AWS PrivateLink is well suited for a microservices environment Customers can give teams who own a particular service an Amazon VPC to develop and deploy their service in Once they are ready to deploy the service for consumption by other services they can create an endpoint service For example endpoint service may consist of a Network Load Balancer that can target Amazon Elastic Compute Cloud (Amazon EC2) This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 20 instances or containers on Amazon Elas tic Container Service (Amazon ECS) Service teams can then deploy their microservices on either one of these platforms and the Network Load Balancer would provide access to the service A service consumer would then request access to the endpoint service a nd create an interface VPC endpoint associated with an endpoint service in their Amazon VPC The service consumer can then begin to consume the microservice over the interface VPC endpoint The architecture in Figure 9 shows microservices which are segment ed into different Amazon VPCs and potentially different service providers Each of the consumers who have been granted access to the endpoint services would simply create interface VPC endpoints associated with the given endpoint service in their Amazon V PC for each of the microservices it wishes to consume The service consumers will communicate with the AWS PrivateLink 
endpoints via endpoint specific DNS hostnames that are generated when the endpoints are created in the Amazon VPCs of the service consume r The nature of a microservice is to have a call stack of various microservices throughout the lifecycle of a request What is illustrated as a service consumer in Figure 9 can also become a service provider The service consumer can aggregate what it nee ds from the services it consumed and present itself as a higher level microservice This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 21 Figure 9: Presenting Microservices via AWS PrivateLink Inter Region Endpoint Services Customers and SaaS providers who host their service in a single region can extend their service to additional regions through Inter Region VPC Peering Service providers can leverage a Network Load Balancer in a remote region and create an IP target group that uses the IPs of their instance fleet in the remote region hosting the service InterRegion VPC Peering traffic leverages Amazon’s private fiber network to ensure that services communicate privately with the AWS PrivateLink endpoint service in the remote region This allows the service consumer to use local interface VPC endpoints to connect to an endpoint service in an Amazon VPC in a remote region Figure 10 shows Inter Region Endpoint services A service provider is hosting an AWS PrivateLink endpoint service in the US EAST 1 Region Service consumers of the This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securel y Access Services Over AWS PrivateLink 22 endpoint service require the service provider to provide a local interface VPC endpoint that is associated with the endpoint service in the EUWEST 2 region Service providers c an use Inter Region VPC Peering to provide local endpoint service access to their customers in remote regions This approach can help the service providers gain the agility to provide the access their customers want while not having to immediately deploy their service resources in the remote regions but instead deploying them when they are ready If the service provider has chosen to expand their service resources into remote regions that are currently using Inter Region VPC Peering th e service provider will have to remove the targets from the Network Load Balancer in the remote region and point them to the targets in the local region Since the remote endpoint service is communicating with resources in a remote region additional laten cy will be incurred when the service consumer communicates with the endpoint service The service provider will also have to cover the costs for the Inter Region VPC Peering data transfer Depending on the workload this could be a long term approach for some service providers so long as they evaluate the pros and cons of the service consumer experience and their own operating model Figure 10: Inter Region Endpoint Services This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 23 Inter Region Access to Endpoint Services As customers expand their global footprint by deploying workloads in multiple AWS regions across the globe they will need to ensure that the 
services that depend on AWS PrivateLink endpoint services have connectivity from the region they are hosted in Customers can leverage Inter Region VPC Peering to enable services in another region to communicate with interface VPC endpoint terminating the endpoint service which directs traffic to the AWS PrivateLink endpoint service hosted in the remote region InterRegion VPC Peering traffic is transported over Amazon’s network and ensures that your services communicate privately to the AWS PrivateLink endpoint service in the remote Region Figure 11 visualizes the inter region access to endpoint services A customer has deployed a workload in th e EU WEST 1 Region that needs to access an AWS PrivateLink endpoint service hosted in the US EAST 1 Region The service consumer will first need to create an Amazon VPC in the Region where the AWS PrivateLink endpoint service is currently being hosted in They will then need to create an Inter Region VPC Peering connection from the Amazon VPC in their region to the Amazon VPC in the remote Region The service consumer will then need to create an interface VPC endpoint in the Amazon VPC in the remote Region that is associated with the endpoint service The workload in the service consumers Amazon VPC can now communicate with the endpoint service in the remote Region by leveraging Inter Region VPC Peering The service consumer will have to consider the addit ional latency when communicating with endpoint service hosted in the remote Region as well as the inter region data transfer costs between the two Regions This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 24 Figure 11: Inter Region access to endpoint services Conclusion The AWS PrivateLink scenarios and best practices outlined in this paper can help you build secure scalable and highly available architectures for your services on AWS Consider your application’s connectivity requirements before choosing an Amazon VPC connectivity architect ure for your internal or external customers Contributors Contributors to this document include : • Ahsan Ali Global Accounts Solutions Architect Amazon Web Services • David Murray Strategic Solutions Architect Amazon Web Services • James Devine Senior Solutions Architect Amazon Web Services This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Securely Access Services Over AWS PrivateLink 25 • Ikenna Izugbokwe Senior Solutions Architect Amazon Web Services • Matt Lehwess Principal Solutions Architect Amazon Web Services • Tom Clavel Senior Product Marketing Manager Amazon Web Services • Puneet Konghot Senior Product Manager Amazon Web Services Further Reading For additional information see: • Network toAmazon VPC Connectivity Options • AWS PrivateLink Document Revisions Date Description June 3 2021 Updates November 2020 Updates to Figures 6 7 and 8 for clarity January 2019 First publication
|
General
|
consultant
|
Best Practices
|
Security_at_Scale_Governance_in_AWS
|
ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 1 of 16 Security at Scale: Governance in AWS Analysis of AWS features that can alleviate onpremise challenges October 2015 This paper has been archived For the most recent security content see Best Practices for Security Identity and Compliance at https://awsamazoncom/architecture/securityidentitycomplianceArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 2 of 16 Table of C ontents Abstract 3 Introduction 3 Manage IT resources 4 Manage IT assets 4 Control IT costs 5 Manage IT security 6 Control physical access to IT resources 6 Control logical access to IT resources 7 Secure IT resources 8 Manage logging around IT resources 10 Manage IT performance 11 Monitor and respond to events 11 Achieve resiliency 12 ServiceGovernance Feature Index 13 Conclusion 15 References and Further Reading 16 ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 3 of 16 Abstract You can run nearly anything on AWS that you would run on onpremise: websites applications databases mobile apps email campaigns distributed data analysis media storage and private networks The services AWS provides are designed to work together so that you can build complete solutions An often overlooked benefit of migrating workloads to AWS is the ability to achieve a higher level of security at scale by utilizing the many governanceenabling features offered For the same reasons that delivering infrastructure in the cloud has benefits over onpremise delivery cloudbased governance offers a lower cost of entry easier operations and improved agility by providing more oversight security control and central automation This paper describes how you can achieve a high level of governance of your IT resources using AWS In conjunction with the AWS Risk and Compliance whitepaper and the Auditing Security Checklist whitepaper this paper can help you understand the security and governance features built in to AWS services so you can incorporate security benefits and best practices in building your integrated environment with AWS Introduction Industry and regulatory bodies have created a complex array of new and legacy laws and regulation s mandating a wide range of security and organizational governance measures As such research firms estimate that many companies are spending as much as 75% of their IT dollars to manage infrastructure and spending only 25% of their IT dollars on IT aspects that are directly related to the business their companies are providing One of the key ways to improve this metric is to more efficiently address the backend IT governance requirements An easy and effective way to do that is by leveraging AWS’s out ofthebox governance features While AWS offers a variety of IT governanceenabling features it can be hard to decide how to start and what to implement This paper looks at the common IT governance domains by providing the use case ( or the on premise challenge) the AWS enabling features and the associated governance value propositions of using those features This document is designed to help you achieve the objectives of each IT governance domain1 This paper follows the approach of the major domains of comm onlyimplemented IT governance frameworks (eg CoBIT ITIL COSO CMMI etc) ; however the IT governance domains through which the paper is organized are generic to allow any customer to use it to evaluate the governance features of using AWS versus what can be done with your 
onpremise resources and tools The following IT governance domains are discussed through a “usecase ” approach : I want to better 1 While this paper features a robust list of the governanceenabling features because new features are consistently being developed it is not inclusive of all the features available Additional tutorials developer tools documentation can be found at http://awsamazoncom/resources/ Manage my IT resources Manage my IT assets Control my IT costsManage my IT security Control logical access Control physical access Secure IT resources Log IT activitiesManage my IT performance Monitor IT events Achieve IT resiliencyArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 4 of 16 Manage IT resources Manage IT assets Identifying and managing your IT assets is the first step in effective IT governance IT assets can range from the high end routers switches servers hosts and firewalls to the applications services operating systems and other software assets deployed in your network An updated inventory of hardware and software assets is vital for decisions on upgrades and purchases tracking warranty status or for troubleshooting and security reasons It is becoming a business imperative to have an accurate asset inventory listing to provide on demand views and comprehensive reports Moreover comprehensive a sset inventories are specifically required for certain compliance regulations For example FISMA SOX PCI DSS and HIPAA all mandate accurate asset inventories as a part of their requirements However the nature of pieced together onpremise resources ca n make maintaining this listing arduous at best and impossible at worst Often organizations have to employ third party solutions to enable automation of the asset inventory listing and even then it is not always possible to see a detailed inventory of every type of asset on a single console Using AWS there are multiple features available for you to quickly and easily obtain an accurate inventory of your AWS IT resources Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Account Activity page Provides a sum marized listing of IT resources by detailing usage of each service by region Learn more Amazon Glacier vault inventory Provides Glacier data inventory by showing all IT resources in Glacier Learn more AWS CloudHSM Provides virtual and physical control over encryption keys by providing customer dedicated HSMs for key storage Learn more AWS Data Pipeline Task Runner Provides automated processing of tasks by polling the AWS Data Pipeline for tasks and then performing and reporting status on those tasks Learn more AWS Management Console Provides a real time inventory of assets and data by showing all IT resources running in AWS by service Learn more AWS Storage Gateway APIs Provide the capability to programmatically inventory assets and data by programming interfaces tools and scripts to manage reso urces Learn more ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 5 of 16 Control IT c osts You can better control your IT costs by obtaining resources in the most cost effective way by understand ing the costs of your IT services However managing and tracking the costs and ROI associated with IT resource spend onpremise can be difficult and inaccurate because the calculations are so complex; capacity planning predictions of use purchasing costs depreciation 
cost of capital and facilities costs are just a few aspects that make total cost of ownership difficult to calculate Using AWS there are multiple features available for you to easily and accurately understand and control your IT resource costs U sing AWS you can achieve cost savings of up to 80% compared to the equ ivalent on premises deployments2 Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Account Activity page Provides an anytime view of spending on IT resources by showing resources used by service Learn more Amazon EC2 i dempotency instance launch Helps p revent erroneous launch of resources and incurrence of additional costs by preventing timeouts or connection errors from launching additional instances Learn more Amazon EC2 r esource tagging Provides association between resource expenditures and business units by applying custom searchable labels to compute resources Learn more AWS Account Billing Provides easy touse billing features that help you monitor and pay your bill by detailing resources used and associated actual compute costs incurred Learn more AWS Management Console Provides a one stop shop view for cost drivers by showing all IT resources running in AWS by service including actual costs and run rate Learn more AWS service pricing Provides definitive awareness of AWS IT resource rates by providing pricing for each AWS product and specific pricing characteristics Learn more AWS Trusted Advisor Helps o ptimize cost of IT resources by identifying unused and idle resources Learn more Billing Al arms Provides proactive alerts on IT resource spend by sending notifications of spending activity Learn more Consolidated billing Provides centralized cost control and cross account cost visibility by combining multiple AWS accounts into one bill Learn more 2 See the Total Cost of Ownership Whitepaper for more information on overall cost savings using AWS ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 6 of 16 Payasyougo pricing Provides computing resources and services that you can use to build applications within minutes at pay asyougo pricing with no up front purchase costs or ongoing maintenance costs by automatically scaling into multiple servers when demand for your application increases Learn more Manage IT security Control p hysical access to IT resources Physical access management is a key component of IT governance programs In addition to the locks security alarms access controls and surveillance videos that define the traditional components of physical security the electronic controls over physical access are also paramount to effective physical security The traditional physical security industry is in rapid transition and areas of specialization are surfacing making physical security vastly more complex As the onpremise physical security considerations and controls have become more complex there is an increased need for uniquely qualified and specialized IT security professionals to manage the significant effort required to achieve effective physical control around access credentials for cards/card readers controllers and system servers for hosting data around physical security Using AWS you can easily and effectively outsource controls related to physical security of your AWS infrastructure to AWS specialists with the skillsets and resources needed to secure the physical environment AWS has multiple different 
independent auditors validate the data center physical security throughout the year attesting to the design and detailed testing of the effectiveness of our physical security controls Learn more about the AWS audit programs and associated physical security controls below: AWS governance enabling feature How you get security at scale AWS SOC 1 physical access controls Provides transparency into the controls in place that prevent unauthorized access to data centers Controls are properly designed tested and audited by an independent audit firm Learn more AWS SOC 2 Security physical access controls Provides transparency into the controls in place that p revent unauthorized access to data centers Controls are properly designed tested and audited by an independent audit firm Learn more AWS PCI DSS physical access controls Provides transparency into the controls in place that prevent unauthorized access to data centers relevant to the Payment Card Industry Data Security Standard Controls are properly designed tested and audited by an independent audit firm Learn more AWS ISO 27001 physical access controls Provides transparency into the controls and processes in place that prevent unauthorized access to data centers relevant to the ISO 27002 security best practice s tandard Controls are properly designed tested and audited by an independent audit firm Learn more ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 7 of 16 AWS FedRAMP physical access controls Provides transparency into the controls and processes in place that prevent unauthorized access to data centers relevant to the NIST 800 53 best practice standard Controls are properly des igned tested and audited by a government accredited independent a udit firm Learn more Control logical a ccess to IT resources One of the primary objectives of IT governance is to effectively manage logical access to computer systems and data However many organizations are struggling to scale their onpremise solutions to meet the growing and continuously changing number of considerations and complexities around logical access including the ability to establish a rule of least privilege manage permissions to resources address changes in roles and information needs and the growth of sensitive data Major persistent challenges for managing logical access in an onpremise environment are providing users with access based on: Role (ie internal users contractors outsiders partners etc) Data classification (ie confidential internal use only private public etc) Data type (ie credentials personal data contact information workrelated data digital certificates cognitive passwords etc) There are multiple control features AWS offers you effectively manage your logical access based on a matrix of use cases anchored in least privilege Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Amazon S3 Access Control Lists (ACLs) Provides central permissions and conditions by adding specific conditions to control how a user can use AWS such as time of day their originating IP address whether they are using SSL or whether they have authenticated with a Multi Factor Authentication device Learn more here and here Amazon S3 Bucket Policies Provides the ability to create conditional rules for managing access to their buckets and objects by allowing you to restrict access based on account as well as request based attributes such as HTTP 
referrer and IP address Learn more Amazon S3 Query String Authentication Provides the ability to give HTTP or browser access to resources that would normally require authentication by using the signature in the query string to secure the request Learn more AWS CloudTrail Provides logging of API or console actions (eg log if someone changes a bucket policy stops and instance etc) allowing advanced monitor ing capabilities Learn more AWS IAM Multi Fact or Authentication (MFA) Provides enforcement of MFA across all resources by requiring a token to sign in and access resources Learn more ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 8 of 16 AWS IAM password policy Provides the ability to manage the quality and controls around your users’ passwords by allowing you to set a password policy for the passwords used by IAM users that specifies that passwords must be of a certain length must include a selection of charact ers etc Learn more AWS IAM Permissions Provides the ability to easily manage permissions by letting you specify who has access to AWS resources and wha t actions they can perform on those resources Learn more AWS IAM Policies Enables you to achieve detailed least privilege access management by allowing you to create multiple users within your AWS account assign them security credentials and manage their permissions Learn more AWS IAM Roles Provides the ability to temporarily delegate access to users or services that normally don't have access to your AWS resources by defining a set of permissions to access the resources that a user or service needs Learn more AWS Trusted Advisor Provides automated security management assessment by identifying and escalating possible security and permission issues Learn more Secure IT resources Securing IT resources is the cornerstone of IT governance programs However for onpremise environments there is a litany of security steps that must be taken when a new server is brought online For example firewall and access control policies must be updated the newly created server image must be verified to be in compliance with security policy and all software packages have to be up to date Unless these security tasks are automated and delivered in a way that can keep up with the highly dynamic needs of the business organizations working solely with traditional governance approaches will either cause users to work around the security controls or will cause costly delays for the business AWS provides multiple security features that enable you to easily and effectively secure your IT resources Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Amazon Linux AMIs Provides the ability to c onsistently deploy a " gold" (hardened) image by developing a private image to be used in all instance deployments Learn more Amazon EC2 Dedicated Instances Provides a private isolated virtual network and ensures that your Amazon EC2 compute instances are be isolated at the hardware level and launching these instances into a VPC Learn more Amazon EC2 instance launch wizard Enables consi stent launch process by providing restrictions around machine images available when launching instances Learn more ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 9 of 16 Amazon EC2 security groups Provides granular control over inbound and outbound traffic by acting as a firewall that controls the traffic 
for one or more instances Learn more Amazon Glacier archives Provides inexpensive long term storage service for securing and durably storage for data archiving and backup using AES 256 bit encryption by default Learn more Amazon S3 Client Side Encryption Provides th e ability to encrypt your data before sending it to Amazon S3 by building your own library that encrypts your objects data on the client side before uploading it to Amazon S3 The AWS SDK for Java can also automatically encrypt your data before uploading i t to Amazon S3 Learn more Amazon S3 Server Side Encryption Provides encryption of objects at rest and keys managed by AWS by using AES 256 bit encryption for Amazon S3 data Learn more Amazon VPC Provides a virtual network closely resembling a traditional network that is operated on premise but with benefits of usi ng the scalable infrastructure of AWS Allows you to create logically isolated section s of AWS where you can launch AWS resources in a virtual network that you define Learn more Amazon VPC logical isolation Provides virtual isolation of resources by allowing machine images to be isolated from other networked resources Lear n more Amazon VPC network ACLs Provides ‘firewall type’ isolation for associated subnets by controlling inbound and outbound traffic at the subnet level Learn more Amazon VPC private IP address es Helps p rotect private IP addresses from internet exposure by routing their traffic through a Network Address Translation (NAT) instance in a public subnet Learn more Amazon VPC security groups Provides ‘firewall type’ isolation for associated Amazon EC2 instances by controlling inbound and outbound traffic at the instance level Learn more AWS CloudFormation templates Provides the ability to c onsistently deploy a specific machine image along with other resources and conf igurations by provisioning infrastructure with scripts Learn more AWS Direct Connect Removes need for a publi c Internet connection to AWS by establishing a dedicated network connection from your premises to AWS ’ datacenter Learn more Onpremise hardware/software VPN connections Provides granular control over network security by allowing secure connectio ns from existing network to AWS Learn more ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 10 of 16 Virtual private gateways Provides granular control over network security by providing a way to create a Hardware VPN Connection to your VPC Learn more Manage logging around IT resources A major enabler of securing IT is the logging around IT resources Logging is critically important to IT governance for a variety of use cases including but not limited to: detecting/tracking suspicious behavior supporting forensic analysis meeting compliance requirements supporting IT/networking maintenance and operations managing/reducing IT security costs monitoring service levels and supporting internal business processes Organizations are increasingly dependent on effective log management to support core governance functions including cost management service level and line ofbusiness application monitoring and other IT security and compliance focused activities The SANS Log Management Survey consistently shows that organizations are continuously seeking more uses from their logs but are encountering friction in their ability to achieve that use cases using onpremise resources to collect and analyze those logs With more log types to collect and analyze from different IT resources organizations are challenged by the 
manual overhead associated with normalizing log data that is in widely different formats as well as with the searching correlating and reporting functionalities Log management is a key capability for security monitoring compliance and effective decisionmaking for the tens or hundreds ofthousands of activities each day Using AWS there are multiple logging features that enable you to effectively log and track the use of your IT resources Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Amazon CloudFront access log s Provides log files with information about end user access to your objects Logs can be distributed directly to a specific Amazon S3 bucket Learn more Amazon RDS database logs Provides a way to monitor a number of log files generated by your Amazon RDS DB Instances Used to diagnose trouble shoot and fix database configuration or performance issues Learn more Amazon S3 Object Expiration Provides automated log expiration by schedul ing removal of objects after a defined time period Learn more Amazon S3 server access logs Provides logs of access requests with details about th e request such as the request type the resource with which the request was made and the time and date that the request was processed Learn more AWS CloudTrail Provides log s of security actions done via the AWS Management Console or APIs Learn more ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 11 of 16 Manage IT performance Monitor and respond to event s IT performance management and monitoring has become a strategically important part of any IT governance program IT monitoring is an essential element of governance that allows you to prevent detect and correct IT issues that may impact performance and/or security The key governance challenge in onpremise environments around IT performance management is that you are faced with multiple monitoring systems to manage every layer of your IT resources and the mix of proprietary management tools and IT processes results in a systemic complexity that can at best slow response times and at worst impact the effectiveness of your IT performance monitoring and management Moreover the increasing complexity and sophistication of security threats mean that event monitoring and response capabilities need to continuously and rapidly evolve to address emerging threats As such onpremise performance management is continuously faced with growing challenges around infrastructure procurement scalability ability to simulate test conditions across multiple geographies etc Using AWS there are multiple monitoring features that enable you to easily and effectively monitor and manage your IT resources Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Amazon Cloud Watch Provides statistical data you can use to view analyze and set alarm s on the operational behavior of your instances These metrics include CPU utilization network traffic I/O and latency Learn more Amazon Cloud Watch alarms Provides consistent alarming for critical events by providing custom metrics alarms and notifications for event s Learn more Amazon EC2 i nstance status Provides instance status checks that summarize results of automated tests and provides information about c ertain acti vities that are scheduled for your instances Uses automated checks 
to detect whether specific issues are affecting your instances Learn more Amazon Incident Management Team Provides continuous incident detection monitoring and management with 24 7365 staff operators to support detection diagnostics and resolution of certain security events Learn more Amazon S3 TCP selective acknowledgement Provides the ability to improve recovery time after a large number of packet losses Learn more Amazon Simple Notification Service Provides consistent alarming for critical events by managing the delivery of messages to subscribing endpoints or clients Learn more AWS Elastic Beanstalk Provides ability to monitor application deployment details of capacity provisioning load balancing auto scaling and application health monitoring Learn more Elastic Load Balancing Provides the ability to automatically distribute your incoming application traffic across multiple Amazon EC2 instances by detecting ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 12 of 16 over utilized instances and rerouting traffic to underutilized instances Learn more Achieve resiliency Data protection and disaster recovery planning should be a priority component of IT governance for all organizations Arguably the value of DR is not in question; every organization is concerned about its ability to get back up and running after an event or disaster But implementing governance around IT resource resiliency can be expensive and complex as well as tedious and timeconsuming Organizations are faced with a growing number of events that can cause unplanned downtime and operational blockers These events can be caused by technical problems (eg viruses data corruption human error etc) or natural phenomena (eg fires floods power failures weatherrelated outages etc) As such organizations are faced with increasing costs and complexity in planning testing and operating onpremise failover sites because of continual data growth In the face of these challenges cloud computing’s server virtualization enables the quality resiliency programs to be feasible and costeffective Using AWS there are multiple features that enable you to easily and effectively achieve resiliency for your IT resources Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Amazon EBS snapshots Provides highly available highly reliable predictable storage volumes with incremental point in time backup control of server data Learn more Amazon RDS Multi AZ Depl oyments Provides the ability to safeguard your data in the event with automated availability controls homogenous resilient architecture Learn more AWS Import/Export Provides the ability to move massive amounts of data locally by creating import and export jobs quickly using Amazon’s high speed internal network Learn more AWS Storage Gateway Provides seamless and secure integration between your on premises IT environment and AWS's storage infrastructure by scheduling snapshots that the gateway stores in Amazon S3 in the form of Amazon EBS snapshots Learn more AWS Trusted Advisor Provides automated performance management and availability control by identifying options to increase the availability and redundancy of your AWS application Learn more Extensive 3rd Party Solutions Provides secure data storage and automated availability control by easily connecting you with a market of applications of tools Learn more Managed AWS No SQL/SQL Database Services 
Provides secure and durable data storage automatically replicating data items across multiple Availability Zones in a Region to provide built in high av ailability and data durability Learn more: Amazon D ynamo DB ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 13 of 16 Amazon RDS Multi region deployment Provides geo diversity in compute locations power grids fault lines etc providing a variety of locations Learn more Route 53 health checks and DNS failover Monitors availability of stored backup data by allowing you to configure DNS failover in active active active passive and mixed configurations to improve the availability of your application Learn more Service Governance Feature Index The information above is presented by governance domain For your reference a summary of governance feature by major AWS services is described in the table below: AWS Service Governance Feature Amazon EC2 Amazon EC2 idempotency instance launch Amazon EC2 resource tagging Amazon Linux AMIs Amazon EC2 Dedicated Instances Amazon EC2 instance launch wizard Amazon EC2 security groups Elastic Load Balancing Elastic Load Balancing traffic distribution Amazon VPC Amazon VPC Amazon VPC logical isolation Amazon VPC network ACLs Amazon VPC private IP addresses Amazon VPC security groups Onpremise hardware/software VPN connections Amazon Route 53 Amazon Route 53 latency resource record sets Route 53 health Checks and DNS failover AWS Direct Connect AWS Direct Connect Amazon S3 Amazon S3 Access Control Lists (ACLs) ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 14 of 16 Amazon S3 Bucket Policies Amazon S3 Query String Authentication Amazon S3 Client Side Encryption Amazon S3 Server Side Encryption Amazon S3 Object Expiration Amazon S3 server access logs Amazon S3 TCP selective acknowledgement Amazon S3 TCP window scaling Amazon Glacier Amazon Glacier vault inventory Amazon Glacier archives Amazon EBS Amazon EBS snapshots AWS Import/Export AWS Import/Export bulk datano… AWS Storage Gateway AWS Storage Gateway integration AWS Storage Gateway APIs Amazon CloudFront Amazon CloudFront Amazon CloudFront access logs Amazon RDS Amazon RDS database logs Amazon RDS Multi AZ Deployments Managed AWS No SQL/SQL Database Services Amazon Dynamo DB Managed AWS No SQL/SQL Database Services AWS Management Console Account Activity page AWS Account Billing AWS service pricing AWS Trusted Advisor Billing Alarms Consolidated billing Payasyougo pricing ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 15 of 16 AWS CloudTrail Amazon Incident Management Team Amazon Simple Notification Service Multi region deployment AWS Identity and Access Management (IAM) AWS IAM Multi Factor Authentication (MFA) AWS IAM password policy AWS IAM Permissions AWS IAM Policies AWS IAM Roles Amazon CloudWatch AWS CloudWatch Dashboard Amazon CloudWatch alarms AWS Elastic Beanstalk AWS Elastic Beanstalk monitoring AWS CloudFormation AWS CloudFormation templates AWS Data Pipeline AWS Data Pipeline Task Runner AWS CloudHSM CloudHSM key storage AWS Marketplace Extensive 3rd Party Solutions Data Centers AWS SOC 1 physical access controls AWS SOC 2 Security physical access controls AWS PCI DSS physical access controls AWS ISO 27001 physical access controls AWS FedRAMP physical access controls Conclusion The primary focus of IT Governance is around managing resources security and performance in order to deliver value in strategic alignment with the goals of the 
business. Given the rate of growth and increasing complexity in technology, it is increasingly challenging for on-premises environments to scale to provide the granular controls and features needed to deliver quality IT governance in a cost-efficient manner. For the same reasons that delivering infrastructure in the cloud has benefits over on-premises delivery, cloud-based governance offers a lower cost of entry, easier operations, and improved agility by providing more oversight and automation that enables organizations to focus on their business.

References and Further Reading

What can I do with AWS? http://aws.amazon.com/solutions/aws-solutions/
How can I get started with AWS? http://docs.aws.amazon.com/gettingstarted/latest/awsgsg-intro/gsg-aws-intro.html
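To make the paper's points about tagging-driven asset inventory and cost association concrete, the following is a small, hypothetical boto3 sketch; it is not part of the original paper, and the tag keys, tag values, instance IDs, and Region are placeholders.

```python
"""Illustrative sketch: tag EC2 instances for cost and ownership reporting,
then list them by tag, approximating the asset-inventory and resource-tagging
features described above. Placeholders throughout."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Associate resources with a business unit so spend and inventory can be
# grouped by tag (see "Amazon EC2 resource tagging").
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance ID
    Tags=[
        {"Key": "CostCenter", "Value": "marketing"},
        {"Key": "Owner", "Value": "web-team"},
    ],
)

# Build a simple inventory view filtered by that tag.
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "tag:CostCenter", "Values": ["marketing"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["InstanceType"],
                  instance["State"]["Name"])
```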
|
General
|
consultant
|
Best Practices
|
Security_at_Scale_Logging_in_AWS
|
Security at Scale: Logging in AWS

How AWS CloudTrail can help you achieve compliance by logging API calls and changes to resources

October 2015

This paper has been archived. For the latest technical content, refer to: https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/detection.html

Table of Contents

Abstract 3
Introduction 3
Control Access to Log Files 4
Obtain Alerts on Log File Creation and Misconfiguration 5
Receive Alerts for Log File Creation and Misconfiguration 5
Manage Changes to AWS Resources and Log Files 6
Storage of Log Files 7
Generate Customized Reporting of Log Data 7
Conclusion 8
Additional Resources 9
Appendix: Compliance Program Index 10

Abstract

The logging and monitoring of API calls are key components in security and operational best practices, as well as requirements for industry and regulatory compliance. AWS CloudTrail is a web service that records API calls to supported AWS services in your AWS account and delivers a log file to your Amazon Simple Storage Service (Amazon S3) bucket. AWS CloudTrail alleviates common challenges experienced in an on-premises environment, and in addition to making it easier for you to demonstrate compliance with policies or regulatory standards, the service makes it easier for you to enhance your security and operational processes. This paper provides an overview of common compliance requirements related to logging and details how AWS CloudTrail features can help satisfy these requirements. There is no additional charge for AWS CloudTrail aside from standard charges for Amazon S3 log storage and Amazon SNS usage for optional notifications.

Introduction

Amazon Web Services (AWS) provides a wide variety of on-demand IT resources and services that you can launch and manage with pay-as-you-go pricing. Recording the AWS API calls and associated changes in resource configuration is a critical component of IT governance, security, and compliance. AWS CloudTrail provides a simple solution to record AWS API calls and resource changes that helps alleviate the burden of on-premises infrastructure and storage challenges, and it helps you build enhanced preventative and detective security controls for your AWS environment. On-premises logging solutions require installing agents, setting up configuration files and centralized log servers, and building and maintaining expensive, highly durable data stores. AWS CloudTrail eliminates this burdensome infrastructure setup and allows you to turn on logging in as little as two clicks and gain increased visibility into all API calls in your AWS account. CloudTrail continuously captures API calls from multiple servers into a highly available processing pipeline. To turn on CloudTrail, you simply sign in to the AWS Management Console, navigate to the CloudTrail console, and click to enable logging. Learn more about the services and regions available for use with AWS CloudTrail on the AWS CloudTrail website.

This paper was developed by taking an inventory of logging requirements across common compliance frameworks (for example, ISO 27001:2005, PCI DSS v2.0, and FedRAMP) and combining them into generalized controls and logging domains. You may leverage this paper for a variety of use cases, such as security and operational best practices, compliance with internal policies, industry standards, legal regulations, and more. The paper is written generically to allow anyone to understand how AWS CloudTrail can enhance your existing logging and monitoring activities.
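The "turn on logging" flow described in the introduction can also be automated with the SDK. The following boto3 sketch is illustrative rather than part of the paper: it assumes an existing S3 bucket whose bucket policy already grants CloudTrail write access and an SNS topic whose policy allows CloudTrail to publish; the trail, bucket, and topic names are placeholders.

```python
"""Illustrative sketch: create and start a multi-Region CloudTrail trail that
delivers log files to an existing S3 bucket and optionally notifies an SNS
topic on each delivery. Names are placeholders; the bucket and topic policies
must already permit CloudTrail."""
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

trail = cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="example-cloudtrail-logs-bucket",
    IsMultiRegionTrail=True,                  # capture API calls in all Regions
    IncludeGlobalServiceEvents=True,          # e.g., IAM and STS events
    SnsTopicName="cloudtrail-log-delivery",   # optional delivery notifications
)

# Recording does not begin until logging is explicitly started.
cloudtrail.start_logging(Name=trail["Name"])

status = cloudtrail.get_trail_status(Name=trail["Name"])
print("Logging enabled:", status["IsLogging"])
```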
Control Access to Log Files

To maintain the integrity of your log data, it is important to carefully manage access around the generation and storage of your log files. The ability to view or modify your log data should be restricted to authorized users. A common log-related challenge for on-premises environments is the ability to demonstrate to regulators that access to log data is restricted to authorized users. This control can be time-consuming and complicated to demonstrate effectively, because most on-premises environments do not have a single logging solution or consistent logging security across all systems. With AWS CloudTrail, access to Amazon S3 log files is centrally controlled in AWS, which allows you to easily control access to your log files and helps you demonstrate the integrity and confidentiality of your log data.

Control Access to Log Files

Common logging requirement: Controls exist to prevent unauthorized access to logs.
How AWS CloudTrail can help you achieve compliance: AWS CloudTrail provides you the ability to restrict access to your log files. You can prevent and control changes to your log file data by configuring your AWS Identity and Access Management (IAM) roles and Amazon S3 bucket policies to enforce read-only access to your log files. Learn more. Additionally, you can fortify your authentication and authorization controls by enabling AWS Multi-Factor Authentication (AWS MFA) on the Amazon S3 bucket(s) that store your AWS CloudTrail logs. Learn more.

Common logging requirement: Controls exist to ensure access to log records is role-based.
How AWS CloudTrail can help you achieve compliance: AWS CloudTrail provides you the ability to control user access to your log files based on detailed role-based provisioning. AWS Identity and Access Management (IAM) enables you to securely control access to AWS CloudTrail for your users, and by using IAM roles and Amazon S3 bucket policies you can enforce role-based access to the S3 bucket that stores your AWS CloudTrail log files. Learn more.

Obtain Alerts on Log File Creation and Misconfiguration

Near-real-time alerts on misconfigurations of the logs detailing API calls or resource changes are critically important to effective IT governance and adherence to internal and external compliance requirements. Even from an operational perspective, it is imperative that logging is configured properly to give you the ability to oversee the activities of your users and resources. However, the variability and breadth of logging infrastructure in on-premises environments has made it overwhelming to actively monitor and alert on misconfigurations or changes to your logging configuration. Once you enable AWS CloudTrail for your account, the service will deliver log files to your S3 bucket. Optionally, CloudTrail will publish notifications for log file deliveries to an SNS topic so that you can take action upon delivery. These alerts include the Amazon S3 bucket log file address to allow you to quickly access object metadata about the event from the source log files. Moreover, your AWS Management Console will alert you if your log files are misconfigured and logging is therefore no longer taking place.

Receive
Alerts for Log File Creation and Misconfiguration Common logging requirements How AWS CloudTrail can help you achieve compliance with requirements Provide a lerts when logs are created or fail and follow organization defined actions in the event of a misconfiguration AWS CloudTrail p rovides you immediate notification related to problems with your logging configuration through your AWS Management Console Learn more Alerts related to log misconfiguration will direct users to relevant logs for additional details (and will not divulge unnecessary amount of detail) AWS CloudTrail records the Amazon S3 b ucket log file address every time a new log file is written AWS CloudTrail publishes notifications for log file creation so that customers can take near realtime action when log files are created The notification is delivered to your Amazon S3 bucket and is show n in the AWS Management Console Optionally Amazon SNS messages can be pushed to mobile devices or distributed services configured via API or the AWS Management Console The SNS message for log file creation provides the log file address which limits the information divulged to only the necessary amount while also enabling you to easily link to obtain additional event details Learn more ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 6 of 16 Manage Change s to AWS Resources and Log Files Understanding the changes made to your resources is a critical component of IT governance and security Moreover preventing changes and unauthorized access to th is log data directly impacts the integrity of your change management processes and your ability to comply with internal industry and regulatory requirements around change management A major challenge faced in onpremise environments is the ability to log resource changes or changes to logs because there are only finite resources at your disposal to monitor what feels like an infinite amount of data AWS CloudTrail allows you to track the changes that were made to an AWS resource including creation modification and deletion Additionally by reviewing the log history of API calls AWS CloudTrail helps you investigate an event to determine if unauthorized or unexpected changes occurred by reviewing who initiated them when they occurred and where they originated Optionally CloudTrail will publish notifications to an SNS topic so that you can take action upon delivery of the new log file to your Amazon S3 bucket Manage Changes to IT Resources and Log Files Common logging requirements How AWS CloudTrail can help you achieve compliance with requirements Provide log of changes to system components (includi ng creation and deletion of system level objects) AWS CloudTrail p roduces log data on system change event s to enable tracking of changes made to your AWS resources AWS CloudTrail provides visibility into any changes made to your AWS resource from its c reation to deletion by loggin g changes made using API calls via the AWS Management Console the AWS Command Line Interface (CLI) or the AWS Software Development Kits (SDKs) Learn more Controls exist to prevent modifications to logs of changes or failures associated with logs By default API call log files are encrypted using S3 Server Side Encryption (SSE) and placed into your S3 bucket Modifications to log data can be controlled through use of IAM a nd MFA to enforce read only access to your Amazon S3 bucket that stores your AWS CloudTrail log files Learn more ArchivedAmazon Web Services – Security at Scale: Logging in AWS 
October 2015 Page 7 of 16 Storage of Log Files Industry standards and legal regulations may require that log files be stored for varying periods of time For example PCI DSS requires logs be stored for one year HIPAA requires that records be retained for at least six years and other requirements mandate longer or variable storage periods depending on the data being logged As such managing the requirements for log file storage for different data on different systems can be an administrative and technological burden Moreover storing and archiving large volumes of log data in a persistent and secure way can be a challenge for many organizations AWS CloudTrail is designed to seamlessly integrate with Amazon S3 and Amazon Glacier allowing customization of S3 buckets and lifecycle rules to suit your storage needs AWS CloudTrail provides you an indefinite expiration period on your logs so you can customize the period of time you store your logs to meet your regulators’ requirements Storage of Log Files Common logging requirements How AWS CloudTrail can help you achieve compliance with requirements Logs are st ored for at least one year For ease of log file storage y ou can configure AWS CloudTrail to aggregate your log files across all regions and/or across multiple accounts to a single S3 bucket AWS CloudTrail provides the ability to customize your log stor age period by configuring your desired expiration period(s) on log files written to your Amazon S3 bucket You control the retention policies for your CloudTrail log files You can retain log files for a time period of your choice or indefinitely By defa ult log files are stored indefinitely You can also move your log file data to Amazon Glacier for additional cost savings associated with cold storage Learn more Store logs for an organization defined period of time Store logs real time for resiliency AWS CloudTrail provides you with log file resiliency by leveraging Amazon S3 a highly durable storage infrastructure Amazon S3’s standard storage is designed for 99999999999% durability and 9999 % availability of objects over a given year Learn more Generate Customized Reporting of Log Data From an operational and security perspective API call logging provides the data and context required to analyze user behavior and understand certain events API calls and IT resource change logs can also be used to demonstrate that only authorized users have performed certain tasks in your environment in alignment with compliance requirements However given the volume and variability associated with logs from different systems it can be challenging in an onpremise environment to gain a clear understanding of the activities users have performed and the changes made to your IT resources AWS CloudTrail produces data you can use to detect abnormal behavior retrieve event activities associated with specific objects or provide a simple audit trail for your account You can evolve your current logging analytics by using the 25+ different fields in the event data that AWS CloudTrail provides to build queries and create customized reports focused on internal investigations external compliance etc AWS CloudTrail enables you to monitor API calls for specific known undesired behavior(s) and raise alarms using your log management or security incident and event management (SIEM) solutions The enriched data provided by AWS CloudTrail can accelerate your investigation time and decrease your incident response time Additionally data provided by AWS CloudTrail may enable you to 
perform a deep er security analysis on API calls to identify suspicious behavior and latent patterns that don’t trigger immediate alarms but which may represent a ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 8 of 16 security issue Finally AWS CloudTrail works with an extensive range of partners with ready torun solutions for security analytics and alerting Learn more about our partner solutions on the AWS CloudTrail website Generate Customized Reporting of Log Data Common logging requirements How AWS CloudTrail can help you achieve compliance with requirements Log individual user access to resources by system accessed and actions taken “Individual user access” includes access by system administrators and system operators ; “Resour ces” includes audit trail logs AWS CloudTrail provides the ability to generate comprehensive and detailed API call reports by logging activities performed by all users who access your logged AWS resources including root IAM users federated users and any users or services performing activities on behalf of users using any access method Learn more Produce logs at an organization defined frequency AWS CloudTrail p rovides the ability to use log anal ysis tools to retrieve log file data at customized frequencie s by creating logs in near realtime and generally deliver ing the log data to your Amazon S3 bucket within 15 minutes of the API call You can use the log files as an input into industry leading log management and analysis solutions to perform analytics Learn more Provide a log of when logging activity was initiated AWS CloudTrail logs all API calls including enabling and disabling AWS Clou dTrail logging This allows you to track when CloudTrail itself was turned on or off Learn more Generate logs synched to a single internal system clock to provide consistent time stamp information AWS CloudTrail p roduces log data from a single internal system clock by generating event time stamps in Coordinated Universal Time (UTC) consistent with the ISO 8601 Basic Time and date format standard Learn more Provide logs that can show if inappropriate or unusual activity has occurred AWS CloudTrail enables you to monitor API calls by recording authorization failures in your AWS account allowing you to track attempted access to restricted resources or other unusual activity Learn more Provide logs with adequate event details AWS CloudTrail delivers API calls with detailed information such as type data and time location source/origin outcome (including exceptions faults and security event information) affected resource (data system etc) and associated user AWS CloudTrail can help you identify the user time of the event IP address of the user request parameters provided by the user re sponse elements returned by the service and optional error code and error message Learn more Conclusion You can run nearly anything on AWS that you would run on onpremise: websites applications databases mobile apps email campaigns distributed data analysis media storage and private networks The services AWS provides are designed to work together so that you can build complete solutions AWS CloudTrail provides a simple solution to log user activity that helps alleviate the burden of running a complex logging system Another benefit of migrating workloads to AWS is the ability to achieve a higher level of security at scale by utilizing the many governanceenabling features offered For the same reasons that delivering infrastructure in the cloud has benefits over 
onpremise delivery cloudbased governance offers a lower cost of entry easier operations and improved agility by providing more visibility security control and central ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 9 of 16 automation AWS CloudTrail is one of the services you can use to achieve a high level of governance of your IT resources using AWS Addition al Resources Below are links in response to commonly asked questions related to logging in AWS: What can I do with AWS? Learn more How can I get started with AWS? Learn more How can I get started with AWS CloudTrail? Learn more Does AWS CloudTrail have a list of FAQs? Learn more How can I achieve compliance while using AWS? Learn more How can I prepare for an audit while using AWS? Learn more This document is provided for informational purposes only It represents AWS’s curr ent product offerings as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 10 of 16 Appendix: Compliance Program Index The information in the whitepaper above was presented by logging requirement domains For your reference the logging requirements by common compliance frameworks are listed in the table below: AWS Compliance Program Compliance Requirement Payment Card Industry (PCI) Data Secur ity Standard (DSS) Level 1 AWS is Level 1 compliant under the PCI DSS You can run applications on our PCIcompliant technology infrastructure for storing processing and transmitting credit card information in the cloud Learn more PCI 52: Ensure that all anti virus mechanisms are current actively running and generating audit logs PCI 101: Establish a process for linking all access to system components (especially access done with adm inistrative privileges such as root) to each individual user PCI 102: Implement automated audit trails for all system components to reconstruct the following events: 1021: All individual accesses to cardholder data 1022: All actions taken by any in dividual with root or administrative privileges 1023: Access to all audit trails 1024: Invalid logical access attempts 1025: Use of identification and authentication mechanisms 1026: Initialization of the audit logs 1027: Creation and deletion of system level objects PCI 103: Record at least the following audit trail entries for all system components for each event: 1031: User identification 1032: Type of event 1033: Date and time 1034: Success or failure indication 1035: Origination of the event 1036: Identity or name of affected data system component or resource PCI 1042: Time data is protected PCI 105: Secure audit trails so they cannot be altered PCI 1051: Limit viewing of audit trails to those with a job related need PCI 1052: Protect audit trail files from unauthorized modifications PCI 1053: Promptly back up audit trail files to a centralized log server or media that is difficult 
to alter.

AWS Compliance Program: Payment Card Industry (PCI) Data Security Standard (DSS) Level 1
AWS is Level 1 compliant under the PCI DSS. You can run applications on our PCI-compliant technology infrastructure for storing, processing, and transmitting credit card information in the cloud. Learn more.

Compliance Requirements:
• PCI 10.5.4: Write logs for external-facing technologies onto a log server on the internal LAN.
• PCI 10.5.5: Use file integrity monitoring or change detection software on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert).
• PCI 10.6: Review logs for all system components at least daily. Log reviews must include those servers that perform security functions like intrusion detection system (IDS) and authentication, authorization, and accounting protocol (AAA) servers (for example, RADIUS).
• PCI 10.7: Retain audit trail history for at least one year, with a minimum of three months immediately available for analysis (for example, online, archived, or restorable from backup).
• PCI 11.5: Deploy file integrity monitoring tools to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly.
• PCI 12.2: Develop daily operational security procedures that are consistent with requirements in this specification (for example, user account maintenance procedures and log review procedures).
• PCI A.1.2.d: Restrict each entity's access and privileges to its own cardholder data environment only.
• PCI A.1.3: Ensure logging and audit trails are enabled and unique to each entity's cardholder data environment and consistent with PCI DSS Requirement 10.
• PCI 11.4: Use intrusion detection systems and/or intrusion prevention systems to monitor all traffic at the perimeter of the cardholder data environment as well as at critical points inside of the cardholder data environment, and alert personnel to suspected compromises. Keep all intrusion detection and prevention engines, baselines, and signatures up to date.

AWS Compliance Program: Service Organization Controls 2 (SOC 2)
The SOC 2 report is an attestation report that expands the evaluation of controls to the criteria set forth by the American Institute of Certified Public Accountants (AICPA) Trust Services Principles. These principles define leading practice controls relevant to security, availability, processing integrity, confidentiality, and privacy applicable to service organizations such as AWS. Learn more.

Compliance Requirements:
• SOC 2 Security 3.2.g: Procedures exist to restrict logical access to the defined system including, but not limited to, the following matters: Restriction of access to system configurations, superuser functionality, master passwords, powerful utilities, and security devices (for example, firewalls).
• SOC 2 Security 3.3: Procedures exist to restrict physical access to the defined system including, but not limited to, facilities, backup media, and other system components such as firewalls, routers, and servers.
• SOC 2 Security 3.7: Procedures exist to identify, report, and act upon system security breaches and other incidents.
• SOC 2 Availability 3.5.f: Procedures exist to restrict logical access to the defined system including, but not limited to, the following matters: Restriction of access to system configurations, superuser functionality, master passwords, powerful utilities, and security devices (for example, firewalls).
• SOC 2 Availability 3.6: Procedures exist to restrict physical access to the defined system including, but not limited to, facilities, backup media, and other system components such as firewalls, routers, and servers.
• SOC 2 Availability 3.10: Procedures exist to identify, report, and act upon system availability issues and related security breaches and other incidents.
• SOC 2 Confidentiality 3.3: The system procedures related to confidentiality of data processing are consistent with the documented confidentiality policies.
• SOC 2 Confidentiality 3.8.1: Procedures exist to restrict logical access to the system and the confidential information resources maintained in the system including, but not limited to, the following matters: Restriction of access to system configurations, superuser functionality, master passwords, powerful utilities, and security devices (for example, firewalls).
• SOC 2 Confidentiality 3.13: Procedures exist to identify, report, and act upon system confidentiality and security breaches and other incidents.
• SOC 2 Confidentiality 4.2: There is a process to identify and address potential impairments to the entity's ongoing ability to achieve its objectives in accordance with its system confidentiality and related security policies.
• SOC 2 Integrity 3.6.g: Procedures exist to restrict logical access to the defined system including, but not limited to, the following matters: Restriction of access to system configurations, superuser functionality, master passwords, powerful utilities, and security devices (for example, firewalls).
• SOC 2 Integrity 4.1: System processing integrity and security performance are periodically reviewed and compared with the defined system processing integrity and related security policies.
• SOC 2 Integrity 4.2: There is a process to identify and address potential impairments to the entity's ongoing ability to achieve its objectives in accordance with its defined system processing integrity and related security policies.

AWS Compliance Program: International Organization for Standardization (ISO) 27001
ISO 27001 is a widely adopted global security standard that outlines the requirements for information security management systems. It provides a systematic approach to managing company and customer information that's based on periodic risk assessments. Learn more.

Compliance Requirements:
Due to copyright laws, AWS cannot provide the requirement descriptions for ISO 27001. You may purchase a copy of the ISO 27001 standard online from various sources, including ISO.org.

AWS Compliance Program: Federal Risk and Authorization Management Program (FedRAMP)
FedRAMP is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services up to the Moderate level. Learn more.

Compliance Requirements:
• FedRAMP NIST 800-53 Rev 3 AU-2: The organization: a. Determines, based on a risk assessment and mission/business needs, that the information system must be capable of auditing the following events: [Assignment: organization-defined list of auditable events]; b. Coordinates the security audit function with other organizational entities requiring audit-related information to enhance mutual support and to help guide the selection of auditable events; c. Provides a rationale for why the list of auditable events are deemed to be adequate to support after-the-fact investigations of security incidents; and d. Determines, based on current threat information and ongoing assessment of risk, that the following events are to be audited within the information system: [Assignment: organization-defined subset of the auditable events defined in AU-2 a. to be audited along with the frequency of (or situation requiring) auditing for each identified event].
• FedRAMP NIST 800-53 Rev 4 AU-2: The organization: a. Determines that the information system must be capable of auditing the following events: [Assignment: organization-defined auditable events]; b. Coordinates the security audit function with other organizational entities requiring audit-related information to enhance mutual support and to help guide the selection of auditable events; c. Provides a rationale for why the auditable events are deemed to be adequate to support after-the-fact investigations of security incidents; and d. Determines that the following events are to be audited within the information system: [Assignment: organization-defined subset of the auditable events defined in AU-2 a. to be audited along with the frequency of (or situation requiring) auditing for each identified event].
• FedRAMP NIST 800-53 Rev 3 AU-3: The information system produces audit records that contain sufficient information to, at a minimum, establish what type of event occurred, when (date and time) the event occurred, where the event occurred, the source of the event, the outcome (success or failure) of the event, and the identity of any user/subject associated with the event.
• FedRAMP NIST 800-53 Rev 4 AU-3: The information system produces audit records containing information that, at a minimum, establishes what type of event occurred, when the event occurred, where the event occurred, the source of the event, the outcome of the event, and the identity of any user or subject associated with the event.
• FedRAMP NIST 800-53 Rev 3 AU-4: The organization allocates audit record storage capacity and configures auditing to reduce the likelihood of such capacity being exceeded.
• FedRAMP NIST 800-53 Rev 4 AU-4: The organization allocates audit record storage capacity in accordance with [Assignment: organization-defined audit record storage requirements].
• FedRAMP NIST 800-53 Rev 3 AU-5: The information system: a. Alerts designated organizational officials in the event of an audit processing failure; and b. Takes the following additional actions: [Assignment: organization-defined actions to be taken (e.g., shut down information system, overwrite oldest audit records, stop generating audit records)].
• FedRAMP NIST 800-53 Rev 4 AU-5: The information system: a. Alerts [Assignment: organization-defined personnel] in the event of an audit processing failure; and b. Takes the following additional actions: [Assignment: organization-defined actions to be taken (e.g., shut down information system, overwrite oldest audit records, stop generating audit records)].
• FedRAMP NIST 800-53 Rev 3 AU-6: The organization: a. Reviews and analyzes information system audit records [Assignment: organization-defined frequency] for indications of inappropriate or unusual activity, and reports findings to designated organizational officials; and b. Adjusts the level of audit review, analysis, and reporting within the information system when there is a change in risk to organizational operations, organizational assets, individuals, other organizations, or the Nation based on law enforcement information, intelligence information, or other credible sources of information.
• FedRAMP NIST 800-53 Rev 4 AU-6: The organization: a. Reviews and analyzes information system audit records [Assignment: organization-defined frequency] for indications of [Assignment: organization-defined inappropriate or unusual activity]; and b. Reports findings to [Assignment: organization-defined personnel or roles].
• FedRAMP NIST 800-53 Rev 3 AU-8: The information system uses internal system clocks to generate time stamps for audit records.
• FedRAMP NIST 800-53 Rev 4 AU-8: The information system: a. Uses internal system clocks to generate time stamps for audit records; and b. Generates time in the time stamps that can be mapped to Coordinated Universal Time (UTC) or Greenwich Mean Time (GMT) and meets [Assignment: organization-defined granularity of time measurement].
• FedRAMP NIST 800-53 Rev 3 AU-9: The information system protects audit information and audit tools from unauthorized access, modification, and deletion.
• FedRAMP NIST 800-53 Rev 4 AU-9: The information system protects audit information and audit tools from unauthorized access, modification, and deletion.
• FedRAMP NIST 800-53 Rev 3 AU-10: The information system protects against an individual falsely denying having performed a particular action.
• FedRAMP NIST 800-53 Rev 4 AU-10: The information system protects against an individual (or process acting on behalf of an individual) falsely denying having performed [Assignment: organization-defined actions to be covered by non-repudiation].
• FedRAMP NIST 800-53 Rev 3 AU-11: The organization retains audit records for [Assignment: organization-defined time period consistent with records retention policy] to provide support for after-the-fact investigations of security incidents and to meet regulatory and organizational information retention requirements.
• FedRAMP NIST 800-53 Rev 4 AU-11: The organization retains audit records for [Assignment: organization-defined time period consistent with records retention policy] to provide support for after-the-fact investigations of security incidents and to meet regulatory and organizational information retention requirements.
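The requirements above describe what must be achieved rather than how. As one hedged illustration only (this sketch is not part of the original whitepaper, and the trail name and log group name are placeholders), the following AWS SDK for Python (boto3) snippet shows how an account owner might enable CloudTrail log file integrity validation and set a CloudWatch Logs retention period to support retention and integrity controls of the kind listed above (for example, PCI DSS 10.5.5 and 10.7, or NIST 800-53 AU-9 and AU-11):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")
logs = boto3.client("logs")

# Turn on log file integrity validation so that delivered CloudTrail log files
# can later be cryptographically verified (change-detection style control).
cloudtrail.update_trail(
    Name="security-audit-trail",          # assumed trail name
    EnableLogFileValidation=True,
)

# Keep more than one year of audit history available for analysis
# (retention-style control; 400 days is an example value).
logs.put_retention_policy(
    logGroupName="/aws/cloudtrail/security-audit-trail",  # assumed log group
    retentionInDays=400,
)
```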
|
General
|
consultant
|
Best Practices
|
Security_of_AWS_CloudHSM_Backups
|
Security of AWS CloudHSM Backups
Fully Managed Hardware Security Modules (HSMs) in the AWS Cloud
First published December 2017
Updated March 24, 2021

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Abstract
Introduction
AWS CloudHSM: Managed by AWS, controlled by you
High availability
CloudHSM cluster backups
Creating a backup
Archiving a backup
Restoring a backup
Security of backups
Key hierarchy
Restoration of backups
Conclusion
Contributors
Further reading
Document revisions

Abstract
AWS CloudHSM clusters provide high availability and redundancy by distributing cryptographic operations across all hardware security modules (HSMs) in the cluster. Backup and restore is the mechanism by which a new HSM in a cluster is synchronized. This whitepaper provides details on the cryptographic mechanisms supporting backup and restore functionality and the security mechanisms protecting the Amazon Web Services (AWS) managed backups. This whitepaper also provides in-depth information on how backups are protected in all three phases of the CloudHSM backup lifecycle process: Creation, Archive, and Restore. For the purposes of this whitepaper, it is assumed that you have a basic understanding of AWS CloudHSM and cluster architecture.

Introduction
AWS offers two options for securing cryptographic keys in the AWS Cloud: AWS Key Management Service (AWS KMS) and AWS CloudHSM. AWS KMS is a managed service that uses hardware security modules (HSMs) to protect the security of your encryption keys. AWS CloudHSM delivers fully managed HSMs in the AWS Cloud, which allows you to add secure, validated key storage and high-performance crypto acceleration to your AWS applications. CloudHSM offers you the option of single-tenant access and control over your HSMs. CloudHSM is based on Federal Information Processing Standards (FIPS) 140-2 Level 3 validated hardware.
CloudHSM delivers all the benefits of traditional HSMs, including secure generation, storage, and management of cryptographic keys used for data encryption that are controlled and accessible only by you. As a managed service, it also automates time-consuming administrative tasks such as hardware provisioning, software patching, high availability, and backups. HSM capacity can be scaled quickly by adding and removing HSMs from your cluster on demand. The backup and restore functionality of CloudHSM is what enables scalability, reliability, and high availability in CloudHSM. A key aspect of the backup and restore feature is a secure backup protocol that CloudHSM uses to back up your cluster. This paper takes an in-depth look at the security mechanisms in place around this feature.

AWS CloudHSM: Managed by AWS, controlled by you
AWS CloudHSM provides HSMs in a cluster. A cluster is a collection of individual HSMs that AWS CloudHSM keeps in sync. You can think of a cluster as one logical HSM. When you perform a key generation task or operation on one HSM in a cluster, the other HSMs in that cluster are automatically kept up to date.
Each HSM in a cluster is a single-tenant HSM under your control. At the hardware level, each HSM includes hardware-enforced isolation of crypto operations and key storage. Each HSM runs on dedicated cryptographic cores. Each HSM appears as a network resource in your virtual private cloud (VPC). AWS manages the HSM on your behalf, performing functions such as health checks, backups, and synchronization of HSMs within a cluster. However, you alone control the user accounts, passwords, login policies, key rotation procedures, and all aspects of configuring and using the HSMs. The implication of this control is that your cryptographic data is secure from external compromise. This is important to financial applications subject to PCI regulations, healthcare applications subject to HIPAA regulations, and streaming video solutions subject to contractual DRM requirements.
You interact with the HSMs in a cluster via the AWS CloudHSM client. Communication occurs over an end-to-end encrypted channel. AWS does not have visibility into your communication with your HSM, which occurs within this end-to-end encrypted channel.

High availability
Historically, deploying and maintaining traditional HSMs in a high availability configuration has been a manual process that is cumbersome and expensive. CloudHSM makes scalability and high availability simple without compromising security.
When you use CloudHSM, you begin by creating a cluster in a particular AWS Region. A cluster can contain multiple individual HSMs. For idle workloads, you can delete all HSMs and simply retain the empty cluster. For production workloads, you should have at least two HSMs spread across multiple Availability Zones. CloudHSM automatically synchronizes and load balances the HSMs within a cluster. The CloudHSM client load-balances cryptographic operations across all HSMs in the cluster based on the capacity of each HSM for additional processing. If a cluster requires additional throughput, you can expand your cluster by adding more HSMs through a single API call or a click in the CloudHSM console.
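As an illustration only (this sketch is not part of the original whitepaper; the subnet IDs, Availability Zones, and HSM type are assumptions), a cluster might be created and then expanded programmatically with the AWS SDK for Python (boto3):

```python
import boto3

cloudhsm = boto3.client("cloudhsmv2")

# Create a cluster that spans subnets in two Availability Zones (assumed IDs).
cluster = cloudhsm.create_cluster(
    HsmType="hsm1.medium",
    SubnetIds=["subnet-0aaa1111example", "subnet-0bbb2222example"],
)
cluster_id = cluster["Cluster"]["ClusterId"]

# Expand the cluster: each call provisions one additional HSM, which CloudHSM
# brings in sync with the rest of the cluster via backup and restore.
for az in ("us-east-1a", "us-east-1b"):
    cloudhsm.create_hsm(ClusterId=cluster_id, AvailabilityZone=az)
```

The cluster still has to be initialized and activated by you before use; the sketch only shows the provisioning calls the surrounding text refers to.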
When you expand a cluster, CloudHSM automatically provisions a new HSM as a clone of the other HSMs in the cluster. This is done by taking a backup of an existing HSM and restoring it to the newly added HSM. When you delete an HSM from a cluster, a backup is automatically taken. This way, when you create a new HSM later, you can pick up where you left off. Should an HSM fail for any reason, the service will automatically replace it with a new, healthy HSM. This HSM is restored from a backup of another HSM in the cluster, if available. Otherwise, the new HSM is restored from the last available backup taken for the cluster.
When you don't need to use a cluster any more, you can delete all its HSMs as well as the cluster. Later, when you need to use the HSMs again, you can create a new cluster from the backup, effectively restoring your previous HSM. In the next section, we will take a deeper look at the contents of the backup and the security mechanisms used to protect it.

CloudHSM cluster backups
Backups are initiated, archived, and restored by CloudHSM. A backup is a complete, encrypted snapshot of the HSM. Each AWS-managed backup contains the entire contents of the HSM, including keys, certificates, users, policies, quorum settings, and configuration options. This includes:
• Certificates on the HSM, including the cluster certificate
• All HSM users (COs, CUs, and AU)
• All key material on the HSM
• HSM configurations and policies
Backups are stored in Amazon Simple Storage Service (Amazon S3) within the same Region as the cluster. You can view backups available for your cluster from the CloudHSM console. Backups can only be restored to a genuine HSM running in the AWS Cloud. The restored HSM retains all the configurations and policies you put in place on the original HSM.

Creating a backup
CloudHSM triggers backups in the following scenarios:
• CloudHSM automatically backs up your HSM clusters periodically.
• When adding an HSM to a cluster, CloudHSM takes a backup from an active HSM in that cluster and restores it to the newly provisioned HSM.
• When deleting an HSM from a cluster, CloudHSM takes a backup of the HSM before deleting it.
A backup is a unified, encrypted object combining certificates, users, keys, and policies. It is created and encrypted as a single, tightly bound object. The individual components are not separable from each other. The key used to encrypt the backup is derived using a combination of persistent and ephemeral secret keys. Backups are encrypted and decrypted within your HSM only and can only be restored to a genuine HSM running within the AWS Cloud. This is discussed in further detail in the Security of Backups: Restoration of Backups section of this document. CloudHSM uses FIPS 140-2 Level 3 validated HSMs. Your cryptographic material is never accessible in the clear outside the hardware.

Archiving a backup
CloudHSM stores the cluster backups in a service-controlled Amazon S3 location in the same AWS Region as your cluster. The following figure illustrates an encrypted backup of an HSM cluster in a service-controlled Amazon S3 bucket.
Figure: Encrypted backup of an HSM cluster in a service-controlled S3 bucket

Restoring a backup
Backups are used in two scenarios:
• When you provision a new cluster using an existing backup
• When a second (or subsequent) HSM is added to a cluster, or when CloudHSM automatically replaces an unhealthy HSM
In both scenarios, the backup is restored to a newly created HSM. During restoration, the backup is decrypted within an HSM using the process described in the next section. The decryption relies on a set of keys available only within an authentic hardware instance from the original manufacturer, installed in the AWS Cloud. Therefore, CloudHSM can restore backups onto only authentic HSMs within the AWS Cloud.
Recall that each backup contains all users, keys, access policies, and configuration from the original HSM. Therefore, the restored HSM contains the same protections and access controls as the original and is equivalently secure to the original. When your application or cryptographic officer seeks to use the HSM, you can verify that the HSM is a clone of the one you originally established a trust relationship with. You do so by confirming that the cluster certificate is signed using the same key you used when initially claiming the HSM. This ensures that you are talking to your HSM.
Note that while CloudHSM manages backups, the service does not have any access to the data, cryptographic material, user information, and the keys encapsulated within the backup. Specifically, AWS has no way to recover your keys if you lose your access credentials to log in to the HSM.
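To make the restore path concrete, the following boto3 sketch (an illustration, not part of the original whitepaper; the backup ID and subnet ID are placeholders) lists the backups available in a Region and creates a new cluster whose first HSM is initialized from a chosen backup:

```python
import boto3

cloudhsm = boto3.client("cloudhsmv2")

# List backups in this Region; each entry names the cluster it came from,
# its state, and its creation time.
for backup in cloudhsm.describe_backups()["Backups"]:
    print(backup["BackupId"], backup["ClusterId"], backup["BackupState"])

# Restore: create a new cluster from an existing backup.
restored = cloudhsm.create_cluster(
    HsmType="hsm1.medium",
    SubnetIds=["subnet-0aaa1111example"],        # placeholder subnet
    SourceBackupId="backup-1234567890abcdef",    # placeholder backup ID
)
print("New cluster:", restored["Cluster"]["ClusterId"])
```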
Security of backups
The CloudHSM backup mechanism has been validated under FIPS 140-2 Level 3. A backup taken by an HSM configured in FIPS mode cannot be restored to an HSM that is not also in FIPS mode. Operation in FIPS mode is a required configuration for CloudHSM. An HSM in FIPS mode is running production firmware provided by the manufacturer and signed with a FIPS production key. This ensures other parties cannot forge the firmware.
Furthermore, each backup contains a complete copy of everything in the HSM. Specifically, each AWS-managed backup contains the entire contents of the HSM, including keys, claiming certificates, users, policies, quorum settings, and configuration options. Accordingly, you can demonstrate (for example, during a compliance audit) that each HSM with a restored backup is protected at exactly the same level and with the same policies and controls as when the backup was first created.

Key hierarchy
As discussed earlier, a backup is encrypted within the HSM before it is provided to CloudHSM for archival. The backup is encrypted using a backup encryption key, described in the following section.
Figure: The backup of the HSM is encrypted using a backup encryption key (BEK)

Manufacturer's key backup key (MKBK)
The manufacturer's key backup key (MKBK) exists in the HSM hardware provided by the manufacturer. This key is common to all HSMs provided by the manufacturer to AWS. The MKBK cannot be accessed or used by any user or for any purpose other than the generation of the backup encryption key. Specifically, AWS does not have access to or visibility into the MKBK.

AWS key backup key (AKBK)
The AWS key backup key (AKBK) is securely installed by the CloudHSM service when the hardware is placed into operation within the CloudHSM fleet. This key is unique to hardware installed by AWS within our CloudHSM infrastructure. The AKBK is generated securely within an offline, FIPS-compliant hardware security module and loaded under two-person control into newly commissioned CloudHSM hardware.

Backup encryption key (BEK)
The backup of the HSM is encrypted using a backup encryption key (BEK). The BEK is an AES-256 key that is generated within the HSM when a backup is requested. The HSM uses the BEK to encrypt its backup. The encrypted backup includes a wrapped copy of the BEK. The BEK is wrapped with an AES 256-bit wrapping key using a FIPS-approved AES key wrapping method. This method complies with NIST Special Publication 800-38F. The wrapping key is derived from the MKBK and AKBK via a key derivation function (KDF). This same wrapping key must be derived again to recover the BEK prior to decrypting the backup. This implies that both the MKBK and AKBK are required to decrypt a customer backup. Put another way, the BEK cannot be discovered or derived using a secret managed by AWS or by the manufacturer alone. Once encrypted, the backup is ready to be archived. Recall that each backup is stored on Amazon S3.
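The key hierarchy can be pictured with a short, purely conceptual sketch. This is not AWS's implementation and is not taken from the whitepaper; the KDF choice, labels, and placeholder secrets are illustrative assumptions. The point it demonstrates is that a wrapping key is derived from two independent secrets, and that wrapping key protects the randomly generated BEK that actually encrypts the backup:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.keywrap import aes_key_wrap
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

mkbk = os.urandom(32)   # stands in for the manufacturer's secret (MKBK)
akbk = os.urandom(32)   # stands in for the AWS-installed secret (AKBK)

# Derive a wrapping key from both secrets; neither secret alone is sufficient.
wrapping_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"backup-wrapping-key"
).derive(mkbk + akbk)

bek = AESGCM.generate_key(bit_length=256)   # per-backup encryption key
nonce = os.urandom(12)
ciphertext = AESGCM(bek).encrypt(nonce, b"<backup contents>", None)
wrapped_bek = aes_key_wrap(wrapping_key, bek)   # NIST SP 800-38F style key wrap

# The archived object would carry ciphertext plus wrapped_bek; only a party able
# to re-derive wrapping_key (that is, one holding both secrets) can recover the BEK.
```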
Restoration of backups
CloudHSM backups can only be decrypted by an HSM that is able to derive the same wrapping key used to secure the BEK when the backup was created. Recall that this wrapping key is derived from the Manufacturer's Key Backup Key (MKBK) and the AWS Key Backup Key (AKBK). The MKBK is only embedded in genuine hardware by the manufacturer, and the AKBK is only installed on genuine hardware within the AWS fleet. Therefore, the BEK cannot be unwrapped outside of an AWS-managed HSM. This, in turn, implies that the backup cannot be decrypted outside of an AWS-managed HSM.

Conclusion
AWS CloudHSM provides a secure, FIPS-validated HSM backup and restore mechanism that enables high-availability and failure management capabilities without sacrificing security or privacy. You retain complete control over your HSM and the data within. Backups are encrypted strongly at creation, stored securely, and never decrypted outside an HSM. Backups can only be restored to genuine hardware in the AWS Cloud running firmware signed with a FIPS production key. As backups include user accounts and security policy configurations in addition to cryptographic material, restored HSMs retain all the security policies and controls from the original HSM. With CloudHSM, you can demonstrate (for example, during a compliance audit) that an HSM restored from backup is protected at exactly the same level and with the same policies and controls as the HSM from which the backup was originally created.

Contributors
The following individuals and organizations contributed to this document:
• Ben Grubin, General Manager, AWS Cryptography
• Balaji Iyer, Senior Professional Services Consultant, AWS
• Avni Rambhia, Senior Product Manager, AWS Cryptography

Further reading
• CloudHSM documentation: https://aws.amazon.com/documentation/cloudhsm/
• CloudHSM product details: https://aws.amazon.com/cloudhsm/details/
• Blog, "Cost Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads": https://aws.amazon.com/blogs/aws/aws-cloudhsm-update-cost-effective-hardware-key-management/
• Webinar, "Secure Scalable Key Storage in AWS": https://www.youtube.com/watch?v=hEVks207ALM
• Verify the Identity and Authenticity of Your Cluster's HSM: http://docs.aws.amazon.com/cloudhsm/latest/userguide/verify-hsm-identity.html
• AWS CloudHSM Client Tools and Software Libraries: http://docs.aws.amazon.com/cloudhsm/latest/userguide/client-tools-and-libraries.html#client

Document revisions
March 24, 2021: Reviewed for technical accuracy
December 2017: First publication
|
General
|
consultant
|
Best Practices
|
Security_Overview_of_AWS_Lambda
|
Security Overview of AWS Lambda
An In-Depth Look at AWS Lambda Security
January 2021

This paper has been archived. For the latest version of this document, see: https://docs.aws.amazon.com/whitepapers/latest/security-overview-aws-lambda/welcome.html

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents AWS's current product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS's products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. AWS's responsibilities and liabilities to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Abstract
Introduction
About AWS Lambda
Benefits of Lambda
Cost for Running Lambda-Based Applications
The Shared Responsibility Model
Lambda Functions and Layers
Lambda Invoke Modes
Lambda Executions
Lambda Execution Environments
Execution Role
Lambda MicroVMs and Workers
Lambda Isolation Technologies
Storage and State
Runtime Maintenance in Lambda
Monitoring and Auditing Lambda Functions
Amazon CloudWatch
AWS CloudTrail
AWS X-Ray
AWS Config
Architecting and Operating Lambda Functions
Lambda and Compliance
Lambda Event Sources
Conclusion
Contributors
Further Reading
Document Revisions

Abstract
This whitepaper presents a deep dive into the AWS Lambda service through a security lens. It provides a well-rounded picture of the service, which is useful for new adopters, and deepens understanding of Lambda for current users. The intended audience for this whitepaper is Chief Information Security Officers (CISOs), information security groups, security engineers, enterprise architects, compliance teams, and any others interested in understanding the underpinnings of AWS Lambda.

Introduction
Today, more workloads use AWS Lambda to achieve scalability, performance, and cost efficiency without managing the underlying computing. These workloads scale to thousands of concurrent requests per second. Lambda is used by hundreds of thousands of Amazon Web Services (AWS) customers to serve trillions of requests every month.
Lambda is suitable for mission-critical applications in many industries. A broad variety of customers, from media and entertainment to financial services and other regulated industries, take advantage of Lambda. These customers decrease time to market, optimize costs, and improve agility by focusing on what they do best: running their business.
The managed runtime environment model enables Lambda to manage much of the implementation details of running serverless workloads. This model further reduces the attack surface while making cloud security simpler. This whitepaper presents the underpinnings of that model, along with best practices, to developers, security analysts, security and compliance teams, and other stakeholders.

About AWS Lambda
Lambda is an event-driven, serverless compute service that extends other AWS services with custom logic or creates backend services that operate with scale, performance, and security in mind. Lambda can be configured to automatically run code in response to multiple events, such as HTTP requests through Amazon API Gateway, modifications to objects in Amazon Simple Storage Service (Amazon S3) buckets, table updates in Amazon DynamoDB, and state transitions in AWS Step Functions.
Lambda runs code on a highly available compute infrastructure and performs all the administration of the underlying platform, including server and operating system maintenance, capacity provisioning and automatic scaling, patching, code monitoring, and logging. With Lambda, you can just upload your code and configure when to invoke it; Lambda takes care of everything else required to run your code.
Lambda integrates with many other AWS services and enables you to create serverless applications or backend services, ranging from periodically triggered simple automation tasks to full-fledged microservices applications. Lambda can be configured to access resources within your Amazon Virtual Private Cloud (Amazon VPC) and, by extension, your on-premises resources. Lambda integrates with AWS Identity and Access Management (IAM), which you can leverage to protect your data and configure fine-grained access controls using a variety of access management strategies, while maintaining a high level of security and auditing to help you meet your compliance needs.

Benefits of Lambda
Customers who want to unleash the creativity and speed of their development organizations without compromising their IT team's ability to provide a scalable, cost-effective, and manageable infrastructure find that Lambda lets them trade operational complexity for agility and better pricing without compromising on scale or reliability. Lambda offers many benefits, including the following:
No Servers to Manage: Lambda runs your code on highly available, fault-tolerant infrastructure spread across multiple Availability Zones (AZs) in a single Region, seamlessly deploying code and providing all the administration, maintenance, and patches of the infrastructure. Lambda also provides built-in logging and monitoring, including integration with Amazon CloudWatch, CloudWatch Logs, and AWS CloudTrail.
Continuous Scaling: Lambda precisely manages scaling of your functions (or application) by running event-triggered code in parallel and processing each event individually.
Millisecond Metering: With Lambda, you are charged for every 1 millisecond (ms) your code executes and the number of times your code is triggered. You pay for consistent throughput or execution duration instead of by server unit.
Increases Innovation: Lambda frees up your programming resources by taking over the infrastructure management, allowing you to focus on innovation and development of business logic.
Modernize your Applications: Lambda enables you to use functions with pre-trained machine learning models to inject artificial intelligence into applications easily. A single application programming interface (API) request can classify images, analyze videos, convert speech to text, perform natural language processing, and more.
Rich Ecosystem: Lambda supports developers through AWS Serverless Application Repository for discovering, deploying, and publishing serverless applications, AWS Serverless Application Model for building serverless applications, and integrations with various integrated development environments (IDEs) like AWS Cloud9, AWS Toolkit for Visual Studio, AWS Tools for Visual Studio Team Services, and several others. Lambda is integrated with additional AWS services to provide you a rich ecosystem for building serverless applications.

Cost for Running Lambda-Based Applications
Lambda offers a granular pay-as-you-go pricing model. With this model, you are charged based on the number of function invocations and their duration (the time it takes for the code to run). In addition to this flexible pricing model, Lambda also offers 1 million perpetually free requests per month, which enables many customers to automate their process without any costs.

The Shared Responsibility Model
At AWS, security and compliance is a shared responsibility between AWS and the customer. This shared responsibility model can help relieve your operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. For Lambda, AWS manages the underlying infrastructure and application platform, the operating system, and the execution environment. You are responsible for the security of your code and identity and access management (IAM) to the Lambda service and within your function. Figure 1 shows the shared responsibility model as it applies to the common and distinct components of Lambda. AWS responsibilities appear in orange and customer responsibilities appear in blue.
Figure 1: Shared Responsibility Model for AWS Lambda

Lambda Functions and Layers
With Lambda, you can run code virtually with zero administration of the underlying infrastructure. You are responsible only for the code that you provide Lambda, and the configuration of how Lambda runs that code on your behalf. Today, Lambda supports two types of code resources: Functions and Layers. A function is a resource which can be invoked to run your code in Lambda. Functions can include a common or shared resource called Layers. Layers can be used to share common code or data across different functions or AWS accounts. You are responsible for the management of all the code contained within your functions or layers.
When Lambda receives the function or layer code from a customer, Lambda protects access to it by encrypting it at rest using AWS Key Management Service (AWS KMS) and in transit by using TLS 1.2+. You can manage access to your functions and layers through AWS IAM policies or through resource-based permissions. For a full list of supported IAM features on Lambda, see AWS Services that work with IAM. You can also control the entire lifecycle of your functions and layers through Lambda's control plane APIs. For example, you can choose to delete your function by calling DeleteFunction, or revoke permissions from another account by calling RemovePermission.
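As a hedged illustration of those control plane calls (this example is not from the whitepaper; the function name, statement ID, and account ID are placeholders), resource-based permissions and the function lifecycle can be managed with the AWS SDK for Python (boto3):

```python
import boto3

lam = boto3.client("lambda")

# Grant another AWS account (placeholder ID) permission to invoke a function.
lam.add_permission(
    FunctionName="my-function",
    StatementId="cross-account-invoke",
    Action="lambda:InvokeFunction",
    Principal="123456789012",
)

# Later, revoke that grant and, if the function is retired, delete it.
lam.remove_permission(FunctionName="my-function", StatementId="cross-account-invoke")
lam.delete_function(FunctionName="my-function")
```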
Lambda Invoke Modes
The Invoke API can be called in two modes: event mode and request-response mode.
• Event mode queues the payload for an asynchronous invocation.
• Request-response mode synchronously invokes the function with the provided payload and returns a response immediately.
In both cases, the function execution is always performed in a Lambda execution environment, but the payload takes different paths. For more information, see Lambda Execution Environments in this document.
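A minimal sketch of calling the Invoke API in each mode with boto3 (illustrative only; the function name and payload are assumptions):

```python
import json
import boto3

lam = boto3.client("lambda")
payload = json.dumps({"order_id": "12345"}).encode()   # assumed test payload

# Request-response (synchronous): the call blocks and returns the function result.
sync = lam.invoke(
    FunctionName="my-function",
    InvocationType="RequestResponse",
    Payload=payload,
)
print(sync["StatusCode"], sync["Payload"].read())

# Event (asynchronous): Lambda queues the payload and acknowledges immediately.
async_resp = lam.invoke(
    FunctionName="my-function",
    InvocationType="Event",
    Payload=payload,
)
print(async_resp["StatusCode"])   # 202 indicates the event was accepted
```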
You can also use other AWS services that perform invocations on your behalf. Which invoke mode is used depends on which AWS service you are using and how it is configured. For additional information on how other AWS services integrate with Lambda, see Using AWS Lambda with other services.
When Lambda receives a request-response invoke, it is passed to the invoke service directly. If the invoke service is unavailable, callers may temporarily queue the payload client-side to retry the invocation a set number of times. If the invoke service receives the payload, the service then attempts to identify an available execution environment for the request and passes the payload to that execution environment to complete the invocation. If no existing or appropriate execution environments exist, one will be dynamically created in response to the request. While in transit, invoke payloads sent to the invoke service are secured with TLS 1.2+. Traffic within the Lambda service (from the load balancer down) passes through an isolated internal virtual private cloud (VPC), owned by the Lambda service, within the AWS Region to which the request was sent.
Figure 2: Invocation model for AWS Lambda: request-response
Event invocation mode payloads are always queued for processing before invocation. All payloads are queued for processing in an Amazon Simple Queue Service (Amazon SQS) queue. Queued events are always secured in transit with TLS 1.2+, but they are not currently encrypted at rest. The Amazon SQS queues used by Lambda are managed by the Lambda service and are not visible to you as a customer. Queued events can be stored in a shared queue, but may be migrated or assigned to dedicated queues depending on a number of factors that cannot be directly controlled by customers (for example, rate of invoke, size of events, and so on).
Queued events are retrieved in batches by Lambda's poller fleet. The poller fleet is a group of EC2 instances whose purpose is to process queued event invocations which have not yet been processed. When the poller fleet retrieves a queued event that it needs to process, it does so by passing it to the invoke service, just like a customer would in a request-response mode invoke. If the invocation cannot be performed, the poller fleet will temporarily store the event in memory on the host until it is either able to successfully complete the execution or until the number of run retry attempts have been exceeded. No payload data is ever written to disk on the poller fleet itself. The polling fleet can be tasked across AWS customers, allowing for the shortest time to invocation. For more information about which services may take the event invocation mode, see Using AWS Lambda with other services.
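Because asynchronous events are retried and then dropped once the retry limit is exceeded, you may want to bound retries and capture failed events yourself. The following boto3 sketch is an illustration only, not a configuration described in this whitepaper; the function name and destination queue ARN are placeholders:

```python
import boto3

lam = boto3.client("lambda")

# Limit retries and maximum event age for asynchronous (event mode) invocations,
# and route events that still fail to an SQS queue for later inspection.
lam.put_function_event_invoke_config(
    FunctionName="my-function",
    MaximumRetryAttempts=1,
    MaximumEventAgeInSeconds=3600,
    DestinationConfig={
        "OnFailure": {"Destination": "arn:aws:sqs:us-east-1:123456789012:failed-events"}
    },
)
```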
Lambda Executions
When Lambda executes a function on your behalf, it manages both provisioning and configuring the underlying systems necessary to run your code. This enables your developers to focus on business logic and writing code, not administering and managing underlying systems.
The Lambda service is split into the control plane and the data plane. Each plane serves a distinct purpose in the service. The control plane provides the management APIs (for example, CreateFunction, UpdateFunctionCode, PublishLayerVersion, and so on) and manages integrations with all AWS services. Communications to Lambda's control plane are protected in transit by TLS. All customer data stored within Lambda's control plane is encrypted at rest through the use of AWS KMS, which is designed to protect it from unauthorized disclosure or tampering.
The data plane is Lambda's Invoke API that triggers the invocation of Lambda functions. When a Lambda function is invoked, the data plane allocates an execution environment on an AWS Lambda Worker (or simply Worker, a type of Amazon EC2 instance) to that function version, or chooses an existing execution environment that has already been set up for that function version, which it then uses to complete the invocation. For more information, see the AWS Lambda MicroVMs and Workers section of this document.

Lambda Execution Environments
Each invocation is routed by Lambda's invoke service to an execution environment on a Worker that is able to service the request. Other than through the data plane, customers and other users cannot directly initiate inbound/ingress network communications with an execution environment. This helps to ensure that communications to your execution environment are authenticated and authorized.
Execution environments are reserved for a specific function version and cannot be reused across function versions, functions, or AWS accounts. This means a single function which may have two different versions would result in at least two unique execution environments. Each execution environment may only be used for one concurrent invocation at a time, and they may be reused across multiple invocations of the same function version for performance reasons. Depending on a number of factors (for example, rate of invocation, function configuration, and so on), one or more execution environments may exist for a given function version. With this approach, Lambda is able to provide function version-level isolation for its customers.
Lambda does not currently isolate invokes within a function version's execution environment. What this means is that one invoke may leave a state that may affect the next invoke (for example, files written to /tmp or data in memory). If you want to ensure that one invoke cannot affect another invoke, Lambda recommends that you create additional distinct functions. For example, you could create distinct functions for complex parsing operations, which are more error-prone, and re-use functions which do not perform security-sensitive operations. Lambda does not currently limit the number of functions that customers can create. For more information about limits, see the Lambda quotas page.
Execution environments are continuously monitored and managed by Lambda, and they may be created or destroyed for any number of reasons, including but not limited to:
• A new invoke arrives and no suitable execution environment exists
• An internal runtime or Worker software deployment occurs
• A new provisioned concurrency configuration is published
• The lease time on the execution environment or the Worker is approaching or has exceeded max lifetime
• Other internal workload rebalancing processes
Customers can manage the number of pre-provisioned execution environments that exist for a function version by configuring provisioned concurrency on their function configuration. When configured to do so, Lambda will create, manage, and ensure the configured number of execution environments always exist. This ensures that customers have greater control over start-up performance of their serverless applications at any scale. Other than through a provisioned concurrency configuration, customers cannot deterministically control the number of execution environments that are created or managed by Lambda in response to invocations.
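As a brief illustration (not taken from the whitepaper; the function name, version qualifier, and count are placeholders), provisioned concurrency can be configured and checked with a pair of API calls:

```python
import boto3

lam = boto3.client("lambda")

# Keep ten execution environments pre-provisioned for version 3 of the function.
lam.put_provisioned_concurrency_config(
    FunctionName="my-function",
    Qualifier="3",                        # a published version or alias
    ProvisionedConcurrentExecutions=10,
)

# Check provisioning status until it reports READY.
status = lam.get_provisioned_concurrency_config(
    FunctionName="my-function", Qualifier="3"
)
print(status["Status"], status.get("AvailableProvisionedConcurrentExecutions"))
```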
Execution Role
Each Lambda function must also be configured with an execution role, which is an IAM role that is assumed by the Lambda service when performing control plane and data plane operations related to the function. The Lambda service assumes this role to fetch temporary security credentials, which are then available as environment variables during a function's invocation. For performance reasons, the Lambda service will cache these credentials and may re-use them across different execution environments which use the same execution role.
To ensure adherence to the least privilege principle, Lambda recommends that each function has its own unique role and that it is configured with the minimum set of permissions it requires. The Lambda service may also assume the execution role to perform certain control plane operations, such as those related to creating and configuring elastic network interfaces (ENIs) for VPC functions, sending logs to Amazon CloudWatch, sending traces to AWS X-Ray, or other non-invoke related operations. Customers can always review and audit these use cases by reviewing audit logs in AWS CloudTrail. For more information on this subject, see the AWS Lambda execution role documentation page.
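To make the least-privilege recommendation concrete, here is a hedged boto3 sketch (not part of the whitepaper; the role name, account ID, Region, and policy scope are assumptions) that creates a dedicated execution role limited to writing its own CloudWatch Logs:

```python
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# One role per function, as recommended above.
iam.create_role(
    RoleName="my-function-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Grant only what this function needs: permission to write its own log streams.
iam.put_role_policy(
    RoleName="my-function-role",
    PolicyName="write-own-logs",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/lambda/my-function:*",
        }],
    }),
)
```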
Lambda MicroVMs and Workers
Lambda creates its execution environments on a fleet of EC2 instances called AWS Lambda Workers. Workers are bare metal EC2 Nitro instances which are launched and managed by Lambda in a separate, isolated AWS account which is not visible to customers. Workers have one or more hardware-virtualized Micro Virtual Machines (MVMs) created by Firecracker. Firecracker is an open-source Virtual Machine Monitor (VMM) that uses Linux's Kernel-based Virtual Machine (KVM) to create and manage MVMs. It is purpose-built for creating and managing secure, multi-tenant container and function-based services that provide serverless operational models. For more information about Firecracker's security model, see the Firecracker project website.
As a part of the shared responsibility model, Lambda is responsible for maintaining the security configuration, controls, and patching level of the Workers. The Lambda team uses Amazon Inspector to discover known potential security issues, as well as other custom security issue notification mechanisms and pre-disclosure lists, so that customers don't need to manage the underlying security posture of their execution environment.
Figure 3: Isolation model for AWS Lambda Workers
Workers have a maximum lease lifetime of 14 hours. When a Worker approaches maximum lease time, no further invocations are routed to it, MVMs are gracefully terminated, and the underlying Worker instance is terminated. Lambda continuously monitors and alarms on lifecycle activities of its fleet.
All data plane communications to Workers are encrypted using Advanced Encryption Standard with Galois/Counter Mode (AES-GCM). Other than through data plane operations, customers cannot directly interact with a Worker, as it is hosted in a network-isolated Amazon VPC managed by Lambda in Lambda's service accounts.
When a Worker needs to create a new execution environment, it is given time-limited authorization to access customer function artifacts. These artifacts are specifically optimized for Lambda's execution environment and Workers. Function code which is uploaded using the ZIP format is optimized once, and then is stored in an encrypted format using an AWS managed key and AES-GCM. Functions uploaded to Lambda using the container image format are also optimized. The container image is first downloaded from its original source, optimized into distinct chunks, and then stored as encrypted chunks using an authenticated convergent encryption method which uses a combination of AES-CTR, AES-GCM, and a SHA-256 MAC. The convergent encryption method allows Lambda to securely deduplicate encrypted chunks. All keys required to decrypt customer data are protected using a customer managed KMS Customer Master Key (CMK). CMK usage by the Lambda service is available to customers in AWS CloudTrail logs for tracking and auditing.

Lambda Isolation Technologies
Lambda uses a variety of open-source and proprietary isolation technologies to protect Workers and execution environments. Each execution environment contains a dedicated copy of the following items:
• The code of the particular function version
• Any AWS Lambda Layers selected for your function version
• The chosen function runtime (for example, Java 11, Node.js 12, Python 3.8, and so on) or the function's custom runtime
• A writeable /tmp directory
• A minimal Linux user space based on Amazon Linux 2
Execution environments are isolated from one another using several container-like technologies built into the Linux kernel, along with AWS proprietary isolation technologies. These technologies include:
• cgroups: Used to constrain the function's access to CPU and memory
• namespaces: Each execution environment runs in a dedicated namespace, with unique group process IDs, user IDs, network interfaces, and other resources managed by the Linux kernel
• seccomp-bpf: To limit the system calls (syscalls) that can be used from within the execution environment
• iptables and routing tables: To prevent ingress network communications and to isolate network connections between MVMs
• chroot: Provides scoped access to the underlying filesystem
• Firecracker configuration: Used to rate limit block device and network device throughput
• Firecracker security features: For more information about Firecracker's current security design, please review Firecracker's latest design document
Along with AWS proprietary isolation technologies, these mechanisms provide strong isolation between execution environments.
Storage and State
Execution environments are never reused across different function versions or customers, but a single environment can be reused between invocations of the same function version. This means data and state can persist between invocations. Data and/or state may continue to persist for hours before it is destroyed as a part of normal execution environment lifecycle management. For performance reasons, functions can take advantage of this behavior to improve efficiency by keeping and reusing local caches or long-lived connections between invocations. Inside an execution environment, these multiple invocations are handled by a single process, so any process-wide state (such as a static state in Java) can be available for future invocations to reuse, if the invocation occurs on a reused execution environment.
Each Lambda execution environment also includes a writeable filesystem, available at /tmp. This storage is not accessible or shared across execution environments. As with the process state, files written to /tmp remain for the lifetime of the execution environment. This allows expensive transfer operations, such as downloading machine learning (ML) models, to be amortized across multiple invocations. Functions that don't want to persist data between invocations should either not write to /tmp or delete their files from /tmp between invocations. The /tmp directory is backed by an EC2 instance store and is encrypted at rest.
Customers that want to persist data to the file system outside of the execution environment should consider using Lambda's integration with Amazon Elastic File System (Amazon EFS). For more information, see Using Amazon EFS with AWS Lambda.
If customers don't want to persist data or state across invocations, Lambda recommends that they do not use the execution context or execution environment to store data or state. If customers want to actively prevent data or state leaking across invocations, Lambda recommends that they create distinct functions for each state. Lambda does not recommend that customers use or store security-sensitive state in the execution environment, as it may be mutated between invocations. We recommend recalculating the state on each invocation instead.
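The caching behavior described above is often used deliberately. The following handler sketch is illustrative only (the bucket and key names are placeholders, and it is not code from the whitepaper); it downloads a model to /tmp on the first invocation handled by an execution environment and reuses it afterwards:

```python
import os
import boto3

s3 = boto3.client("s3")         # created once per execution environment
MODEL_PATH = "/tmp/model.bin"   # /tmp persists for the environment's lifetime

def handler(event, context):
    # Download only if a previous invocation in this environment hasn't already.
    if not os.path.exists(MODEL_PATH):
        s3.download_file("example-model-bucket", "models/model.bin", MODEL_PATH)

    with open(MODEL_PATH, "rb") as f:
        model_bytes = f.read()

    # ... use model_bytes to serve the request ...
    return {"model_size": len(model_bytes)}
```

Per the guidance above, this pattern is appropriate for non-sensitive, recomputable data; security-sensitive state should not be cached this way.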
continue to run deprecated or unsupported runtime versions they can create their own custom AWS Lambda runtime For details on when runtimes are deprecated see the AWS Lambda Runtime support policy Monitoring and Auditing Lambda Functions You can monitor and audit Lambda functions with many AWS services and methods including the following services: Amazon CloudWatch Lambda automatically monitors Lambda functions on your behalf Through Amazon CloudWatch it reports metrics such as the number of requests the execution duration per request and the number of requests resulting in an error These metrics are exposed at the function level which you can then leverage to set CloudWatch alarms ArchivedAmazon Web Services Security Overview of AWS Lambda Page 13 For a list of metrics exposed by Lambda see Working with AWS Lambda function metrics AWS CloudTrail Using AWS CloudTrail you can implement governance compliance operational auditing and risk auditing of your entire AWS account including Lambda CloudTrail enables you to log continuously monitor and retain account activity related to actions across your AWS infrastructure providing a complete event history of actions taken through the AWS Management Console AWS SDKs command line tools and other AWS services Using CloudTrail you can optionally encrypt log files using KMS and also leverage CloudTrail log file integrity validati on for positive assertion AWS X Ray Using AWS X Ray you can analyze and debug production and distributed Lambda based applications which enables you to understand the performance of your application and its u nderlying services so you can eventually identify and troubleshoot the root cause of performance issues and errors X Ray’s end toend view of requests as they travel through your application shows a map of the application’s underlying components so you can analyze applications during development and in production AWS Config With AWS Config you can track configuration changes to the Lambda functions (including deleted functions) runtime environments tags handler name code size memory allocation timeout settings and concurrency settings along with Lambda IAM execution role subnet and security group associations This gives you a holistic view of the Lambda function’s lifecycle and enables you to sur face that data for potential audit and compliance requirements Architecting and Operating Lambda Functions Now that we have discussed the foundations of the Lambda service we move on to architecture and operations For information about standard best pra ctices for serverless applications see the Serverless Application Lens whitepaper which defines and explores the pillars of the AWS Well Architected Framework in a Serverless context • Operational Excellence Pillar – The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures ArchivedAmazon Web Services Security Overview of AWS Lambda Page 14 • Security Pillar – The ability to protect information systems and assets while delivering business value through risk assessment and mitigation strategies • Reliability Pillar – The ability of a system to recover from infrastructure or service disruptions dynamically ac quire computing resources to meet demand and mitigate disruptions such as misconfigurations or transient network issues • Performance Efficiency Pillar – The efficient use of computing resources to meet requirements and the maintenance of that efficiency a s demand changes and technologies evolve The 
Architecting and Operating Lambda Functions

Now that we have discussed the foundations of the Lambda service, we move on to architecture and operations. For information about standard best practices for serverless applications, see the Serverless Application Lens whitepaper, which defines and explores the pillars of the AWS Well-Architected Framework in a serverless context:

• Operational Excellence Pillar – The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.
• Security Pillar – The ability to protect information, systems, and assets while delivering business value through risk assessment and mitigation strategies.
• Reliability Pillar – The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
• Performance Efficiency Pillar – The efficient use of computing resources to meet requirements, and the maintenance of that efficiency as demand changes and technologies evolve.

The Serverless Application Lens whitepaper includes topics such as logging, metrics and alarming, throttling and limits, assigning permissions to Lambda functions, and making sensitive data available to Lambda functions.

Lambda and Compliance

As mentioned in The Shared Responsibility Model section of this document, you are responsible for determining which compliance regime applies to your data. After you have determined your compliance regime needs, you can use the various Lambda features to match those controls. You can contact AWS experts (such as solutions architects, domain experts, technical account managers, and other human resources) for assistance. However, AWS cannot advise customers on whether (or which) compliance regimes are applicable to a particular use case.

As of November 2020, Lambda is in scope for SOC 1, SOC 2, and SOC 3 reports, which are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. In addition, Lambda maintains compliance with PCI DSS and the US Health Insurance Portability and Accountability Act (HIPAA), among other compliance programs. For an up-to-date list of compliance information, see the AWS Services in Scope by Compliance Program page. Because of the sensitive nature of some compliance reports, they cannot be shared publicly. For access to these reports, you can sign in to your AWS console and use AWS Artifact, a no-cost, self-service portal for on-demand access to AWS compliance reports.

Lambda Event Sources

Lambda integrates with more than 140 AWS services via direct integration and the Amazon EventBridge event bus. The commonly used Lambda event sources are:

• Amazon API Gateway
• Amazon CloudWatch Events
• Amazon CloudWatch Logs
• Amazon DynamoDB Streams
• Amazon EventBridge
• Amazon Kinesis Data Streams
• Amazon S3
• Amazon SNS
• Amazon SQS
• AWS Step Functions

With these event sources, you can:

• Use AWS IAM to manage access to the service and resources securely.
• Encrypt your data at rest (see Note 1). All services encrypt data in transit.
• Access the services from your Amazon VPC using VPC endpoints (powered by AWS PrivateLink).
• Use Amazon CloudWatch to collect, report, and alarm on metrics.
• Use AWS CloudTrail to log, continuously monitor, and retain account activity related to actions across your AWS infrastructure, providing a complete event history of actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.

Conclusion

AWS Lambda offers a powerful toolkit for building secure and scalable applications. Many of the best practices for security and compliance in Lambda are the same as in all AWS services, but some are particular to Lambda. This whitepaper describes the benefits of Lambda, its suitability for applications, and the Lambda managed runtime environment. It also includes information about monitoring and auditing, and security and compliance best practices. As you think about your next implementation, consider what you learned about Lambda and how it might improve your next workload solution.

Contributors

Contributors to this document include:

• Mayank Thakkar, Senior Solutions Architect
• Marc Brooker, Senior Principal Engineer
• Osman Surkatty, Senior Security Engineer

Further Reading

For additional information, see:

• Shared Responsibility Model, which explains how AWS thinks about security in general
• Security best practices in IAM, which covers recommendations for the AWS Identity and Access Management (IAM) service
• Serverless Application Lens, which covers the AWS Well-Architected Framework and identifies key elements to help ensure your workloads are architected according to best practices
• Introduction to AWS Security, which provides a broad introduction to thinking about security in AWS
• Amazon Web Services: Risk and Compliance, which provides an overview of compliance in AWS

Document Revisions

Date          Description
March 2019    First publication
January 2021  Republished with significant updates

Notes

1. At the time of publishing, encryption of data at rest was not available for Amazon EventBridge. Continue to monitor the service homepages for updates on these capabilities.
|
General
|
consultant
|
Best Practices
|
Serverless_Architectures_with_AWS_Lambda
|
This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Serverless Architectures with AWS Lambda
Overview and Best Practices
November 2017

© 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction
  What Is Serverless? 1
AWS Lambda—the Basics 2
AWS Lambda—Diving Deeper 4
  Lambda Function Code 5
  Lambda Function Event Sources 9
  Lambda Function Configuration 14
Serverless Best Practices 21
  Serverless Architecture Best Practices 21
  Serverless Development Best Practices 34
Sample Serverless Architectures 42
Conclusion 42
Contributors 43

Abstract

Since its introduction at AWS re:Invent in 2014, AWS Lambda has continued to be one of the fastest growing AWS services. With its arrival, a new application architecture paradigm was created, referred to as serverless. AWS now provides a number of different services that allow you to build full application stacks without the need to manage any servers. Use cases like web or mobile backends, real-time data processing, chatbots and virtual assistants, Internet of Things (IoT) backends, and more can all be fully serverless. For the logic layer of a serverless application, you can execute your business logic using AWS Lambda. Developers and organizations are finding that AWS Lambda is enabling much faster development speed and experimentation than is possible when deploying applications in a traditional server-based environment.

This whitepaper is meant to provide you with a broad overview of AWS Lambda, its features, and a slew of recommendations and best practices for building your own serverless applications on AWS.

Introduction

What Is Serverless?
Serverless most often refers to serverless applications. Serverless applications are ones that don't require you to provision or manage any servers. You can focus on your core product and business logic instead of responsibilities like operating system (OS) access control, OS patching, provisioning, right-sizing, scaling, and availability. By building your application on a serverless platform, the platform manages these responsibilities for you.

For a service or platform to be considered serverless, it should provide the following capabilities:

• No server management – You don't have to provision or maintain any servers. There is no software or runtime to install, maintain, or administer.
• Flexible scaling – You can scale your application automatically, or by adjusting its capacity through toggling the units of consumption (for example, throughput or memory) rather than units of individual servers.
• High availability – Serverless applications have built-in availability and fault tolerance. You don't need to architect for these capabilities because the services running the application provide them by default.
• No idle capacity – You don't have to pay for idle capacity. There is no need to pre-provision or over-provision capacity for things like compute and storage. There is no charge when your code isn't running.

The AWS Cloud provides many different services that can be components of a serverless application. These include capabilities for:

• Compute – AWS Lambda
• APIs – Amazon API Gateway
• Storage – Amazon Simple Storage Service (Amazon S3)
• Databases – Amazon DynamoDB
• Interprocess messaging – Amazon Simple Notification Service (Amazon SNS) and Amazon Simple Queue Service (Amazon SQS)
• Orchestration – AWS Step Functions and Amazon CloudWatch Events
• Analytics – Amazon Kinesis

This whitepaper will focus on AWS Lambda, the compute layer of your serverless application where your code is executed, and the AWS developer tools and services that enable best practices when building and maintaining serverless applications with Lambda.

AWS Lambda—the Basics

Lambda is a high-scale, provision-free serverless compute offering based on functions. It provides the cloud logic layer for your application. Lambda functions can be triggered by a variety of events that occur on AWS or on supporting third-party services. They enable you to build reactive, event-driven systems. When there are multiple, simultaneous events to respond to, Lambda simply runs more copies of the function in parallel. Lambda functions scale precisely with the size of the workload, down to the individual request. Thus, the likelihood of having an idle server or container is extremely low. Architectures that use Lambda functions are designed to reduce wasted capacity.

Lambda can be described as a type of serverless Function-as-a-Service (FaaS). FaaS is one approach to building event-driven computing systems. It relies on functions as the unit of deployment and execution. Serverless FaaS is a type of FaaS where no virtual machines or containers are present in the programming model, and where the vendor provides provision-free scalability and built-in reliability. Figure 1 shows the relationship among event-driven computing, FaaS, and serverless FaaS.
Figure 1: The relationship among event-driven computing, FaaS, and serverless FaaS

With Lambda, you can run code for virtually any type of application or backend service. Lambda runs and scales your code with high availability. Each Lambda function you create contains the code you want to execute, the configuration that defines how your code is executed, and, optionally, one or more event sources that detect events and invoke your function as they occur. These elements are covered in more detail in the next section.

An example event source is API Gateway, which can invoke a Lambda function anytime an API method created with API Gateway receives an HTTPS request. Another example is Amazon SNS, which has the ability to invoke a Lambda function anytime a new message is posted to an SNS topic. Many event source options can trigger your Lambda function; for the full list, see this documentation. Lambda also provides a RESTful service API, which includes the ability to directly invoke a Lambda function. You can use this API to execute your code directly without configuring another event source.

You don't need to write any code to integrate an event source with your Lambda function, manage any of the infrastructure that detects events and delivers them to your function, or manage scaling your Lambda function to match the number of events that are delivered. You can focus on your application logic and configure the event sources that cause your logic to run.

Your Lambda function runs within a (simplified) architecture that looks like the one shown in Figure 2.

Figure 2: Simplified architecture of a running Lambda function

Once you configure an event source for your function, your code is invoked when the event occurs. Your code can execute any business logic, reach out to external web services, integrate with other AWS services, or do anything else your application requires. All of the same capabilities and software design principles that you're used to for your language of choice will apply when using Lambda. Also, because of the inherent decoupling that is enforced in serverless applications through integrating Lambda functions and event sources, it's a natural fit to build microservices using Lambda functions.

With a basic understanding of serverless principles and Lambda, you might be ready to start writing some code. The following resources, along with the brief example shown below, will help you get started with Lambda immediately:

• Hello World tutorial: http://docs.aws.amazon.com/lambda/latest/dg/get-started-create-function.html
• Serverless workshops and walkthroughs for building sample applications: https://github.com/awslabs/aws-serverless-workshops
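As a first taste of what a function looks like, here is a minimal Python handler in the spirit of the Hello World tutorial linked above; it is an illustrative sketch rather than the tutorial's exact code, and the field names in the event are assumptions.

```python
def lambda_handler(event, context):
    # The event carries the input from whatever event source invoked the
    # function; the context exposes runtime details such as the request ID.
    name = event.get("name", "world")
    print(f"Handling request {context.aws_request_id}")
    return {"message": f"Hello, {name}!"}
```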
AWS Lambda—Diving Deeper

The remainder of this whitepaper will help you understand the components and features of Lambda, followed by best practices for various aspects of building and owning serverless applications using Lambda.

Let's begin our deep dive by further expanding and explaining each of the major components of Lambda that we described in the introduction: function code, event sources, and function configuration.

Lambda Function Code

At its core, you use Lambda to execute code. This can be code that you've written in any of the languages supported by Lambda (Java, Node.js, Python, or C# as of this publication), as well as any code or packages you've uploaded alongside the code that you've written. You're free to bring any libraries, artifacts, or compiled native binaries that can execute on top of the runtime environment as part of your function code package. If you want, you can even execute code you've written in another programming language (PHP, Go, SmallTalk, Ruby, etc.), as long as you stage and invoke that code from within one of the supported languages in the AWS Lambda runtime environment (see this tutorial).

The Lambda runtime environment is based on an Amazon Linux AMI (see current environment details here), so you should compile and test the components you plan to run inside of Lambda within a matching environment. To help you perform this type of testing prior to running within Lambda, AWS provides a set of tools called AWS SAM Local to enable local testing of Lambda functions. We discuss these tools in the Serverless Development Best Practices section of this whitepaper.

The Function Code Package

The function code package contains all of the assets you want to have available locally upon execution of your code. A package will, at minimum, include the code function you want the Lambda service to execute when your function is invoked. However, it might also contain other assets that your code will reference upon execution: for example, additional files, classes, and libraries that your code will import, binaries that you would like to execute, or configuration files that your code might reference upon invocation. The maximum size of a function code package is 50 MB compressed and 250 MB extracted at the time of this publication. (For the full list of AWS Lambda limits, see this documentation.)

When you create a Lambda function (through the AWS Management Console or using the CreateFunction API), you can reference the S3 bucket and object key where you've uploaded the package. Alternatively, you can upload the code package directly when you create the function, and Lambda will then store your code package in an S3 bucket managed by the service. The same options are available when you publish updated code to existing Lambda functions (through the UpdateFunctionCode API). As events occur, your code package will be downloaded from the S3 bucket, installed in the Lambda runtime environment, and invoked as needed. This happens on demand, at the scale required by the number of events triggering your function, within an environment managed by Lambda.
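The sketch below shows one way to create a function from a package that has already been uploaded to S3, using the CreateFunction API via boto3. The bucket, key, role ARN, and handler names are hypothetical placeholders for illustration and are not taken from this paper.

```python
import boto3

lambda_client = boto3.client("lambda")

# Create a function whose code package was previously uploaded to S3.
# All names and ARNs below are hypothetical placeholders.
response = lambda_client.create_function(
    FunctionName="image-resizer",
    Runtime="python3.6",
    Role="arn:aws:iam::123456789012:role/image-resizer-execution-role",
    Handler="resizer.lambda_handler",
    Code={
        "S3Bucket": "my-deployment-artifacts",
        "S3Key": "image-resizer/package-v1.zip",
    },
    MemorySize=256,
    Timeout=30,
)
print(response["FunctionArn"])
```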
The Handler

When a Lambda function is invoked, code execution begins at what is called the handler. The handler is a specific code method (Java, C#) or function (Node.js, Python) that you've created and included in your package. You specify the handler when creating a Lambda function. Each language supported by Lambda has its own requirements for how a function handler can be defined and referenced within the package. The following examples will help you get started with each of the supported languages.

Java:
    MyOutput handlerName(MyEvent event, Context context) { ... }

Node.js:
    exports.handlerName = function(event, context, callback) {
        // callback parameter is optional
    }

Python:
    def handler_name(event, context):
        return some_value

C#:
    MyOutput HandlerName(MyEvent event, ILambdaContext context) { ... }

Once the handler is successfully invoked inside your Lambda function, the runtime environment belongs to the code you've written. Your Lambda function is free to execute any logic you see fit, driven by the code you've written that starts in the handler. This means your handler can call other methods and functions within the files and classes you've uploaded. Your code can import third-party libraries that you've uploaded, and install and execute native binaries that you've uploaded (as long as they can run on Amazon Linux). It can also interact with other AWS services, make API requests to web services that it depends on, and so on.

The Event Object

When your Lambda function is invoked in one of the supported languages, one of the parameters provided to your handler function is an event object. The event differs in structure and contents depending on which event source created it. The contents of the event parameter include all of the data and metadata your Lambda function needs to drive its logic. For example, an event created by API Gateway will contain details related to the HTTPS request that was made by the API client (for example, path, query string, request body), whereas an event created by Amazon S3 when a new object is created will include details about the bucket and the new object.

The Context Object

Your Lambda function is also provided with a context object. The context object allows your function code to interact with the Lambda execution environment. The contents and structure of the context object vary based on the language runtime your Lambda function is using, but at minimum it will contain:

• AWS RequestId – Used to track specific invocations of a Lambda function (important for error reporting or when contacting AWS Support).
• Remaining time – The amount of time in milliseconds that remains before your function timeout occurs. (Lambda functions can run a maximum of 300 seconds as of this publishing, but you can configure a shorter timeout.)
• Logging – Each language runtime provides the ability to stream log statements to Amazon CloudWatch Logs. The context object contains information about which CloudWatch Logs stream your log statements will be sent to. For more information about how logging is handled in each language runtime, see the documentation for Java, Node.js, Python, and C#.

Writing Code for AWS Lambda—Statelessness and Reuse

It's important to understand the central tenet when writing code for Lambda: your code cannot make assumptions about state. This is because Lambda fully manages when a new function container will be created and invoked for the first time. A container could be getting invoked for the first time for a number of reasons. For example, the events triggering your Lambda function are increasing in concurrency beyond the number of containers previously created for your function, or an event is triggering your Lambda function for the first time in several minutes. While Lambda is responsible for scaling your function containers up and down to meet actual demand, your code needs to be able to operate accordingly. Although Lambda won't interrupt the processing of a specific invocation that's already in flight, your code doesn't need to account for that level of volatility.

This means that your code cannot make any assumptions that state will be preserved from one invocation to the next. However, each time a function container is created and invoked, it remains active and available for subsequent invocations for at least a few minutes before it is terminated. When subsequent invocations occur on a container that has already been active and invoked at least once before, we say that invocation is running on a warm container. When an invocation occurs for a Lambda function that requires your function code package to be created and invoked for the first time, we say the invocation is experiencing a cold start.
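One practical consequence of warm containers is that anything initialized outside the handler can be reused by later invocations on the same container. The following sketch assumes a Python runtime and a hypothetical DynamoDB table name; it creates the SDK client once per container rather than once per invocation, while keeping request-specific data local to each invocation.

```python
import boto3

# Created once per container (cold start) and reused by warm invocations.
# "orders" is a hypothetical table name used only for illustration.
dynamodb = boto3.resource("dynamodb")
orders_table = dynamodb.Table("orders")

def lambda_handler(event, context):
    # Request-specific data stays local to the invocation.
    order_id = event["order_id"]
    item = orders_table.get_item(Key={"order_id": order_id}).get("Item")
    return {"found": item is not None, "order": item}
```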
Figure 3: Invocations of warm function containers and cold function containers

Depending on the logic your code is executing, understanding how your code can take advantage of a warm container can result in faster code execution inside of Lambda. This in turn results in quicker responses and lower cost. For more details and examples of how to improve your Lambda function performance by taking advantage of warm containers, see the Best Practices section later in this whitepaper. Overall, each language that Lambda supports has its own model for packaging source code and possibilities for optimizing it. Visit this page to get started with each of the supported languages.

Lambda Function Event Sources

Now that you know what goes into the code of a Lambda function, let's look at the event sources, or triggers, that invoke your code. While Lambda provides the Invoke API that enables you to directly invoke your function, you will likely only use it for testing and operational purposes. Instead, you can associate your Lambda function with event sources occurring within AWS services that will invoke your function as needed. You don't have to write, scale, or maintain any of the software that integrates the event source with your Lambda function.

Invocation Patterns

There are two models for invoking a Lambda function:

• Push Model – Your Lambda function is invoked every time a particular event occurs within another AWS service (for example, a new object is added to an S3 bucket).
• Pull Model – Lambda polls a data source and invokes your function with any new records that arrive at the data source, batching new records together in a single function invocation (for example, new records in an Amazon Kinesis or Amazon DynamoDB stream).

Also, a Lambda function can be executed synchronously or asynchronously. You choose this using the parameter InvocationType that's provided when invoking a Lambda function, as shown in the sketch after this list. This parameter has three possible values:

• RequestResponse – Execute synchronously.
• Event – Execute asynchronously.
• DryRun – Test that the invocation is permitted for the caller, but don't execute the function.
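Here is a minimal sketch of that parameter in use with boto3, invoking a hypothetical function first synchronously and then asynchronously; the function name and payload are placeholders.

```python
import json
import boto3

lambda_client = boto3.client("lambda")
payload = json.dumps({"order_id": "12345"}).encode("utf-8")

# Synchronous call: the response includes the function's return value.
sync_response = lambda_client.invoke(
    FunctionName="order-processor",
    InvocationType="RequestResponse",
    Payload=payload,
)
print(json.loads(sync_response["Payload"].read()))

# Asynchronous call: Lambda queues the event and returns immediately with a
# 202 status code; no function output is returned to the caller.
async_response = lambda_client.invoke(
    FunctionName="order-processor",
    InvocationType="Event",
    Payload=payload,
)
print(async_response["StatusCode"])
```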
Each event source dictates how your function can be invoked. The event source is also responsible for crafting its own event parameter, as we discussed earlier. The following tables provide details about how some of the more popular event sources can integrate with your Lambda functions (a brief handler sketch for the pull model follows the tables). You can find the full list of supported event sources here.

Push Model Event Sources

Amazon S3
Invocation Model: Push
Invocation Type: Event
Description: S3 event notifications (such as ObjectCreated and ObjectRemoved) can be configured to invoke a Lambda function as they are published.
Example Use Cases: Create image modifications (thumbnails, different resolutions, watermarks, etc.) for images that users upload to an S3 bucket through your application. Process raw data uploaded to an S3 bucket and move transformed data to another S3 bucket as part of a big data pipeline.

Amazon API Gateway
Invocation Model: Push
Invocation Type: Event or RequestResponse
Description: The API methods you create with API Gateway can use a Lambda function as their service backend. If you choose Lambda as the integration type for an API method, your Lambda function is invoked synchronously (the response of your Lambda function serves as the API response). With this integration type, API Gateway can also act as a simple proxy to a Lambda function: API Gateway will perform no processing or transformation on its own and will pass along all the contents of the request to Lambda. If you want an API to invoke your function asynchronously as an event and return immediately with an empty response, you can use API Gateway as an AWS Service Proxy and integrate with the Lambda Invoke API, providing the Event InvocationType in the request header. This is a great option if your API clients don't need any information back from the request and you want the fastest response time possible. (This option is great for pushing user interactions on a website or app to a service backend for analysis.)
Example Use Cases: Web service backends (web application, mobile app, microservice architectures, etc.). Legacy service integration (a Lambda function to transform a legacy SOAP backend into a new, modern REST API). Any other use cases where HTTPS is the appropriate integration mechanism between application components.

Amazon SNS
Invocation Model: Push
Invocation Type: Event
Description: Messages that are published to an SNS topic can be delivered as events to a Lambda function.
Example Use Cases: Automated responses to CloudWatch alarms. Processing of events from other services (AWS or otherwise) that can natively publish to SNS topics.

AWS CloudFormation
Invocation Model: Push
Invocation Type: RequestResponse
Description: As part of deploying AWS CloudFormation stacks, you can specify a Lambda function as a custom resource to execute any custom commands and provide data back to the ongoing stack creation.
Example Use Cases: Extend AWS CloudFormation capabilities to include AWS service features not yet natively supported by AWS CloudFormation. Perform custom validation or reporting at key stages of the stack creation/update/delete process.
Amazon CloudWatch Events
Invocation Model: Push
Invocation Type: Event
Description: Many AWS services publish resource state changes to CloudWatch Events. Those events can then be filtered and routed to a Lambda function for automated responses.
Example Use Cases: Event-driven operations automation (for example, take action each time a new EC2 instance is launched, or notify an appropriate mailing list when AWS Trusted Advisor reports a new status change). Replacement for tasks previously accomplished with cron (CloudWatch Events supports scheduled events).

Amazon Alexa
Invocation Model: Push
Invocation Type: RequestResponse
Description: You can write Lambda functions that act as the service backend for Amazon Alexa Skills. When an Alexa user interacts with your skill, Alexa's natural language understanding and processing capabilities will deliver their interactions to your Lambda functions.
Example Use Cases: An Alexa skill of your own.

Pull Model Event Sources

Amazon DynamoDB
Invocation Model: Pull
Invocation Type: Request/Response
Description: Lambda will poll a DynamoDB stream multiple times per second and invoke your Lambda function with the batch of updates that have been published to the stream since the last batch. You can configure the batch size of each invocation.
Example Use Cases: Application-centric workflows that should be triggered as changes occur in a DynamoDB table (for example, a new user registered, an order was placed, a friend request was accepted, etc.). Replication of a DynamoDB table to another region (for disaster recovery) or to another service (shipping as logs to an S3 bucket for backup or analysis).

Amazon Kinesis Streams
Invocation Model: Pull
Invocation Type: Request/Response
Description: Lambda will poll a Kinesis stream once per second for each stream shard and invoke your Lambda function with the next records in the shard. You can define the batch size for the number of records delivered to your function at a time, as well as the number of Lambda function containers executing concurrently (number of stream shards = number of concurrent function containers).
Example Use Cases: Real-time data processing for big data pipelines. Real-time alerting/monitoring of streaming log statements or other application events.
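To make the pull model concrete, the following sketch shows a handler shape for a Kinesis-triggered function, assuming a Python runtime. The record fields follow the Kinesis event structure, and the per-record processing logic is a placeholder.

```python
import base64
import json

def lambda_handler(event, context):
    # For the pull model, Lambda delivers a batch of stream records in one
    # invocation; each Kinesis record's data payload is base64 encoded.
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        message = json.loads(payload)
        # Placeholder processing step for each decoded record.
        print(f"Processing record {record['kinesis']['sequenceNumber']}: {message}")
    return {"batch_size": len(event["Records"])}
```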
Lambda Function Configuration

After you write and package your Lambda function code, on top of choosing which event sources will trigger your function, you have various configuration options to set that define how your code is executed within Lambda.

Function Memory

To define the resources allocated to your executing Lambda function, you're provided with a single dial to increase or decrease function resources: memory (RAM). You can allocate from 128 MB of RAM up to 1.5 GB of RAM to your Lambda function. Not only will this dictate the amount of memory available to your function code during execution, but the same dial will also influence the CPU and network resources available to your function. Selecting the appropriate memory allocation is a very important step when optimizing the price and performance of any Lambda function. Please review the best practices later in this whitepaper for more specifics on optimizing performance.

Versions and Aliases

There are times where you might need to reference or revert your Lambda function back to code that was previously deployed. Lambda lets you version your AWS Lambda functions. Every Lambda function has a default version built in: $LATEST. You can address the most recent code that has been uploaded to your Lambda function through the $LATEST version. You can take a snapshot of the code that's currently referred to by $LATEST and create a numbered version through the PublishVersion API. Also, when updating your function code through the UpdateFunctionCode API, there is an optional Boolean parameter, publish. By setting publish: true in your request, Lambda will create a new Lambda function version, incremented from the last published version.

You can invoke each version of your Lambda function independently, at any time. Each version has its own Amazon Resource Name (ARN), referenced like this:

arn:aws:lambda:[region]:[account]:function:[fn_name]:[version]

When calling the Invoke API or creating an event source for your Lambda function, you can also specify a specific version of the Lambda function to be executed. If you don't provide a version number, or you use an ARN that doesn't contain the version number, $LATEST is invoked by default.

It's important to know that a Lambda function container is specific to a particular version of your function. So, for example, if there are already several function containers deployed and available in the Lambda runtime environment for version 5 of the function, version 6 of the same function will not be able to execute on top of the existing version 5 containers. A different set of containers will be installed and managed for each function version.

Invoking your Lambda functions by their version numbers can be useful during testing and operational activities. However, we don't recommend having your Lambda function be triggered by a specific version number for real application traffic. Doing so would require you to update all of the triggers and clients invoking your Lambda function to point at a new function version each time you wanted to update your code. Lambda aliases should be used here instead. A function alias allows you to invoke and point event sources to a specific Lambda function version, but you can update what version that alias refers to at any time. For example, your event sources and clients that are invoking version number 5 through the alias live can cut over to version number 6 of your function as soon as you update the live alias to instead point at version number 6. Each alias can be referred to within the ARN, similar to when referring to a function version number:

arn:aws:lambda:[region]:[account]:function:[fn_name]:[alias]

Note: An alias is simply a pointer to a specific version number. This means that if you have multiple different aliases pointed to the same Lambda function version at once, requests to each alias are executed on top of the same set of installed function containers. This is important to understand so that you don't mistakenly point multiple aliases at the same function version number if requests for each alias are intended to be processed separately.
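A minimal sketch of this publish-and-promote flow using boto3 follows; the function name and alias are placeholders, and the version number comes back from the API rather than being hard-coded.

```python
import boto3

lambda_client = boto3.client("lambda")

# Snapshot the code currently at $LATEST as a new numbered version.
published = lambda_client.publish_version(
    FunctionName="order-processor",
    Description="Release candidate",
)
new_version = published["Version"]

# Point the "live" alias at the newly published version; clients and event
# sources that reference the alias cut over without any changes on their side.
lambda_client.update_alias(
    FunctionName="order-processor",
    Name="live",
    FunctionVersion=new_version,
)
```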
Here are some example suggestions for Lambda aliases and how you might use them:

• live/prod/active – This could represent the Lambda function version that your production triggers or clients are integrating with.
• blue/green – Enable the blue/green deployment pattern through the use of aliases.
• debug – If you've created a testing stack to debug your applications, it can integrate with an alias like this when you need to perform a deeper analysis.

Creating a good, documented strategy for your use of function aliases enables you to have sophisticated serverless deployment and operations practices.

IAM Role

AWS Identity and Access Management (IAM) provides the capability to create IAM policies that define permissions for interacting with AWS services and APIs. Policies can be associated with IAM roles. Any access key ID and secret access key generated for a particular role is authorized to perform the actions defined in the policies attached to that role. For more information about IAM best practices, see this documentation.

In the context of Lambda, you assign an IAM role (called an execution role) to each of your Lambda functions. The IAM policies attached to that role define what AWS service APIs your function code is authorized to interact with. There are two benefits:

• Your source code isn't required to perform any AWS credential management or rotation to interact with the AWS APIs. Simply using the AWS SDKs and the default credential provider results in your Lambda function automatically using temporary credentials associated with the execution role assigned to the function.
• Your source code is decoupled from its own security posture. If a developer attempts to change your Lambda function code to integrate with a service that the function doesn't have access to, that integration will fail due to the IAM role assigned to your function. (Unless they have used IAM credentials that are separate from the execution role; you should use static code analysis tools to ensure that no AWS credentials are present in your source code.)

It's important to assign each of your Lambda functions a specific, separate, and least-privilege IAM role. This strategy ensures that each Lambda function can evolve independently without increasing the authorization scope of any other Lambda functions.

Lambda Function Permissions

You can define which push model event sources are allowed to invoke a Lambda function through a concept called permissions. With permissions, you declare a function policy that lists the Amazon Resource Names (ARNs) that are allowed to invoke a function. For pull model event sources (for example, Kinesis streams and DynamoDB streams), you need to ensure that the appropriate actions are permitted by the IAM execution role assigned to your Lambda function. AWS provides a set of managed IAM policies associated with each of the pull-based event sources if you don't want to manage the permissions required. However, to ensure least-privilege IAM policies, you should create your own IAM roles with resource-specific policies to permit access to just the intended event source.
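As an illustration of a function policy for a push event source, the sketch below uses the AddPermission API to allow a specific S3 bucket to invoke a function; the function name, bucket ARN, and account ID are hypothetical placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Allow Amazon S3 (and only the named bucket, owned by the named account)
# to invoke the function through its resource-based policy.
lambda_client.add_permission(
    FunctionName="image-resizer",
    StatementId="allow-s3-uploads-bucket",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::my-upload-bucket",
    SourceAccount="123456789012",
)
```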
Network Configuration

Executing your Lambda function occurs through the use of the Invoke API that is part of the AWS Lambda service APIs, so there is no direct inbound network access to your function to manage. However, your function code might need to integrate with external dependencies (internal or publicly hosted web services, AWS services, databases, etc.). A Lambda function has two broad options for outbound network connectivity:

• Default – Your Lambda function communicates from inside a virtual private cloud (VPC) that is managed by Lambda. It can connect to the internet, but not to any privately deployed resources running within your own VPCs.
• VPC – Your Lambda function communicates through an Elastic Network Interface (ENI) that is provisioned within the VPC and subnets you choose within your own account. These ENIs can be assigned security groups, and traffic will route based on the route tables of the subnets those ENIs are placed within, just the same as if an EC2 instance were placed in the same subnet.

If your Lambda function doesn't require connectivity to any privately deployed resources, we recommend you select the default networking option. Choosing the VPC option will require you to manage:

• Selecting appropriate subnets to ensure multiple Availability Zones are being used for the purposes of high availability.
• Allocating the appropriate number of IP addresses to each subnet to manage capacity.
• Implementing a VPC network design that will permit your Lambda functions to have the connectivity and security required.
• An increase in Lambda cold start times if your Lambda function invocation patterns require a new ENI to be created just in time (ENI creation can take many seconds today).

However, if your use case requires private connectivity, use the VPC option with Lambda. For deeper guidance if you plan to deploy your Lambda functions within your own VPC, see this documentation.

Environment Variables

Software Development Life Cycle (SDLC) best practice dictates that developers separate their code and their config. You can achieve this by using environment variables with Lambda. Environment variables for Lambda functions enable you to dynamically pass data to your function code and libraries without making changes to your code. Environment variables are key-value pairs that you create and modify as part of your function configuration. By default, these variables are encrypted at rest. For any sensitive information that will be stored as a Lambda function environment variable, we recommend you encrypt those values using the AWS Key Management Service (AWS KMS) prior to function creation, storing the encrypted ciphertext as the variable value. Then have your Lambda function decrypt that variable in memory at execution time.

Here are some examples of how you might decide to use environment variables:

• Log settings (FATAL, ERROR, INFO, DEBUG, etc.)
• Dependency and/or database connection strings and credentials
• Feature flags and toggles

Each version of your Lambda function can have its own environment variable values. However, once the values are established for a numbered Lambda function version, they cannot be changed. To make changes to your Lambda function environment variables, you can change them in the $LATEST version and then publish a new version that contains the new environment variable values. This enables you to always keep track of which environment variable values are associated with a previous version of your function. This is often important during a rollback procedure or when triaging the past state of an application.
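The sketch below shows the read-and-decrypt pattern described above, assuming a Python runtime and a hypothetical environment variable named DB_PASSWORD_CIPHERTEXT that holds a base64-encoded, KMS-encrypted value; LOG_LEVEL is likewise an assumed variable name.

```python
import base64
import os

import boto3

kms = boto3.client("kms")

# Decrypt once per container; the plaintext lives only in memory.
_ciphertext = base64.b64decode(os.environ["DB_PASSWORD_CIPHERTEXT"])
DB_PASSWORD = kms.decrypt(CiphertextBlob=_ciphertext)["Plaintext"].decode("utf-8")

LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

def lambda_handler(event, context):
    # Use LOG_LEVEL and DB_PASSWORD here; never log or return the secret.
    return {"log_level": LOG_LEVEL}
```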
Dead Letter Queues

Even in the serverless world, exceptions can still occur. (For example, perhaps you've uploaded new function code that doesn't allow the Lambda event to be parsed successfully, or there is an operational event within AWS that is preventing the function from being invoked.) For asynchronous event sources (the Event InvocationType), AWS owns the client software that is responsible for invoking your function. AWS does not have the ability to synchronously notify you whether the invocations are successful as they occur. If an exception occurs when trying to invoke your function in these models, the invocation will be attempted two more times (with back-off between the retries). After the third attempt, the event is either discarded or placed onto a dead letter queue, if you configured one for the function.

A dead letter queue is either an SNS topic or an SQS queue that you have designated as the destination for all failed invocation events. If a failure event occurs, the use of a dead letter queue allows you to retain just the messages that failed to be processed during the event. Once your function is able to be invoked again, you can target those failed events in the dead letter queue for reprocessing. The mechanism for reprocessing or retrying the function invocation attempts placed onto your dead letter queue is up to you. For more information about dead letter queues, see this tutorial. You should use dead letter queues if it's important to your application that all invocations of your Lambda function complete eventually, even if execution is delayed.

Timeout

You can designate the maximum amount of time a single function execution is allowed to complete before a timeout is returned. The maximum timeout for a Lambda function is 300 seconds at the time of this publication, which means a single invocation of a Lambda function cannot execute longer than 300 seconds. You should not always set the timeout for a Lambda function to the maximum. There are many cases where an application should fail fast. Because your Lambda function is billed based on execution time in 100 ms increments, avoiding lengthy timeouts for functions can prevent you from being billed while a function is simply waiting to time out (perhaps an external dependency is unavailable, you've accidentally programmed an infinite loop, or another similar scenario).

Also, once execution completes or a timeout occurs for your Lambda function and a response is returned, all execution ceases. This includes any background processes, subprocesses, or asynchronous processes that your Lambda function might have spawned during execution. So you should not rely on background or asynchronous processes for critical activities. Your code should ensure those activities are completed prior to timeout or returning a response from your function.
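One common defensive pattern related to timeouts is to check the remaining execution time exposed by the context object and stop taking on new work before the deadline. The sketch below assumes a Python runtime; the work items, threshold, and do_work helper are illustrative placeholders.

```python
def lambda_handler(event, context):
    items = event.get("items", [])
    processed = []
    for item in items:
        # Stop early if fewer than 10 seconds remain, leaving time to
        # return a response instead of being cut off by the timeout.
        if context.get_remaining_time_in_millis() < 10_000:
            break
        processed.append(do_work(item))
    return {"processed": len(processed), "remaining": len(items) - len(processed)}

def do_work(item):
    # Placeholder for the real per-item processing logic.
    return item
```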
Serverless Best Practices

Now that we've covered the components of a Lambda-based serverless application, let's cover some recommended best practices. There are many SDLC and server-based architecture best practices that are also true for serverless architectures: eliminate single points of failure, test changes prior to deployment, encrypt sensitive data, etc. However, achieving best practices for serverless architectures can be a different task because of how different the operating model is. You don't have access to, or concerns about, an operating system or any lower-level components in the infrastructure. Because of this, your focus is solely on your own application code and architecture, the development processes you follow, and the features of the AWS services your application leverages that enable you to follow best practices.

First, we review a set of best practices for designing your serverless architecture according to the AWS Well-Architected Framework. Then we cover some best practices and recommendations for your development process when building serverless applications.

Serverless Architecture Best Practices

The AWS Well-Architected Framework includes strategies to help you compare your workload against our best practices and obtain guidance to produce stable and efficient systems, so you can focus on functional requirements. It is based on five pillars: security, reliability, performance efficiency, cost optimization, and operational excellence. Many of the guidelines in the framework apply to serverless applications. However, there are specific implementation steps or patterns that are unique to serverless architectures. In the following sections, we cover a set of recommendations that are serverless-specific for each of the Well-Architected pillars.

Security Best Practices

Designing and implementing security into your applications should always be priority number one; this doesn't change with a serverless architecture. The major difference for securing a serverless application compared to a server-hosted application is obvious: there is no server for you to secure. However, you still need to think about your application's security, and there is still a shared responsibility model for serverless security. With Lambda and serverless architectures, rather than implementing application security through things like anti-virus/malware software, file integrity monitoring, intrusion detection/prevention systems, and firewalls, you ensure security best practices through writing secure application code, tight access control over source code changes, and following AWS security best practices for each of the services that your Lambda functions integrate with.

The following is a brief list of serverless security best practices that should apply to many serverless use cases, although your own specific security and compliance requirements should be well understood and might include more than we describe here.

• One IAM Role per Function

Each and every Lambda function within your AWS account should have a 1:1 relationship with an IAM role. Even if multiple functions begin with exactly the same policy, always decouple your IAM roles so that you can ensure least-privilege policies for the future of your function. For example, if you shared the IAM role of a Lambda function that needed access to an AWS KMS key across multiple Lambda functions, then all of those functions would now have access to the same encryption key.
• Temporary AWS Credentials

You should not have any long-lived AWS credentials included within your Lambda function code or configuration. (This is a great use for static code analysis tools to ensure it never occurs in your code base!) For most cases, the IAM execution role is all that's required to integrate with other AWS services. Simply create AWS service clients within your code through the AWS SDK without providing any credentials. The SDK automatically manages the retrieval and rotation of the temporary credentials generated for your role. The following is an example using Java:

    // The default client uses the temporary credentials of the function's execution role.
    AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
    Table myTable = new Table(client, "MyTable");

This code snippet is all that's required for the AWS SDK for Java to create an object for interacting with a DynamoDB table that automatically signs its requests to the DynamoDB APIs using the temporary IAM credentials assigned to your function.

However, there might be cases where the execution role is not sufficient for the type of access your function requires. This can be the case for some cross-account integrations your Lambda function might perform, or if you have user-specific access control policies through combining Amazon Cognito identity roles and DynamoDB fine-grained access control. For cross-account use cases, your execution role should be granted access to the AssumeRole API within the AWS Security Token Service and integrated to retrieve temporary access credentials. For user-specific access control policies, your function should be provided with the user identity in question and then integrate with the Amazon Cognito API GetCredentialsForIdentity. In this case, it's imperative that you ensure your code appropriately manages these credentials so that you are leveraging the correct credentials for each user associated with that invocation of your Lambda function. It's common for an application to encrypt and store these per-user credentials in a place like DynamoDB or Amazon ElastiCache as part of user session data, so that they can be retrieved with reduced latency and more scalability than regenerating them for subsequent requests from a returning user.
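For the cross-account case mentioned above, a minimal Python sketch of assuming a role with the AWS Security Token Service follows; the role ARN, session name, and downstream table name are hypothetical placeholders rather than values from this paper.

```python
import boto3

sts = boto3.client("sts")

def lambda_handler(event, context):
    # Exchange the execution role's credentials for temporary credentials
    # in another account (the role ARN below is a hypothetical example).
    assumed = sts.assume_role(
        RoleArn="arn:aws:iam::210987654321:role/partner-data-access",
        RoleSessionName="cross-account-lambda",
    )
    creds = assumed["Credentials"]

    # Use the temporary credentials for the cross-account client.
    dynamodb = boto3.client(
        "dynamodb",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    return dynamodb.describe_table(TableName="partner-orders")["Table"]["TableStatus"]
```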
decoupled from your Lambda functions to provide maximum flexibility for how secrets and functions relate to each other Disadvantag es – A request to Parameter Store is required to retrieve a parameter/secret While not substantial this does add latency over environment variables as well as an additional service dependency and requires writing slightly more code • Using Secrets Secret s should always only exist in memory and never be logged or written to disk Write code that manages the rotation of secrets in the event a secret needs to be revoked while your application remains running • API Authorization Using API Gateway as the event source for your Lambda function is unique from the other AWS service event source options in that you have ownership of authentication and authorization of your API clients API Gateway can perform much of the heavy lifting by providing things like native AWS SigV4 authentication 47 generated client SDKs 48 and custom authorizers 49 However you’re still responsible for ensuring that the security posture of your APIs meets the bar you’ve set For more information about API s ecurity best practices see this documentation 50 • VPC Security If your Lambda function requires access to resources deployed inside a VPC you should apply network security best practices through use of least privilege s ecurity groups Lambda function specific subnets network ACLs and route tables that allow traffic coming only from your Lambda functions to reach intended destinations This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 25 Keep in mind that these practices and policies impact the way that your Lambda functions connect to their dependencies Invoking a Lambda function still occurs through event sources an d the Invoke API (neither are affected by your VPC configuration) • Deployment Access Control A call to the UpdateFunctionCode API is analogous to a code deployment Moving an alias through the UpdateAlias API to that newly published version is analogous to a code release Treat access to the Lambda APIs that enable function code/aliases with extreme sensitivity As such you should eliminate direct user access to these APIs for any functions (production functions at a minimum) to remove the possibility of human error Making code changes to a Lambda function should be achieved through automation With that in mind the entry point for a deployment to Lambda become s the place where your continuous integration/continuous delivery ( CI/CD ) pipeline is initiated This may be a release branch in a repository an S3 bucket where a new code package is uploaded that triggers an AWS CodePipeline pipeline or somewhere else that’s specific to your organization and processes51 Wherever it is it becomes a new place where you should enforce stringent access control mechanisms that fit your team structure and roles Reliability Best Practices Serverless applications can be built to support mission critical use case s Just as with any mission critical application it’s important that you architect with the mindset that Werner Vogels CTO Amazoncom advocates for “E verything fails all the time” For serverless applications this could mean introducing logic bugs into your code failing application dependencies and other similar application level issues that you should try and prevent and account for using existing best practices that will still apply 
Keep in mind that these practices and policies impact the way that your Lambda functions connect to their dependencies. Invoking a Lambda function still occurs through event sources and the Invoke API (neither are affected by your VPC configuration).
• Deployment Access Control – A call to the UpdateFunctionCode API is analogous to a code deployment. Moving an alias through the UpdateAlias API to that newly published version is analogous to a code release. Treat access to the Lambda APIs that enable function code/alias changes with extreme sensitivity. As such, you should eliminate direct user access to these APIs for any functions (production functions at a minimum) to remove the possibility of human error. Making code changes to a Lambda function should be achieved through automation. With that in mind, the entry point for a deployment to Lambda becomes the place where your continuous integration/continuous delivery (CI/CD) pipeline is initiated. This may be a release branch in a repository, an S3 bucket where a new code package is uploaded that triggers an AWS CodePipeline pipeline, or somewhere else that's specific to your organization and processes.51 Wherever it is, it becomes a new place where you should enforce stringent access control mechanisms that fit your team structure and roles.

Reliability Best Practices
Serverless applications can be built to support mission-critical use cases. Just as with any mission-critical application, it's important that you architect with the mindset that Werner Vogels, CTO of Amazon.com, advocates: "Everything fails all the time." For serverless applications this could mean introducing logic bugs into your code, failing application dependencies, and other similar application-level issues that you should try to prevent and account for using existing best practices that will still apply to your serverless applications. For infrastructure-level service events, where you are abstracted away from the event, you should understand how you have architected your application to achieve high availability and fault tolerance.

High Availability
High availability is important for production applications. The availability posture of your Lambda function depends on the number of Availability Zones it can be executed in. If your function uses the default network environment, it is automatically available to execute within all of the Availability Zones in that AWS Region. Nothing else is required to configure high availability for your function in the default network environment. If your function is deployed within your own VPC, the subnets (and their respective Availability Zones) define whether your function remains available in the event of an Availability Zone outage. Therefore, it's important that your VPC design includes subnets in multiple Availability Zones. In the event that an Availability Zone outage occurs, it's important that your remaining subnets continue to have adequate IP addresses to support the number of concurrent functions required. For information on how to calculate the number of IP addresses your functions require, see this documentation.52

Fault Tolerance
If the application availability you need requires you to take advantage of multiple AWS Regions, you must take this into account up front in your design. It's not a complex exercise to replicate your Lambda function code packages to multiple AWS Regions. What can be complex, like most multi-region application designs, is coordinating a failover decision across all tiers of your application stack. This means you need to understand and orchestrate the shift to another AWS Region, not just for your Lambda functions but also for your event sources (and dependencies further upstream of your event sources) and persistence layers. In the end, a multi-region architecture is very application specific. The most important thing to do to make a multi-region design feasible is to account for it in your design up front.

Recovery
Consider how your serverless application should behave in the event that your functions cannot be executed. For use cases where API Gateway is used as the event source, this can be as simple as gracefully handling error messages and providing a viable, if degraded, user experience until your functions can be successfully executed again. For asynchronous use cases, it can be very important to still ensure that no function invocations are lost during the outage period. To ensure that all received events are processed after your function has recovered, you should take advantage of dead letter queues and implement how to process events placed on that queue after recovery occurs.
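As one possible way to apply the dead letter queue guidance, the sketch below (not part of the original whitepaper) attaches an SQS queue as a function's DLQ and shows a simple drain loop that replays events after recovery. The function name, queue URL, and ARN are hypothetical placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")
sqs = boto3.client("sqs")

# Attach a (hypothetical) SQS queue as the asynchronous function's dead letter queue.
lambda_client.update_function_configuration(
    FunctionName="my-async-function",
    DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:123456789012:my-function-dlq"},
)

def drain_dlq(queue_url, function_name):
    """After recovery, replay events that landed on the DLQ during the outage."""
    while True:
        messages = sqs.receive_message(QueueUrl=queue_url,
                                       MaxNumberOfMessages=10,
                                       WaitTimeSeconds=2).get("Messages", [])
        if not messages:
            break
        for msg in messages:
            # For asynchronous invocation failures, the SQS message body carries the original event.
            lambda_client.invoke(FunctionName=function_name,
                                 InvocationType="Event",
                                 Payload=msg["Body"].encode("utf-8"))
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```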
Performance Efficiency Best Practices
Before we dive into performance best practices, keep in mind that if your use case can be achieved asynchronously, you might not need to be concerned with the performance of your function (other than to optimize costs). You can leverage one of the event sources that will use the Event InvocationType, or use the pull-based invocation model. Those methods alone might allow your application logic to proceed while Lambda continues to process the event separately. If Lambda function execution time is something you want to optimize, the execution duration of your Lambda function will be primarily impacted by three things (in order of simplest to optimize): the resources you allocate in the function configuration, the language runtime you choose, and the code you write.

Choosing the Optimal Memory Size
Lambda provides a single dial to turn up and down the amount of compute resources available to your function: the amount of RAM allocated to your function. The amount of allocated RAM also impacts the amount of CPU time and network bandwidth your function receives. Simply choosing the smallest resource amount that runs your function adequately fast is an anti-pattern. Because Lambda is billed in 100 ms increments, this strategy might not only add latency to your application, it might even be more expensive overall if the added latency outweighs the resource cost savings. We recommend that you test your Lambda function at each of the available resource levels to determine what the optimal level of price/performance is for your application. You'll discover that the performance of your function should improve logarithmically as resource levels are increased. The logic you're executing will define the lower bound for function execution time. There will also be a resource threshold where any additional RAM/CPU/bandwidth available to your function no longer provides any substantial performance gain. However, pricing increases linearly as the resource levels increase in Lambda. Your tests should find where the logarithmic function bends to choose the optimal configuration for your function.

The following graph shows how the ideal memory allocation to an example function can allow for both better cost and lower latency. Here, the additional compute cost per 100 ms for using 512 MB over the lower memory options is outweighed by the amount of latency reduced in the function by allocating more resources. But after 512 MB the performance gains are diminished for this particular function's logic, so the additional cost per 100 ms now drives the total cost higher. This leaves 512 MB as the optimal choice for minimizing total cost.

Figure 4: Choosing the optimal Lambda function memory size

The memory usage for your function is determined per invocation and can be viewed in CloudWatch Logs.53 On each invocation, a REPORT: entry is made, as shown below:

REPORT RequestId: 3604209a-e9a3-11e6-939a-754dd98c7be3 Duration: 12.34 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 18 MB

By analyzing the Max Memory Used: field, you can determine if your function needs more memory or if you over-provisioned your function's memory size.
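The REPORT entry lends itself to simple automation. The following illustrative Python sketch (assuming log lines are available as plain strings, for example exported from CloudWatch Logs) extracts the billed duration and peak memory so you can spot over-provisioned functions; the threshold used is arbitrary.

```python
import re

# Matches the REPORT entries Lambda writes to CloudWatch Logs after each invocation.
REPORT_PATTERN = re.compile(
    r"REPORT RequestId: (?P<request_id>\S+)\s+"
    r"Duration: (?P<duration_ms>[\d.]+) ms\s+"
    r"Billed Duration: (?P<billed_ms>\d+) ms\s+"
    r"Memory Size: (?P<memory_mb>\d+) MB\s+"
    r"Max Memory Used: (?P<used_mb>\d+) MB"
)

def summarize(report_lines):
    """Return per-invocation stats plus a hint when memory looks over-provisioned."""
    results = []
    for line in report_lines:
        match = REPORT_PATTERN.search(line)
        if not match:
            continue
        memory_mb = int(match.group("memory_mb"))
        used_mb = int(match.group("used_mb"))
        results.append({
            "request_id": match.group("request_id"),
            "billed_ms": int(match.group("billed_ms")),
            "headroom_mb": memory_mb - used_mb,
            "over_provisioned": used_mb < memory_mb * 0.5,  # illustrative threshold only
        })
    return results

sample = ("REPORT RequestId: 3604209a-e9a3-11e6-939a-754dd98c7be3 Duration: 12.34 ms "
          "Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 18 MB")
print(summarize([sample]))
```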
Language Runtime Performance
Choice of language runtime is obviously dependent on your level of comfort and skills with each of the supported runtimes. But if performance is the driving consideration for your application, the performance characteristics of each language on Lambda are what you might expect in any other runtime environment: the compiled languages (Java and .NET) incur the largest initial startup cost for a container's first invocation, but show the best performance for subsequent invocations. The interpreted languages (Node.js and Python) have very fast initial invocation times compared to the compiled languages, but can't reach the same level of maximum performance as the compiled languages do. If your application use case is both very latency sensitive and susceptible to incurring the initial invocation cost frequently (very spiky traffic or very infrequent use), we recommend one of the interpreted languages. If your application does not experience large peaks or valleys within its traffic patterns, or does not have user experiences blocked on Lambda function response times, we recommend you choose the language you're already most comfortable with.

Optimizing Your Code
Much of the performance of your Lambda function is dictated by what logic you need your Lambda function to execute and what its dependencies are. We won't cover what all those optimizations could be, because they vary from application to application, but there are some general best practices to optimize your code for Lambda. These are related to taking advantage of container reuse (as described in the previous overview) and minimizing the initial cost of a cold start.

Here are a few examples of how you can improve the performance of your function code when a warm container is invoked (a sketch follows the cold start list below):
• After initial execution, store and reference locally any externalized configuration or dependencies that your code retrieves.
• Limit the re-initialization of variables/objects on every invocation (use global/static variables, singletons, etc.).
• Keep alive and reuse connections (HTTP, database, etc.) that were established during a previous invocation.

Finally, you should do the following to limit the amount of time that a cold start takes for your Lambda function:
1. Always use the default network environment unless connectivity to a resource within a VPC via private IP is required. This is because there are additional cold start scenarios related to the VPC configuration of a Lambda function (related to creating ENIs within your VPC).
2. Choose an interpreted language over a compiled language.
3. Trim your function code package to only its runtime necessities. This reduces the amount of time that it takes for your code package to be downloaded from Amazon S3 ahead of invocation.
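To illustrate the warm-container guidance above, here is a minimal Python sketch, not from the original whitepaper, that initializes clients and a large reference object once at module load so warm invocations can reuse them. The table name, bucket, and object key are hypothetical placeholders.

```python
import json
import os
import boto3

# Created once per container and reused on warm invocations.
s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "example-table"))   # hypothetical table

_lookup = None   # large reference object, loaded once per container

def get_lookup():
    """One-time initialization of a large object that persists across invocations."""
    global _lookup
    if _lookup is None:
        # hypothetical bucket/key holding reference data
        obj = s3.get_object(Bucket=os.environ["REFERENCE_BUCKET"], Key="reference.json")
        _lookup = json.loads(obj["Body"].read())
    return _lookup

def handler(event, context):
    lookup = get_lookup()                 # the cold start pays the load; warm calls reuse it
    enriched = dict(event, category=lookup.get(event.get("type"), "unknown"))
    table.put_item(Item=enriched)         # client and its connection reused across invocations
    return {"status": "stored"}
```

The design choice here is simply scope: anything placed at module level survives for the life of the container, so subsequent invocations skip the S3 fetch and client construction entirely.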
Understanding Your Application Performance
To get visibility into the various components of your application architecture, which could include one or more Lambda functions, we recommend that you use AWS X-Ray.54 X-Ray lets you trace the full lifecycle of an application request through each of its component parts, showing the latency and other metrics of each component separately, as shown in the following figure.

Figure 5: A service map visualized by AWS X-Ray

To learn more about X-Ray, see this documentation.55

Operational Excellence Best Practices
Creating a serverless application removes many operational burdens that a traditional application brings with it. This doesn't mean you should reduce your focus on operational excellence. It means that you can narrow your operational focus to a smaller number of responsibilities and, hopefully, achieve a higher level of operational excellence.

Logging
Each language runtime for Lambda provides a mechanism for your function to deliver logged statements to CloudWatch Logs. Making adequate use of logs goes without saying and isn't new to Lambda and serverless architectures. Even though it's not considered best practice today, many operational teams depend on viewing logs as they are generated on top of the server an application is deployed on. That simply isn't possible with Lambda, because there is no server. You also don't have the ability to "step through" the code of a live, running Lambda function today (although you can do this with AWS SAM Local prior to deployment).56 For deployed functions, you depend heavily on the logs you create to inform an investigation of function behavior. Therefore, it's especially important that the logs you do create find the right balance of verbosity to help track and triage issues as they occur, without demanding too much additional compute time to create them. We recommend that you make use of Lambda environment variables to create a LogLevel variable that your function can refer to so that it can determine which log statements to create during runtime. Appropriate use of log levels can ensure that you have the ability to selectively incur the additional compute cost and storage cost only during an operational triage.
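A minimal Python sketch of the LogLevel approach described above follows. The LogLevel environment variable name comes from the whitepaper's recommendation; the handler logic and the process function are illustrative only.

```python
import logging
import os

# LogLevel is set as a Lambda environment variable (e.g. INFO in production, DEBUG during triage).
logger = logging.getLogger()
logger.setLevel(os.environ.get("LogLevel", "INFO").upper())

def handler(event, context):
    logger.debug("Full event payload: %s", event)      # emitted only when LogLevel=DEBUG
    logger.info("Processing record %s", event.get("id"))
    try:
        result = process(event)                         # hypothetical business-logic function
    except Exception:
        logger.exception("Processing failed for record %s", event.get("id"))
        raise
    logger.info("Finished record %s", event.get("id"))
    return result

def process(event):
    # placeholder for the real business logic
    return {"id": event.get("id"), "status": "processed"}
```

Raising the level to DEBUG through the function configuration, rather than a code change, lets you turn on verbose logging only for the duration of a triage and then turn it off again.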
Metrics and Monitoring
Lambda, just like other AWS services, provides a number of CloudWatch metrics out of the box. These include metrics related to the number of invocations a function has received, the execution duration of a function, and others. It's best practice to create alarm thresholds (high and low) for each of your Lambda functions on all of the provided metrics through CloudWatch. A major change in how your function is invoked, or how long it takes to execute, could be your first indication of a problem in your architecture. For any additional metrics that your application needs to gather (for example, application error codes, dependency-specific latency, etc.), you have two options to get those custom metrics stored in CloudWatch or your monitoring solution of choice:
• Create a custom metric and integrate directly with the required API from your Lambda function as it's executing. This has the fewest dependencies and will record the metric as fast as possible. However, it does require you to spend Lambda execution time and resources integrating with another service dependency. If you follow this path, ensure that your code for capturing metrics is modularized and reusable across your Lambda functions, instead of tightly coupled to a specific Lambda function.
• Capture the metric within your Lambda function code and log it using the provided logging mechanisms in Lambda. Then create a CloudWatch Logs metric filter on the function streams to extract the metric and make it available in CloudWatch. Alternatively, create another Lambda function as a subscription filter on the CloudWatch Logs stream to push filtered log statements to another metrics solution. This path introduces more complexity and is not as near real-time as the previous solution for capturing metrics. However, it allows your function to more quickly create metrics through logging rather than making an external service request.

Deployment
Performing a deployment in Lambda is as simple as uploading a new function code package, publishing a new version, and updating your aliases. However, these steps should only be pieces of your deployment process with Lambda. Each deployment process is application specific. To design a deployment process that avoids negatively disrupting your users or application behavior, you need to understand the relationship between each Lambda function and its event sources and dependencies. Things to consider are:
• Parallel version invocations – Updating an alias to point to a new version of a Lambda function happens asynchronously on the service side. There will be a short period of time during which existing function containers containing the previous source code package continue to be invoked alongside the new function version the alias has been updated to. It's important that your application continues to operate as expected during this process. An artifact of this might be that any stack dependencies being decommissioned after a deployment (for example, database tables, a message queue, etc.) not be decommissioned until after you've observed all invocations targeting the new function version.
• Deployment schedule – Performing a Lambda function deployment during a peak traffic time could result in more cold start times than desired. You should always perform your function deployments during a low traffic period to minimize the immediate impact of the new/cold function containers being provisioned in the Lambda environment.
• Rollback – Lambda provides details about Lambda function versions (for example, created time, incrementing numbers, etc.). However, it doesn't logically track how your application lifecycle has been using those versions. If you need to roll back your Lambda function code, it's important for your processes to roll back to the function version that was previously deployed.

Load Testing
Load test your Lambda function to determine an optimum timeout value. It's important to analyze how long your function runs so that you can better determine any problems with a dependency service that might increase the concurrency of the function beyond what you expect. This is especially important when your Lambda function makes network calls to resources that may not handle Lambda's scaling.

Triage and Debugging
Both logging to enable investigations and using X-Ray to profile applications are useful for operational triage. Additionally, consider creating Lambda function aliases that represent operational activities such as integration testing, performance testing, debugging, etc. It's common for teams to build out test suites or segmented application stacks that serve an operational purpose. You should build these operational artifacts to also integrate with Lambda functions via aliases. However, keep in mind that aliases don't enforce a wholly separate Lambda function container. So an alias like PerfTest that points at function version number N will use the same function containers as all other aliases pointing at version N. You should define appropriate versioning and alias updating
processes to ensure separate containers are invoked where required Cost Optimization Best Practices Because Lambda charges are based on function execution time and the resources allocated optimizing your costs is focused on optimizing those two dimensions Right Sizing As covered in Performance Efficiency it’s an anti pattern to assume that the smallest resource size available to your function will provide the lowest total cost If your function’s resource size is too small you could pay more due to a longer execution time than if more resources were avai lable that allowed your function to complete more quickly See the section Choosing the Optimal Memory Size for more details This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 34 Distributed and Asynchronous Architectures You don’t need to implement all use cases through a series of blocking/synchronous API requests and responses If you are able to design your application to be asynchronous you might find that each decoupled component of your architecture takes less compute time to conduct its work than tightly c oupled components that spend CPU cycles awaiting responses to synchronous requests Many of the Lambda event sources fit well with distributed systems and can be used to integrate your modular and decoupled functions in a more cost effective manner Batch Size Some Lambda event sources allow you to define the batch size for the number of records that are delivered on each function invocation ( for example Kinesis and DynamoDB) You should test to find the optimal number of records for each batch size so tha t the polling frequency of each event source is tuned to how quickly your function can complete its task Event Source Selection The variety of e vent sources available to integrate with Lambda means that you often have a variety of solution options availab le to meet your requirements Depending on your use case and requirements (request scale volume of data latency required etc) there might be a non trivial difference in the total cost of your architecture based on which AWS services you choose as the components that surround your Lambda function Serverless Development Best Practices Creating applications with Lambda can enable a development pace that you have n’t experienced before The amount of code you need to write for a working and robust serverle ss application will likely be a small percentage of the code you would need to write for a server based model But with a new application delivery model that serverless architectures enable there are new dimensions and constructs that your development pro cesses must make decisions about Things like organizing your code base with Lambda functions in mind moving code changes from a developer laptop into a production serverless environment and ensuring code quality through testing even though you can’t simulate the Lambda runtime environment or your event sources outside of AWS The following are some development centric best practices to help you work through these aspects of owning a serverless application This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 35 Infrastructure as Code – the AWS Serverless Application Model (AWS SAM) Representing your infrastructure as code 
brings many benefits in terms of the auditability automatability and repeatability of managing the creation and modification of infrastructure Even though you don’t need to manage any infrastructure when building a serverless application many components play a role in the architecture : IAM roles Lambda functions and their configurations their event sources and other dependencies Representing all of these things in AWS CloudFormation natively would require a large amount of JSON or YAML Much of it would be almost identical from one serverless application to the next The AWS Serverless Application Model ( AWS SAM) enables you to have a simple r experience when building serverless applications and get the benefits of infrastructure as code AWS SAM is an open specification abstraction layer on top of AWS CloudFormation 57 It provides a set of command line utilities that enable you to define a full serverless application stack with only a handful of lines of JSON or YAML package your Lambda function code together with that infrastructure definition and then deploy them together to AWS We recommend u sing AWS SAM combined with AWS CloudFormation to define and make changes to your serverless application environment There is a distinction however between changes that occur at the infrastructure/environment level and application code changes occurring within existing Lambda functions AWS CloudFormation and AWS SAM aren’t the only tools required to build a deployment pipeline for your Lambda function code changes See the CI/CD section of this whitepaper for more recommendations about managing code changes for your Lambda functions Local Testing – AWS SAM Local Along with AWS SAM AWS SAM Local offers additional command line tools that you can add to AWS SAM to test your serverless functions and applications locally before deploy ing them to AWS58 AWS SAM Local uses Docker to enable you to quickly test yo ur developed Lambda functions using popular event sources ( for example Amazon S3 DynamoDB etc) You can locally test an API you define in your SAM template before it is created in API Gateway You can also validate the AWS SAM template that you created By enabling these capabilities to run against Lambda functions still residing within your developer workstation you can do things like view logs locally step through your code in a This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 36 debugger and quickly iterate changes without having to deploy a new co de package to AWS Coding and Code Management Best Practices When developing code for Lambda functions there are some specific recommendations around how you should both write and organize code so that managing many Lambda functions does n’t become a complex task Coding Best Practices Depending on the Lambda runtime language you build with continue to follow the best practices already established for that language While the environment that surrounds how your code is invoked has changed wi th Lambda the language runtime environment is the same as anywhere else C oding standards and best practices still apply The following recommendations are specific to writing code for Lambda outside of those general best practices for your language of c hoice Business Logic outside the Handler Your Lambda function starts execution at the handler function you define within your code package Within your handler 
function you should receive the parameters provide d by Lambda pass those parameters to another function to parse into new variables/objects that are contextualized to your application and then reach out to your business logic that sits outside the handler function and file This enables you to create a code package that is as decoupled from the Lambda runtime environment as possible This will greatly benefit your ability to test your code within the context of objects and functions you’ve created and reuse the business logic you’ve written in other environments outs ide of Lambda The following example (written in Java ) shows poor practices where the core business logic of an application is tightly coupled to Lambda In this example the business logic is created within the handler method and depend s directly on Lamb da event source objects This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 37 Warm Container s—Caching/Keep Alive/Reuse As mentioned earlier you should write code that take s advantage of a warm function container This means scoping your variables in a way that they and their contents can be reused on subsequent invocation s where possible This is especially impactful for things like bootstrapping configuration keeping exter nal dependency connections open or one time initialization of large objects that can persist from one invocation to the next Control Dependencies The Lambda execution environment contains many libraries such as the AWS SDK for the Nodejs and Python runt imes (For a full list see the Lambda Execution Environment and Available Libraries 59) To enable the latest set of features and security updates Lambda periodically update s these libraries These updates can introduce subtle changes to the behavior of your Lambda function To have full control of the dependencies your function uses we recommend packaging all your dependencies with your deployment package Trim Dep endencies Lambda function code package s are permitted to be at most 50 MB when compressed and 250 MB when extracted in the runtime environment If you are including large dependency artifacts with your function code you may need to This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 38 trim the dependencies included to just the runtime essentials This also allow s your Lambda function code to be downloaded and installed in the runtime environment more quickly for cold starts Fail Fast Configure reasonably short timeouts for any external dependencies as well as a reasonably short overall Lambda function timeout Don’t allow your function to spin helplessly while waiting for a dependency to respond Because Lambda is billed based on the duration of your function execution you don’t want to incur higher charges than necessary when your function dependencies are unresponsive Handling Exceptions You might decide to throw and handle exceptions differently depending on your use case for Lambda If you ’re placing an API Gateway API in front of a Lambda function yo u may decide to throw an exception back to API Gateway where it might be transformed based on its contents into the appropriate HTTP status code and message for the error that occurred If you ’re building an asynchronous data processing 
system you might decide that some exceptions within your code base should equate to the invocation moving to the dead letter queue for reprocessing while other errors can just be logged and not placed on the dead letter queue You should evaluate what your decide failure behaviors are and ensure that you are creating and throwing the correct types of exceptions within your code to achieve that behavior To learn more about handling exceptions see the following for details about how exceptions are defined for each languag e runtime environment: • Java60 • Nodejs61 • Python62 • C#63 Code Management Best Practices Now that the code you’ve written for your Lambda functions follows best practices how should you manage that code? With the development speed that Lambda enables you might be able to complete code changes at a pace that is unfamili ar for your typical pro cesses And the reduced amount of code that serverless architectures require means that your Lambda function code This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 39 represents a large portion of what makes your entire application stack function So having good source code management of your Lambda function code will help ensure secure efficient and smooth change management processes Code Repository Organization We recommend that you organize your Lambda function source code to be very fine grained within your source code management solution of choice This usually means having a 1:1 relationship between Lambda functions and code repositories or repository projects (The lexicon differ s from one source code management tool to another ) However if you are following a strategy of creating separate Lambda f unctions for different lifecycle stages of the same logical function ( that is you have two Lambda functions one called MyLambdaFunction DEV and another called MyLambdaFunction PROD) it make s sense to have those separate Lambda functions share a code bas e (perhaps deploying from separate release branches) The main purpose of organizing your code this way is to help ensure that all of the code that contribute s to the code package of a particular Lambda function is independently versioned and committed to and define s its own dependencies and those dependencies’ versions Each Lambda function should be fully decoupled from a source code perspective from other Lambda functions just as it will be when it’s deployed You don’t want to go through the process of modernizing an application architecture to be modular and decoupled with Lambda only to be left with a monolithic and tightly coupled code base Release Branches We recommend that you create a repository or project branch ing strategy that enables you to correlate Lambda function deployments with incremental commits on a release branch If you don’t have a way to confidently correlate source code changes within your repository and the changes that have been deployed to a live Lambda function an operational investigation will always begin with trying to identify which version of your code base is the one currently deployed You should build a CI/CD pipeline (more recommendations for this later ) that allows you to correlate L ambda code package creation and deployment times with the code ch anges that have occurred with your release branch for that Lambda function This paper has been archived For the latest technical content 
refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 40 Testing Time spent developing thorough testing of your code is the best way to ensure quality within a serverless architecture However serverless architectures will enforce proper unit testing practices perhaps more than you ’re used to Many developers use u nit test tools and frameworks to write tests that cause their code to also test its dependencies This is a si ngle test that combines a unit test and an integration test but that doesn’t perform either very well It’s important to scope all of your u nit test cases down to a single code path within a single logical function mocking all inputs from upstream and ou tputs from downstream This allows you to isolate your test cases to only the code that you own When writing unit tests you can and should assume that your dependencies behave properly based on the contracts your code has with them as APIs libraries etc It’s similarly important for your integration tests to test the integration of your code to its dependencies in an environment that mimics the live environment Testing whether a developer laptop or build server can integrate with a downstream dependency is n’t fully testing if your code will integrate successfully once in the live environment This is especially true of the Lambda environment where you code does n’t have ownership of the events that are going to be delivered by event sources and you do n’t have the ability to create the Lambda runtime environment outside of Lambda Unit Tests With what we’ve said earlier in mind we recommend that you u nit test your Lambda function code thoroughly focusing mostly on the business logic outside your handler function You should also unit test your ability to parse sample/mock objects for the event sources of your function However the bulk of your logic and tests should occur with mocked objects and functions that you have full control over within your code base If you feel that there are important things inside your h andler function that need to be unit tested it can be a sign you should encapsulate and externalize the logic in your handler function further Also to supplement the unit tests you’ve written you should create local test automation using AWS SAM Local that can serve as local end toend testing of your function code (note that this isn’t a replacement for unit testing) This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 41 Integration Testing For integration tests we recommend that you create lower lifecycle versions of your Lambda functions where your code packages are deployed and invoked through sample events that your CI/CD pipeline can trigger and inspect the results of (Implementation depends on your application and architecture ) Continuous Delivery We recommend that you programmatically manage all of your serverless deployments through CI/CD pipelines This is because the speed with which you will be able to develop new features and push code changes with Lambda will allow you to deploy much more frequently Manual deployments combined with a need to deploy more frequently often result in both the manual process becoming a bottleneck and prone to error The capabilities provided by AWS CodeCommit AWS CodePipeline AWS CodeBu ild AWS SAM and AWS CodeStar provide a 
set of capabilities that you can natively combine into a holistic and automated serverless CI/CD pipeline (where the pipeline itself also has no infras tructure that you need to manage) Here is how each of these services play s a role in a well define d continuous delivery strategy AWS CodeCommit – Provides hosted private Git repositories that will enable you to host your serverless source code create a branching strategy that meets our recommendations (including f inegrained access control) and integrate with AWS CodePipeline to trigger a new pipeline execution when a new commit occurs in your release branch AWS CodePipeline – Defines the steps in your pipeline Typically a n AWS CodePipeline pipeline begins where your source code changes arrive Then you execute a build phase execute tests against your new build and perform a deployment and release of your build into the live environment AWS CodePipeline provides native integration options for each of these phases with other AWS services AWS CodeBuild – Can be used for the build state of your pipeline U se it to build your code execute unit tests and create a new Lambda code package Then integrate with AWS SAM to push your code package to Amazon S3 and push the new package to Lambda via AWS CloudFormation This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 42 After your new version is published to your Lambda f unction through AWS CodeBuild you can automate your subsequent steps in your AWS CodePipeline pipeline by creating deployment centric Lambda functions They will own the logic for performing integration tests updating function aliases determining if immediate rollbacks are necessary and any other application centric steps needed to occur during a deployment for your application (like cache f lushes notification messages etc) Each one of these deployment centric Lambda functions can be invoked in sequence as a step within your AWS CodePipeline pipeline using the Invoke action For details on using Lambda within AWS CodePipeline see this documentation 64 In the end each application and organization has its own requirements for moving source code from repository to production The more automation you can introduce into this process the more agility you can achieve using Lambda AWS CodeStar – A unified user interface for creating a serverless application (and other types of applications) that helps you follow these best practices from the beginning When you create a new project in AWS CodeStar you automatically begin with a fully implemented and integrated continuous delivery toolchain (using AWS CodeCommit AWS CodePipeline and AWS CodeBuild services mentioned earlier ) You will also have a place where you can manage all aspects of the SDLC for your project including team member management issue tracking development deployment and operations For more information about AWS CodeStar go here 65 Sample Serverless Architectures There are a number of sample serverless architectures and instructions for recreating them in your own AWS account You can find them on GitHub 66 Conclusion Building serverless applications on AWS relieves you of the responsibilities and constraints that servers introduce Using AWS Lambda as your serverless logic layer enables you to build faster and focus your development efforts on what differentiates your application Alongside Lambda AWS provides additional 
serverless capabilities so that you can build robust performant event driven reliable secure and cost effective applica tions Understanding the capabilities and recomm endations described in this w hitepaper can help ensure your This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 43 success when building serverless applications of your own To learn more on related topics see Serverless Computing and Applications 67 Contributors The fo llowing individuals and organizations contributed to this document: • Andrew Baird Sr Solutions Architect AWS • George Huang Sr Product Marketing Manager AWS • Chris Munns Sr Developer Advocate AWS • Orr Weinstein Sr Product Manager AWS 1 https://awsamazoncom/lambda/ 2 https://awsamazoncom/api gateway/ 3 https://awsamazoncom/s3/ 4 https://awsamazoncom/dynamodb/ 5 https://awsamazoncom/sns/ 6 https://awsamazoncom/sqs/ 7 https://awsamazoncom/step functions/ 8 https://docsawsa mazoncom/AmazonCloudWatch/latest/events/WhatIsCloud WatchEventshtml 9 https://awsamazoncom/kinesis/ 10 http://docsawsamazoncom/lambda/latest/dg/invoking lambda functionhtml 11 http://docsawsamazoncom/lambda/latest/dg/API_Invokehtml 12 http://docsawsamazoncom/lambda/latest/dg/get started create functionhtml 13 https://githubcom/awslabs/aws serverless workshops Notes This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 44 14 https://awsamazoncom/blogs/compute/scripting languages foraws lambda running phpruby and go/ 15 http://docsawsamazoncom/lambda/latest/dg/current supported versionshtml 16 https://githubcom/awslabs/aws sam local 17 http://docsawsamazoncom/lambda/latest/dg/limitshtml 18 http://docsawsamazoncom/lambda/latest/dg/API_CreateFunctionhtml 19 http://docsawsamazoncom/lambda/latest/dg/API_UpdateFunctionCodeht ml 20 http://docsawsamazoncom/lambda/latest/dg/java programming modelhtml 21 http://docsawsamazoncom/lambda/latest/dg/programming modelhtml 22 http://docsawsamazoncom/lambda/latest/dg/python programming modelhtml 23 http://docsawsamazoncom/lambda/latest/dg/dotnet programming modelhtml 24 http://docsawsamazoncom/lambda/latest/dg/java logginghtml 25 http://docsawsamazoncom/lambda/latest/dg/nodejs prog model logginghtml 26 http://docsawsamazoncom/lambda/latest/dg/python logginghtml 27 http://docsawsamazoncom/lambda/latest/dg/dotnet logginghtml 28 http://docsawsamazoncom/lambda/latest/dg/programming model v2html 29 http://docsawsamazoncom/lambda/latest/dg/API_Invokehtml 30 http://docsawsamazoncom/lambda/latest/dg/invoking lambda functionhtml 31 http://docsawsamazoncom/lambda/latest/dg/API_PublishVersionhtml 32 http://docsawsamazoncom/lambda/latest/dg/API_UpdateFunctionCodeht ml 33 http://docsawsamazoncom/lambda/latest/dg/API_Invokehtml This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 45 34 http://docsawsamazoncom/IAM/latest/UserGuide/access_policieshtml 35 http://docsawsamazoncom/IAM/latest/UserGuide/best practiceshtml 36 http://docsawsamazoncom/lambda/latest/dg/vpchtml 37 https://awsamazoncom/blogs/compute/robust serverless application design with awslambda dlq/ 38 
http://d0awsstaticcom/whitepapers/architecture/AWS_Well Architected_Frameworkpdf 39 https://awsamazoncom/sdk forjava/ 40 https://awsamazoncom/cognito/ 41 http://docsawsamazoncom/amazondynamodb/latest/developerguide/speci fying conditionshtml 42 http://docsawsamazoncom/cognitoidentity/latest/APIReference/API_GetC redentialsForIdentityhtml 44 https://awsamazoncom/elasticache/ 45 http://docsawsamazoncom/lambda/latest/dg/env_variableshtml#env_enc rypt 46 http://docsawsamazoncom/systems manager/latest/userguide/systems manager paramstorehtml 47 http://docsawsamazoncom/general/latest/gr/signature version 4html 48 http://docsawsamazoncom/apigateway/latest /developerguide/how to generate sdkhtml 49 http://docsawsamazoncom/apigateway/latest/developerguide/use custom authorizerhtml 50 http://docsawsamazoncom/apigateway/latest/developerguide/apigateway control access toapihtml 51 https://aw samazoncom/codepipeline/ This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 46 52 http://docsawsamazoncom/lambda/latest/dg/vpchtml#vpc setup guidelines 53 http://docsawsamazoncom/AmazonCloudWatch/latest/monitoring/WhatIs CloudWatchLogshtml 54 http://docsawsamazoncom/lambda/latest/dg/lambdax rayhtml 55 http://docsawsamazoncom/lambda/latest/dg/lambdax rayhtml 56 https://githubcom/awslabs/serverless application model 57 https://githubcom/awslabs/serverless application model 58 https://githubcom/awslabs/aws sam local 59 http://docsawsamazoncom/lambda/latest/dg/current supported versionshtml 60 http://docsawsamazoncom/lambda/latest/dg/java exceptionshtml 61 http://docsawsamazoncom/lambda/latest/dg/nodejs progmode exceptionshtml 62 http://docsawsamazoncom/lambda/latest/dg/python exceptionshtml 63 http://docsawsamazoncom/lambda/latest/dg/dotnet exceptionshtml 64 http://docsawsamazoncom/codepipeline/latest/userguide/actions invoke lambda functionhtml 65 https://awsamazoncom/codestar/ 66 https://githubcom/awslabs/aws serverless workshops 67 https://awsamazoncom/serverless/
|
General
|
consultant
|
Best Practices
|
Serverless_Streaming_Architectures_and_Best_Practices
|
ArchivedServerless Streaming Architectures and Best Practices June 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/Archived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product of ferings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 What is serverless computing and why use it? 1 What is streaming data? 1 Who Should Read this Document 2 Stream Processing Application Scenarios 2 Serverless St ream Processing 3 Three Patterns We’ll Cover 4 Cost Considerations of Server Based vs Serverless Architectures 4 Example Use case 6 Sensor Data Collection 6 Best Practices 8 Cost Estimates 8 Streaming Ingest Transform Load (ITL) 9 Best Practices 10 Cost Estimates 11 Real Time Analytics 12 Best Practices 15 Cost Estimates 16 Customer Case Studies 17 Conclusion 18 Contributors 19 Furth er Reading 18 Document Revisions 19 Appendix A – Detailed Cost Estimates 19 Common Cost Assum ptions 19 Appendix A1 – Sensor Data Collection 20 Appendix A2 – Streaming Ingest Transform Load (ITL) 23 Appendix A3 – Real Time Analytics 26 Archived Appendix B – Deploying and Testing Patterns 28 Common Ta sks 28 Appendix B1 – Sensor Data Collection 29 Appendix B2 – Streaming Ingest Transform Load (ITL) 32 Appendix B3 – Real Time Analytics 36 Archived Execu tive Summary Serverless computing allows you to build and run applications and services without thinking about servers This means you can focus on writing business logic instead of managing or provisioning infrastruct ure AWS Lambda our serverless compu te offering allows you to write code in discrete units called functions which are triggered to run by events Lambda will automatically run and scale your code in response to these events such as modifications to Amazon S3 buckets table updates in Amaz on DynamoDB or HTTP requests from custom applications AWS Lambda is also pay peruse which means you pay only for when your code is running Using a serverless approach allows you to build applications faster at a lower cost and with less on going man agement AWS Lambda and serverless architectures are wellsuited for stream processing workloads which are often event driven and have spiky or variable compute requirements Stream processing architectures are increasingly deployed to process high volume events and generate insights in near real time In this whitepaper we will explore three st ream processing patterns using a serverless approach For each pattern we’ll describe how it applies to a real world use case the best practices and consideration s for implementation and cost estimates Each pattern also includes a template which enables you to easily and quickly deploy these patterns in your AWS accounts ArchivedAmazon 
Web Services – Serverless Streaming Architectures and Best Practices Page 1 Introduction What is serverless computing and why use it? Serverless computing allows y ou to build and run applications and services without thinking about servers Serverless applications don't require you to provision scale and manage any servers You can build them for nearly any type of application or backend service and everything re quired to run and scale your application with high availability is handled for you Building serverless applications means that your developers can focus on their core product instead of worrying about managing and operating servers or runtimes either in the cloud or onpremises This reduced overhead lets developers reclaim time and energy that can be spent on developing great products which scale and that are reliable Serverless applications have three main benefits: No server management Flexible scali ng Automated high availability In this paper we will focus on serverless stream processing applications built with our serverless compute service AWS Lambda AWS Lambda lets you run code without provisioning or managing servers You pay only for the compu te time you consume there is no charge when your code is not running With Lambda you can run code for virtually any type of application or backend service all with zero administration Just upload your code and Lambda takes care of everything require d to run and scale your code with high availability You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app What is streaming data? Streaming Data is data that is generated continuously by thousands of data sources which typically send in the data records simultaneously and in small sizes (order of kilobytes) Streaming data includes a wide variety of data such as log files generated by mobile or web applications e commerce purchases ingame player activity information from social networks financial trading floors or geospatial services and telemetry from connected devices or instrumentation in data centers Streaming data can be processed in real time or near real time providing act ionable insights that respond to changing conditions and customer behavior quicker than ever before This is in contrast to the traditional database model where data is stored then processed or analyzed at a later time sometimes leading to insights deriv ed from data that is out of date ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 2 Who Should Read this Document This document is targeted at Architects and Engineers seeking for a deeper understanding of serverless patterns for stream processing and best practices and considerations We assume a workin g knowledge of stream processing For an intr oduction to Stream Processing please see to the Whitepaper: Streaming Data Solutions on AWS with Amazon Kinesis Stream Processing Applicati on Scenarios Streaming data processing is beneficial in most scenarios where new dynamic data is generated on a continual basis It applies to most big data use cases and can be found across diverse industry verticals as shown in Table 1 In this Whitepaper we ’ll focus on the Internet of Things (IoT) industry vertical to provide examples of how to apply stream processing architectures to real world challenges Scenarios/ Verticals Accelerated Ingest Transform Load Continuous M etrics Generation Responsive Data Analysis IoT Sensor device telemetry data ingestion Operational metrics 
and dashboards Device operational intelligence and alerts Digital Ad Tech Marketing Publisher bidder data aggregation Advertising metrics like coverage yield and conversion User engagement with ads optimized bid/buy engines Gaming Online data aggregation eg top 10 players Massively multiplayer online game (MMOG) live dashboard Leader board generation player skill match Consumer Online Clickstream analytics Metrics like impressions and page views Recommendation engines proactive care Table 1 Streaming Data Scenarios Across Verticals There are several characteristics of a stream processing or real time analytics wo rkload: It must be reliable enough to handle critical updates such as replicating the changelog of a database to a replica store like a search index delivering this data in order and without loss It must support throughput high enough to handle large vol ume log or event data streams It must be able to buffer or persist data for long periods of time to support integration with batch systems that may only perform their loads and processing periodically It must provide data with latency low enough for real time applications ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 3 It must be possible to operate it as a central system that can scale to carry the full load of the organization and operate with hundreds of applications built by disparate teams all plugged into the same central nervous system It has to support close integration with stream processing systems Serverless Stream Processing Traditionally stream processing architectures have used frameworks like Apache Kafka to ingest and store the data and a technology like Apache Spark or Storm to pr ocess the data in near real time These software components are deployed to clusters of servers along with supporting infrastructure to manage the clusters such as Apache ZooKeeper Today companies taking advantage of the public cloud no longer need to pu rchase and maintain their own hardware However any server based architecture still requires them to architect for scalability and reliability and to own the challenges of patching and deploying to those server fleets as their applications evolve Moreove r they must scale their server fleets to account for peak load and then attempt to scale them down when and where possible to lower costs —all while protecting the experience of end users and the integrity of internal systems Serverless compute offerings like AWS Lambda are designed to address these challenges by offering companies a different way of approaching application design – an approach with inherently lower costs and faster time to market that eliminates the complexity of dealing with servers at a ll levels of the technology stack Eliminating infrastructure and moving to a per payrequest model offers dual economic advantages: Problems like cold servers and underutilized storage simply cease to exist along with their cost consequences —it’s simply impossible for a serverless compute system like AWS Lambda to be cold because charges only accrue when useful work is being performed with millisecond level billing granularity The elimination of fleet management including the security patching deplo yments and monitoring of servers disappears along with the challenge of maintaining the associated tools processes and on call rotations required to support 24x7 server fleet uptime Without the burden of server management companies can direct their s carce IT resources to what matters —their business 
With greatly reduced infrastructure costs more agile and focused teams and faster time to market companies that have already adopted serverless approaches are gaining a key advantage over their competi tors ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 4 Three Patterns We’ll Cover In this whitepaper we will consider three serverless stream processing patterns: Sensor Data Collection with Simple Transformation – in this pattern IoT sensor devices are transmitting measurements into a ingest service As data is ingested simple transformations can be performed to make the data suitable for downstre am processing Example use case s: medical sensor devices generate patient data streams that m ust be de identified to mask Protected Health Information (PHI ) and Personally Identifiable Information ( PII) to meet HIPAA compliance Stream Ingest Transform Load (ITL) – this pattern extends the prior pattern to add field level enrichment from relatively small and static data sets Example use case (s): add data f ields to medical device sensor data such as location information or device details looked up from a database This is also a common pattern used for log data enrichment and transformation Real time Analytics – this pa ttern builds upon the prior pattern s and adds the computation of windowed aggregations and anomaly detection Example use case (s): tracking user activity performing log analytics fraud detection recommendation engines and maintenance alerts in near realtime In the sections that follow we will provide an example use case of each pattern We will discuss the implementation choices and provide an estimate of the costs Each sample pattern described in the paper is also available in Github ( please see Appen dix B ) so you can quickly and easily deploy them into your AWS account Cost Considerations of Server Based vs Serverless Architectures When comparing the cost of a serverless solution against server based approaches you must consider several indirect cost elements that are i n addition to the server infrastructure costs These indirect costs include additional patching monitoring and other responsibilities of maintaining server based applications that can require additional resources to manage A number of these cost consid erations are listed in Table 2 ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 5 Cost Consideration Server based architectures Serverless architectures Patching All servers in the environment must be regularly patched; this includes the Operating System (OS) as well as the su ite of applications needed for the workload to function As there are no servers to manage in a serverless approach these patching tasks are largely absent You are only responsible for updating your function code when using AWS Lambda Security stack Servers will often include a security stack including products for malware protection log monitoring host based firewalls and IDS that must be configured and managed Equivalent firewall and IDS controls are largely taken care of by the AWS service and ser vice specific security logs such as CloudTrail are provided for auditing purposes without requiring setup and configuration of agents and log collection mechanisms Monitoring Server based monitoring may surface lower level metrics that must be monitored correlated and translated to higher service level metrics For example in a stream ingestion pipeline individual server metrics like CPU utilization network 
utilization, disk I/O, and disk space utilization must all be monitored and correlated to understand the performance of the pipeline. In the serverless approach, each AWS service provides CloudWatch metrics that can be used directly to understand the performance of the pipeline. For example, Kinesis Data Firehose publishes CloudWatch metrics for IncomingBytes, IncomingRecords, and S3 DataFreshness that let an operator understand the performance of the streaming application more directly.

Supporting infrastructure: Server-based clusters often need supporting infrastructure, such as cluster management software and centralized log collection, that must also be managed. AWS manages the clusters providing AWS services and removes this burden from the customer. Further, services like AWS Lambda deliver log records to CloudWatch Logs, allowing centralized log collection, processing, and analysis.

Software licenses: Customers must consider the cost of licenses and commercial support for software such as the operating systems, streaming platforms, application servers, and packages for security management and monitoring. The AWS service prices include software licenses, and no additional packages are needed for security management and monitoring of these services.

Table 2: Cost considerations when comparing serverless and server-based architectures

Example Use Case

For this whitepaper we focus on a use case of medical sensor devices that are wired to a patient receiving treatment at a hospital. First, the sensor data must be ingested securely at scale. Next, the patient's protected health information (PHI) is de-identified so that it can be processed in an anonymized way. As part of the processing, the data may need to be enriched with additional fields, or the data may be transformed. Finally, the sensor data is analyzed in real time to derive insights, such as detecting anomalies or developing trend patterns. In the sections that follow, we detail this use case with example realizations of the three patterns.

Sensor Data Collection

Wearable devices for health monitoring are a fast-growing IoT use case that allows real-time monitoring of a patient's health. To do this, the sensor data must first be ingested securely and at scale. It must then be de-identified to remove the patient's protected health information (PHI) so that the anonymized data can be processed in other systems downstream. An example solution that meets these requirements is shown in Figure 1.

Figure 1: Overview of Medical Device Use Case - Sensor or Device Data Collection

In Point 1 of Figure 1, one or more medical devices ("IoT sensors") are wired to a patient in a hospital. The devices transmit sensor data to the hospital IoT gateway, which then forwards it securely over the MQTT protocol to the AWS IoT gateway service for processing. A sample record at this point is:

{
  "timestamp": "2018-01-27T05:11:50",
  "device_id": "device8401",
  "patient_id": "patient2605",
  "name": "Eugenia Gottlieb",
  "dob": "08/27/1977",
  "temperature": 100.3,
  "pulse": 108.6,
  "oxygen_percent": 48.4,
  "systolic": 110.2,
  "diastolic": 75.6
}

Next, the data must be de-identified so that it can be processed in an anonymized way. AWS IoT is configured with an IoT rule that selects measurements for a specific set of patients, and an IoT action that delivers these selected measurements to a Lambda de-identification function.
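The paper does not show the rule definition inline, so the following is a rough, hypothetical illustration of how such a rule and Lambda action could be created with the AWS SDK for Python. The rule name, account ID, and function ARN are placeholders, and the topic filter reuses the 'LifeSupportDevice/Sensor' topic that the device simulator described later in this paper publishes to.

# Illustrative sketch: create an AWS IoT topic rule that routes device
# messages to the de-identification Lambda function. Names and ARNs are
# placeholders, not values prescribed by this whitepaper.
import boto3

iot = boto3.client("iot")

iot.create_topic_rule(
    ruleName="SelectPatientMeasurements",  # hypothetical rule name
    topicRulePayload={
        # Forward all measurements published on the simulator's topic.
        # A WHERE clause could narrow this to a specific set of patients.
        "sql": "SELECT * FROM 'LifeSupportDevice/Sensor'",
        "awsIotSqlVersion": "2016-03-23",
        "actions": [
            {
                "lambda": {
                    # Placeholder ARN of the de-identification function.
                    "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:DeIdentification"
                }
            }
        ],
        "ruleDisabled": False,
    },
)
# Note: AWS IoT must also be granted lambda:InvokeFunction permission on the
# target function (for example with lambda add-permission); omitted here.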
The Lambda function performs three tasks. First, it removes the PHI and PII attributes (patient name and patient date of birth) from the records. Second, for the purpose of future cross-reference, it encrypts and stores the Patient Name and Patient DOB attributes in a DynamoDB table along with the Patient ID. Finally, it sends the de-identified records to a Kinesis Data Firehose delivery stream (Point 2 in Figure 1). A minimal sketch of such a function appears at the end of this section. A sample record at this point is shown below; note that the date of birth ("dob") and "name" fields have been removed:

{
  "timestamp": "2018-01-27T05:11:50",
  "device_id": "device8401",
  "patient_id": "patient2605",
  "temperature": 100.3,
  "pulse": 108.6,
  "oxygen_percent": 48.4,
  "systolic": 110.2,
  "diastolic": 75.6
}

Best Practices

Consider the following best practices when deploying this pattern:

- Separate the Lambda handler entry point from the core logic. This makes the function easier to unit test.
- Take advantage of container reuse to improve the performance of your Lambda function. Make sure any externalized configuration or dependencies that your code retrieves are stored and referenced locally after initial execution.
- Limit the re-initialization of variables and objects on every invocation. Instead, use static initialization or constructors, global or static variables, and singletons.
- When delivering data to S3, tune the Kinesis Data Firehose buffer size and buffering interval to achieve the desired object size. With small objects, the cost of PUT and GET actions on the objects will be higher.
- Use a compression format to further reduce storage and data transfer costs. Kinesis Data Firehose supports GZIP, Snappy, and Zip data compression.

Cost Estimates

The monthly cost of the AWS services, from ingestion of the sensor data into the AWS IoT gateway, through de-identification in a Lambda function, to storing cross-reference data in a DynamoDB table, can be $117.19 for the small scenario, $1,132.01 for the medium scenario, and $4,977.99 for the large scenario. Refer to Appendix A1 - Sensor Data Collection for a detailed breakdown of the costs per service.
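Before moving on to the next pattern, here is a minimal sketch of how the de-identification function described above could be written. It is illustrative only and is not the code shipped in the pattern's GitHub sample (DeIdentification.zip); the environment variable names, the DynamoDB key schema, and the choice to base64-encode the KMS ciphertext are assumptions made for this sketch, and production code would add error handling.

# Illustrative sketch of the de-identification Lambda handler.
# AWS IoT invokes the function with the MQTT message payload as the event.
import base64
import json
import os
import boto3

dynamodb = boto3.resource("dynamodb")
kms = boto3.client("kms")
firehose = boto3.client("firehose")

TABLE = dynamodb.Table(os.environ["CROSS_REFERENCE_TABLE"])  # assumed env var
STREAM = os.environ["DELIVERY_STREAM_NAME"]                  # assumed env var
KEY_ID = os.environ["KMS_KEY_ID"]                            # assumed env var

def handler(event, context):
    record = dict(event)

    # 1. Remove the PHI/PII attributes from the record.
    name = record.pop("name", None)
    dob = record.pop("dob", None)

    # 2. Encrypt the PHI/PII and store it for cross-reference with the Patient ID.
    ciphertext = kms.encrypt(
        KeyId=KEY_ID,
        Plaintext=json.dumps({"name": name, "dob": dob}).encode("utf-8"),
    )["CiphertextBlob"]
    TABLE.put_item(Item={
        "patient_id": record["patient_id"],
        "timestamp": record["timestamp"],
        "encrypted_phi": base64.b64encode(ciphertext).decode("utf-8"),
    })

    # 3. Forward the de-identified record to the Firehose delivery stream.
    firehose.put_record(
        DeliveryStreamName=STREAM,
        Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
    )
    return {"status": "ok"}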
Streaming Ingest, Transform, Load (ITL)

After sensor data has been ingested, it may need to be enriched or modified with simple transformations, such as field-level substitutions and data enrichment from relatively small and static data sets. In the example use case, it may be important to associate sensor measurements with information about the device model and manufacturer. A solution that meets this need is shown in Figure 2. De-identified records from the prior pattern are ingested into a Kinesis Data Firehose delivery stream (Point 2 in Figure 2).

Figure 2: Overview of Medical Device Use Case - Stream Ingest, Transform, Load (ITL)

The solution introduces a Lambda function that is invoked by Kinesis Data Firehose as records are received by the delivery stream. The Lambda function looks up information about each device from a DynamoDB table and adds these details as fields on the measurement records. Firehose then buffers and sends the modified records to the configured destinations (Point 3 in Figure 2). A copy of the source records is saved in S3 as a backup and for future analysis. A sample record at this point is shown below; note the enriched "manufacturer" and "model" fields:

{
  "timestamp": "2018-01-27T05:11:50",
  "device_id": "device8401",
  "patient_id": "patient2605",
  "temperature": 100.3,
  "pulse": 108.6,
  "oxygen_percent": 48.4,
  "systolic": 110.2,
  "diastolic": 75.6,
  "manufacturer": "Manufacturer 09",
  "model": "Model 02"
}

Using AWS Lambda functions for transformations in this pattern removes the conventional hassle of setting up and maintaining infrastructure. Lambda runs more copies of the function in parallel in response to concurrent transformation invocations and scales precisely with the size of the workload, down to the individual request. As a result, the problem of idle infrastructure and wasted infrastructure cost is eliminated.

Once data is ingested into Firehose, a Lambda function is invoked that performs simple transformations (a sketch follows below):

- Replace the numeric timestamp with a human-readable string that allows us to query the data by day, month, or year. For example, the timestamp "1508039751778" is converted to the timestamp string "2017-10-15T03:55:51.778000".
- Enrich the data record by querying a table (stored in DynamoDB) using the Device ID to get the corresponding device manufacturer and device model. The function caches the device details in memory to avoid querying DynamoDB frequently and to reduce the number of Read Capacity Units (RCUs) consumed. This design takes advantage of container reuse in AWS Lambda to opportunistically cache data when a container is reused.
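A minimal sketch of such a transformation function is shown below. It follows the standard Kinesis Data Firehose transformation contract (base64-encoded records in; records with a recordId, result, and base64-encoded data out). The DynamoDB table name, its key attribute, and the assumption that the incoming timestamp is an epoch value in milliseconds are illustrative choices for this sketch, not details taken from the sample code in GitHub.

# Illustrative sketch of the Firehose transformation Lambda.
import base64
import json
import os
from datetime import datetime, timezone
import boto3

dynamodb = boto3.resource("dynamodb")
DEVICE_TABLE = dynamodb.Table(os.environ.get("DEVICE_TABLE", "DeviceDetails"))  # assumed name

# Cached across invocations while the Lambda container is reused.
device_cache = {}

def lookup_device(device_id):
    # Query DynamoDB only on a cache miss to reduce RCU consumption.
    if device_id not in device_cache:
        item = DEVICE_TABLE.get_item(Key={"device_id": device_id}).get("Item", {})
        device_cache[device_id] = item
    return device_cache[device_id]

def handler(event, context):
    output = []
    for rec in event["records"]:
        data = json.loads(base64.b64decode(rec["data"]))

        # 1. Convert the epoch-millisecond timestamp to a readable string.
        ts = int(data["timestamp"])
        data["timestamp"] = datetime.fromtimestamp(ts / 1000.0, tz=timezone.utc).isoformat()

        # 2. Enrich with device details looked up (and cached) from DynamoDB.
        device = lookup_device(data["device_id"])
        data["manufacturer"] = device.get("manufacturer", "unknown")
        data["model"] = device.get("model", "unknown")

        output.append({
            "recordId": rec["recordId"],
            "result": "Ok",
            "data": base64.b64encode((json.dumps(data) + "\n").encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}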
Best Practices

Consider the following best practices when deploying this pattern:

- When delivering data to S3, tune the Kinesis Data Firehose buffer size and buffer interval to achieve your desired object size. With small objects, the cost of object actions (PUTs and GETs) will be higher.
- Use a compression format to reduce your storage and data transfer costs. Kinesis Data Firehose supports GZIP, Snappy, and Zip data compression.
- When delivering data to Amazon Redshift, consider the best practices for loading data into Redshift.
- When transforming data in the Firehose delivery stream with an AWS Lambda function, consider enabling Source Record Backup for the delivery stream. This feature backs up all untransformed records to S3 while delivering transformed records to the destinations. Although this increases your storage footprint on S3, the backup data can come in handy if there is an error in your transformation Lambda function.
- Firehose buffers records up to the configured buffer size or 3 MB, whichever is smaller, and invokes the transformation Lambda function with each buffered batch. The buffer size therefore determines the number of Lambda function invocations and the amount of work sent in each invocation. A small buffer size means a large number of Lambda function invocations and a higher invocation cost. A large buffer size means fewer invocations but more work per invocation; depending on the complexity of the transformation, the function may exceed the 5-minute maximum invocation duration.
- The lookup during the transformation happens at the ingest record rate. Consider using Amazon DynamoDB Accelerator (DAX) to cache results, reduce lookup latency, and increase lookup throughput.

Cost Estimates

The monthly cost of the AWS services, from ingestion of the streaming data into Kinesis Data Firehose, through transformation in a Lambda function, to delivery of both the source records and the transformed records into S3, can be as little as $18.11 for the small scenario, $138.16 for the medium scenario, and $672.06 for the large scenario. Refer to Appendix A2 - Streaming Ingest, Transform, Load (ITL) for a detailed breakdown of the costs per service.

Real-Time Analytics

Once streaming data is ingested and enriched, it can be analyzed to derive insights in real time. In the example use case, the de-identified and enriched records need to be analyzed in real time to detect anomalies in any of the devices in the hospital and to notify the appropriate device manufacturers. By assessing the condition of the devices, the manufacturer can start to spot patterns that indicate when a failure is likely to arise. In addition, by monitoring this information in near real time, the hospital provider can quickly react to concerns before anything goes wrong. If an anomaly is detected, the device is immediately pulled out of service and sent for inspection. The benefits of this approach include a reduction in device downtime, increased device monitoring, lower labor costs, and more efficient maintenance scheduling. It also allows device manufacturers to start offering hospitals more performance-based maintenance contracts. A solution that meets this requirement is shown in Figure 3.

Figure 3: Overview of Medical Device Use Case - Real-Time Analytics

Copies of the enriched records from the prior pattern (Point 4 in Figure 3) are delivered to a Kinesis Data Analytics application that detects anomalies in the measurements across all devices for a manufacturer. The anomaly scores (Point 5 in Figure 3) are sent to a Kinesis data stream and processed by a Lambda function. A sample record with the added anomaly score is shown below:

{
  "timestamp": "2018-01-27T05:11:50",
  "device_id": "device8401",
  "patient_id": "patient2605",
  "temperature": 100.3,
  "pulse": 108.6,
  "oxygen_percent": 48.4,
  "systolic": 110.2,
  "diastolic": 75.6,
  "manufacturer": "Manufacturer 09",
  "model": "Model 02",
  "anomaly_score": 0.9845
}

Based on a range or threshold of anomaly scores, the Lambda function sends a notification to the manufacturer with the model number, the device ID, and the set of measurements that caused the anomaly. The Kinesis Analytics application code uses the pre-built anomaly detection function RANDOM_CUT_FOREST. This function is the crux of the anomaly detection: it takes the numeric fields in the message (in our case "temperature", "pulse", "oxygen_percent", "systolic", and "diastolic") to determine the anomaly score. To learn more about RANDOM_CUT_FOREST, see the Amazon Kinesis Data Analytics documentation: https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sqlrf-random-cut-forest.html

The following is an example of anomaly detection. The diagram shows three clusters and a few anomalies randomly interjected.
The red squares show the records that received the highest anomaly scores according to the RANDOM_CUT_FOREST function. The blue diamonds represent the remaining records. Note how the highest-scoring records tend to fall outside the clusters.

Figure 4: Example of anomaly detection

Below is the Kinesis Analytics application code. The first block creates a temporary in-application stream (TEMP_STREAM) that stores the output records together with the anomaly score generated by RANDOM_CUT_FOREST. The second block creates a pump (STREAM_PUMP) that reads from the incoming sensor data stream (SOURCE_SQL_STREAM_001) and calls the pre-built anomaly detection function RANDOM_CUT_FOREST.

-- Create a temporary in-application stream and define its schema
CREATE OR REPLACE STREAM "TEMP_STREAM" (
    "device_id"      VARCHAR(16),
    "manufacturer"   VARCHAR(16),
    "model"          VARCHAR(16),
    "temperature"    INTEGER,
    "pulse"          INTEGER,
    "oxygen_percent" INTEGER,
    "systolic"       INTEGER,
    "diastolic"      INTEGER,
    "ANOMALY_SCORE"  DOUBLE);

-- Compute an anomaly score for each record in the source stream
-- using Random Cut Forest
CREATE OR REPLACE PUMP "STREAM_PUMP" AS INSERT INTO "TEMP_STREAM"
SELECT STREAM "device_id", "manufacturer", "model", "temperature",
       "pulse", "oxygen_percent", "systolic", "diastolic", "ANOMALY_SCORE"
FROM TABLE(RANDOM_CUT_FOREST(
    CURSOR(SELECT STREAM "device_id", "manufacturer", "model", "temperature",
                  "pulse", "oxygen_percent", "systolic", "diastolic"
           FROM "SOURCE_SQL_STREAM_001")
));

The post-processing Lambda function in this use case performs the following simple tasks on the analytics data records that carry anomaly scores:

- The function uses two environment variables, ANOMALY_THRESHOLD_SCORE and SNS_TOPIC_ARN. Set ANOMALY_THRESHOLD_SCORE after running initial tests with controlled data to determine an appropriate value. SNS_TOPIC_ARN is the SNS topic to which the function delivers the anomaly records.
- The function iterates through each batch of analytics data records, inspecting the anomaly score, and finds the records whose score exceeds the threshold.
- The function then publishes those records to the SNS topic defined in the environment variable. In the deployment script referenced in Appendix B3 under Package and Deploy, you set the variable NotificationEmailAddress to the email address that will be subscribed to the SNS topic.

The sensor data is also stored in S3, making it available for all kinds of future analysis by data scientists working in different domains. The streaming sensor data is passed to a Kinesis Data Firehose delivery stream, where it is buffered and compressed before being PUT into S3.

Best Practices

Consider the following best practices when deploying this pattern:

Set up Amazon CloudWatch Alarms: Use the CloudWatch metrics that Amazon Kinesis Data Analytics provides: input bytes and input records (the number of bytes and records entering the application), output bytes and output records, and MillisBehindLatest (which tracks how far behind the application is in reading from the streaming source).

Defining Input Schema: Adequately test the inferred schema. The discovery process uses only a sample of records on the streaming source to infer a schema. If your streaming source has many record types, there is a possibility that the discovery API
missed sampling one or more record types which can result in a schema that does not accurately reflect data on the streaming source Connecting To Outputs We recommend that every application have at least two outputs Use the first de stination to insert the results of your SQL queries Use the second destination to insert the entire error stream and send it to an S3 bucket through an Amazon Kinesis Firehose delivery stream Authoring Application Code: During development keep window s ize small in your SQL statements so that you can see the results faster When you deploy the application to your production environment you can set the window size as appropriate Instead of a single complex SQL statement you might consider breaking it into multiple statements in each step saving results in intermediate in application streams This might help you debug faster When using tumb ling windows we recommend that you use two windows one for processing time and one for your logical time (ingest time or event time) For more information see Timestamps and the ROWTIME Column Cost Estimates The monthly cost of the AWS services for doing the anomaly detection in Kinesis Analytics reporting of the anomaly score using the lambda function to an SNS Topic and storing the anomaly score data to a n S3 bucket for future analysis can be $70581 for the Small scenario $81709 for the Medium scenario and $131205 for the Large scenario Please refer to Appendix A3 – Real Time Analytics for a detailed breakdown of t he costs per service ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 17 Customer Case Studies Customers of different sizes and across different business segments are using a serverless approach for data processing and analytics Below are some of their stories To see more serverless case studies and cus tomer talks go to our AWS Lambda Resources page Thomson Reuters is a leading source of information —including one of the world’ s most trusted news organizations —for the world’s businesses and professionals In 2016 Thomson Reuters decided to build a solution that would enable it to capture analyze and visualize analytics data generated by its offerings providing insights to he lp product teams continuously improve the user experience This solution called Product Insights ingests and delivers data to a streaming data pipeline using AWS Lambda and Amazon Kinesis Streams and Amazon Kinesis Data Firehose The data is then piped into permanent storage or into an Elasticsearch cluster for real time data analysis Thomson Reuters can now process up to 25 billion events per month Read the case study » iRobot is a leading global consumer robot company designs an d builds robots that empower people to do more both inside and outside the home iRobot created the home cleaning robot category with the introduction of its Roomba Vacuuming Robot in 2002 Today iRobot reports that connected Roomba vacuums operate in mor e than 60 countries with total sales of connected robots projected to reach more than 2 million by the end of 2017 To handle such scale at a global level iRobot implemented a completely serverless architecture for its mission critical platform At the h eart of this solution is AWS Lambda AWS IoT Platform and Amazon Kinesis With serverless iRobot is able to kee p the cost of the cloud platform low and manage the solution with fewer than 10 people Read the case study » ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 18 Nextdoor is a free 
private social network for neighborhoods The Systems team at Nextdoor is responsible for managing the data ingestion pipeline which services 25 billion syslog and tracking events per day As the data volumes grew keeping the data ingestion pipeline stable became a f ull time endeavor that distracted the team from core responsibilities like developing the product Rather than continue running a large infrastructure to power data pipelines Nextdoor decided to implement a serverless ETL built on AWS Lambda See Nextdoor’s 2017 AWS re:Invent talk to learn more about Nextdoor’s serverless solution and how you can leverage Nextdoor scale serverless ETL through their open source project Bender Hear the Nextdoor talk » Conclusion Serverless computing eliminates the undifferentiated heavy lifting associated with building and managing server infrastruc ture at all levels of the technology stack and introduces a pay perrequest billing model where there are no more costs from idle compute capacity With data stream processing you can evolve your applications from traditional batch processing to realtime analytics which allows you to extract deeper insights on how your business performs In this whitepaper we reviewed how by combining these two powerful concepts developers can work with a clean application model that helps them deliver complex data proce ssing applications faster and organizations to only pay for useful work To learn more about serverless computing visit our page Serverless Computing and Applications You can also see more resources cus tomer talks and tutorials on our Serverless Data Processing page Further R esources For more serverless data processing resources including tutoria ls documentation customer case studies talks and more visit our Serverless Data Processing Page For more resources on serverless and AWS Lambda please see the AWS Lambda Resources page Read related whitepapers about serverless computing and data processing: Streaming Data Solutions on AWS with Amazon Kinesis Serverless: Changing the Face of Business Economics Optimizin g Enterprise Economics with Serverless Architectures ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 19 Contributors The following individuals and organizations contributed to this document: Akhtar Hossain Sr Solutions Archit ect Global Life science Amazon Web Services Maitreya Ranganath Solutions A rchitect Amazon Web Services Linda Lian Product Marketing Manager Amazon Web Services David Nasi Product Manager Amazon Web Services Document Revisions Date Description Month YYYY Brief description of revisions Month YYYY First publication Appendix A – Detailed Cost Estimates In this Appendix we provide the detailed costs estimates that were summarized in the main text Common Cost Assumptions We estimate the monthly cost of the resources required to implement each pattern for three traffic scenario s: Small – peak rate of 50 records / second average 1 KB per record Medium – peak rate of 1000 records / second average 1 KB per record Large – peak rate of 5000 records / second average 1 KB per record We assume that there are 4 peak hours in a day whe re records are ingested at the peak rate for the scenario In the rest of the 20 hours the rate falls to 20% of the peak data rate This is a simple variable rate model used to estimate the volume of data ingested monthly ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 20 Appendix A1 – Sensor Data Coll ection The detailed monthly 
cost of the Sensor Data Collection pattern is estimated in Table 3 below The services are configured as follows: AWS IoT Gateway Service Connectivity per day is assumed at 25%/day for the small use case 50%/day for the medium use case and 70%/day for the large use case Kinesis Firehose buffer size is 100MB Kinesis Firehose buffer interval is 5 minutes (300 seconds) Small Medium Large Peak Rate (Messages/Sec) 100 1000 5000 Record Size (KB) 1 1 1 Daily Records (Numbers) 2880000 28800000 144000000 Monthly Records (Numbers) 86400000 864000000 4320000000 Monthly Volume (KB) 86400000 864000000 4320000000 Monthly Volume (GB) 8239746094 8239746094 4119873047 Monthly Volume (TB) 008046627 0804662704 4023313522 (Table continued on next page) ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 21 AWS IoT Costs No of Devices 1 1 1 Connectivity Percentage Time / day 25 50 75 Messaging (Number of Msgs/day) 2880000 28800000 144000000 Rules Engine (Number of Rules) 1 1 1 Device Shadow 0 0 0 Device Registry 0 0 0 Total Cost Based on AWS IoT Cor e Calculator $11200 $112300 $495200 Amazon Kinesis Firehose Delivery Stream Record Size Rounded Up to 5 KB 5 5 5 Monthly Volume for Firehose(KB) 432000000 4320000000 21600000000 Monthly Volume for Firehose(GB) 4119873047 4119873047 2059936523 Firehose Monthly Cost 1194763184 1194763184 5973815918 Amazon Dynamo DB RCU 1 1 1 WCU 10 10 10 Size (MB) 1 1 1 RCU Cost 00936 00936 00936 WCU Cost 468 468 468 Size Cost 0 0 0 DynamoDB Monthly Cost 47736 47736 47736 ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 22 AWS Key Management Service (KMS ) Cost Monthly Record Number 86400000 864000000 4320000000 Number of Encryption Request 20000 free 86380000 863980000 4319980000 Encryption Cost 25914 259194 1295994 KMS Monthly Cost 25914 259194 1295994 AWS Lambda Invocations 59715 597149 2985745 Duration (ms) 16496242 164692419 824812095 Memory(MB) 1536 1536 1536 Memory Duration (GB/Sec) 2474435 24744363 123721814 Lambda Monthly Cost 042 424 2122 Estim ated Total Monthly Cost $38828 $384343 $1853532 Table 3 Sensor data collection details of estimated costs ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 23 Appendix A2 – Streaming Ingest Transform Load (ITL) The detailed monthly cost of the Streaming Ingest Transform Load (ITL) pattern is estimated in Table 4 below The services are configured as follows: Kinesis Firehose buffer size is 100MB Kinesis Firehose buffer interval is 5 minutes (300 seconds) Buffered records are stored in S3 compressed using GZIP assuming 1/4 compression ratio ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 24 Small Medium Large Peak Rate (records/second) 100 1000 5000 Record Size (KB) 1 1 1 Amazon Kinesis Firehose Monthly Volume (GB) (Note 1) 411987 411987 2059937 Kinesis Monthly Cost $1195 $11948 $59738 Amazon S3 Source Record Backup Storage (GB) 2102 21022 105108 Transformed Records Storage (GB) 2102 21022 105108 PUT API Calls (Note 2) 17280 17280 84375 S3 Monthly Cost $247 $2387 $11935 AWS Lambda Invocations 59715 597149 2985745 Duration (ms) 16496242 164962419 8248120 95 Function Memory (MB) 1536 1536 1536 Memory Duration (GB seconds) 2474436 24744363 123721814 Lambda Monthly Cost (Note 3) $042 $424 $2122 Amazon DynamoDB Read Capacity Units (Note 4) 50 50 50 DynamoDB Monthly Cost $468 $468 $468 Total Monthly Cost $1811 $13816 $67206 Table 4 Streaming Ingest Transform Load (ITL) details of 
estimated costs Notes: 1 Kinesis Firehose rounds up the record size to the nearest 5KB In the three scenarios above each 1KB record is rounded up to 5KB when calculating the monthly volume ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 25 2 S3 PUT API calls were estimated assuming one PUT call per S3 object created by the Firehose delivery stream At low record rates the number of S3 objects is determined by the Firehose buffer duration (5 minutes) At high r ecord rates the number of S3 objects is determined by the Firehose buffer size (100MB) 3 The AWS Lambda free tier includes 1M free requests per month and 400000 GB seconds of compute time per month The monthly cost estimated above is before the free tier is applied 4 The DynamoDB Read Capacity Units (RCU) estimated above were the result of caching lookups in memory and taking advantage of container reuse This meant that the number of RCU required on the Table is reduced ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 26 Appendix A3 – RealTime Analyti cs The detailed monthly cost of the Real Time Analytics pattern is estimated in the Table 5 below Small Medium Large Peak Rate (Messages/Sec) 100 1000 5000 Record Size (KB) 1 1 1 Daily Records (Numbers) 2880000 28800000 144000000 Monthly Records (Num bers) 86400000 864000000 4320000000 Monthly Volume (KB) 86400000 864000000 4320000000 Monthly Volume (GB) 8239746094 8239746094 4119873047 Monthly Volume (TB) 008046627 0804662704 4023313522 Amazon Kinesis Analytics Peak Hours in a day (hr s) 4 4 4 Average Hours in a day (hrs) 20 20 20 Kinesis Processing Unit (KPU)/hr Peak 2 2 2 Kinesis Processing Unit (KPU)/hr Avg 1 1 1 Kinesis Analytics Monthly Cost $69240 $69240 $69240 Amazon Kinesis Firehose Delivery Stream Record Size Rounded Up to 5 KB 5 5 5 Monthly Volume for Firehose (KB) 432000000 4320000000 21600000000 Monthly Volume for Firehose(GB) 4119873047 4119873047 2059936523 Kinesis Firehose Monthly Cost 1194763184 1194763184 5973815918 (Table continued on next page) ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 27 Small Medium Large Amazo n S3 S3 PUTs per Month based on Size only 84375 84375 84375 S3 PUTs per Month based on Time only 8640 8640 8640 Expected S3 PUTs (max of size & time) 8640 8640 8640 Total Puts (source backup + Analytics data) 17280 17280 17280 Analytics Data Compressed (GB) 2102150444 2102 2102 Source Data Compressed (GB) 2102 2102 2102 Source Record Backup 048346 048346 048346 PUTs 00864 00864 00864 Analytics Data Records 0483494602 048346 048346 S3 Monthly Cost 1053354602 105332 105332 AWS Lambda Invocations 59715 597149 2985745 Duration (ms) 16496242 164692419 824812095 Memory(MB) 1536 1536 1536 Memory Duration (GB/Sec) 2474435 24744363 123721814 Lambda Monthly Cost 042 424 2122 Estimated Tota l Monthly Cost $70582 $81717 $131205 Table 5 Real time analytics details of estimated costs ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 28 Appendix B – Deploying and Testing Patterns Common Tasks Implementation details of the three patterns are described in the following sections Each patter n can be deployed ran and tested independently of the other patterns To deploy each pattern we provide links to the AWS Serverless Application Model (AWS SAM) template that can be deployed to any AWS Region AWS SAM extends AWS CloudFormation to provide a simplified syntax for defining the Amazon API Gateway APIs AWS Lambda 
functions and Amazon DynamoDB tables needed by your serverless application The solutions for three patterns can be downloaded from the public GitHub repo below: https://githubcom/aws samples/aws serverless stream ingest transform load https://githubcom/aws samples/aws serverless realtime analytics https://githubcom/awslabs/aws serverless sensor data collection Create or Identify an S3 Bucket for Artifacts To use the AWS Serverless Application Model (SAM) you need an S3 bucket where your code and template artifacts are uploaded If you already have a suitable bucket in your AWS Account you can simply note the S3 bucket name and skip this step If you instead choose to create a new bucket then you can follow the steps below: 1 Log into the S3 console 2 Choose Create Bucket and type a bucket name Ensure that the name is globally unique – we suggest a name like <random string> stream artifacts Choose the AWS Region where you want to deploy the pattern 3 Choose Next on the following pages to accept the defaults On the last page choose Create Bucket to create the bucket Note the name of the bucket as you’ll need it to deploy the three patterns below Create an Amazon Cognito User for Kinesis Data Generator To simulate hospital devices to test the Streaming Ingest Transform Load (ITL) and Real Time Analytics patterns you will use the Amazon Kinesis Data Generato r (KDG) tool You can learn more about the KDG Tool in this blog post ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 29 You can access the Amazon Kinesis Data Generator here Click on the H elp menu and follow the instructions to create a Cognito username and password that will use to log into the KDG Appendix B1 – Sensor Data Collectio n In this section we will describes how you can deploy the use case into your AWS Account and then run and test Review SAM Template Review the Serverless Application Model (SAM) template in the file ‘ SAM For SesorDataCollectionyaml ’ by opening the file i n an editor of your choice You can use Notepad++ which renders the JSON file nicely This template creates the following resources in your AWS Account: An S3 Bucket that is used to store the De Identified records A Firehose Delivery Stream and associate d IAM Role used to buffer and collect the De Identified records compressed in a zip file and stored in the S3 Bucket An AWS Lambda Function that performs the De Identification of the incoming messages by removing the PHI/PII Data The function also stores the PHI / PII Data into DynamoDB along with PatientID for cross reference The PHI / PII data are encrypted using AWS KMS Keys An AWS Lambda Function that does hospital Device Simulation for the use case The Lambda function uses generates sensor simulat ion data and publishes to IoT MQTT Topic A DynamoDB table that stores encrypted cross reference data Patient ID Timestamp Patient Name and Patient Date of Birth Package and Deploy Follow the following steps to package and deploy the Sensor Data Collect ion scenario: 1 Clone and download the files from the GitHub folder here to a folder on your local machine On your local machine make sure you have the following files: 11 DeIdentificationzip 12 PublishIoTDatazip 13 SAM ForSesorDataCollectionyaml 14 deploy ersensordatacollectionsh 2 Create an S3 Deployment Bucket in the AWS Region where you intend to deploy the solution Note down the S3 bucket name You will need the S3 bucket name later ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 
30 3 From your local machine u pload the foll owing lambda code zip files into the S3 Deployment Bucket you just created in Step 2: 31 DeIdentificationzip 32 PublishIoTDatazip 4 In the AWS Management console launch an ec2 Linux instance that will be used to run the CloudFormation template Launch an ec2 instance of type Amazon Linux AMI 2018030 (HVM) SSD Volume Type (t2macro) ec2 in the AWS Region where you want to deploy the solution Make sure you enable SSH access to the instance For details on how to launch an ec2 instance and enable SSH access see https://awsamazoncom/ec2/getting started/ 5 On your local machine open the deployer sensordatacollectionsh file in a text editor and update the three variables indicated as PLACE_HOLDER – S3ProcessedDataOutputBucket (the S3 bucket Name where the Processed Output Data will be stored) LamdaCodeUriBucket (the S3 Bucket Name you created in Step 2 and uploaded the lambda code file s) and the environment variable REGION to the AWS Region where you intend to deploy the solution Save the deployer sensordatacollectionsh file 6 Once the ec2 instance you just launched is the instance state running using SSH log into the ec2 Linux box Create a folder called samdeploy under /home/ec2 user/ Upload the following files into the folder /home/ec2 user/ samdeploy 51 SAM ForSesorDataCollectionyaml 52 deploy ersensordatacollectionsh 7 On the ec2 instance change dir ectory to /home/ec2 user/ samdeploy N ext you will run two ClouFormation CLI commands called package and deploy Both the steps are in a single script file deployer sensordatacollectionsh Review the script file You can now execute the package and deploy the SAM template b y runnin g the following command at the command prompt: $ sh /deployer sensordatacollectionsh View Stack Details You can view the progress of the stack creation by logging into the CloudFormation console Ensure you choose the AWS Region where you deployed the stack Locate the Stack named SensorDataCollectionStack from the list of stacks choose the Events tab and refresh the page to see the progress of resource creation ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 31 The stack creation takes approximately 3 5 minutes The stack’s state will change to CREATE_COMPLETE once all resources are successfully created Test the Pipeline The SensorDataCollectionStack includes an IoT Device Simulator Lambda Function called PublishToIoT The lambda function is triggered by AWS CloudWatch event rule The event rule invokes the lambda function on a schedule of every 5 minutes The Lambda function generates simulated sensor device messages matching the pattern discussed earlier and publishes it to the MQTT topic The function takes a JSON string as input called the SimulatorConfig to set the number of messages to gener ate per invocation In our example we have set 10 messages per invocation of the lambda function The input parameter to the lambda function is set to the JSON string {"NumberOfMsgs": "10"} The solution w ill start immediately after the stack has deployed successfully You observe the followings: 1 The CloudWatch Event / Rule triggers every 5 min to invoke the Device Simulator lambda function The lambda function is configured to generate by default 10 sensor data messages per invocati on and publish these to the IoT Topic – “LifeSupportDevice/Sensor” 2 The processed data (without the PHI / PII) will appear in the S3 Processed Data Bucket 3 In the DynamoDB console you will see the cross reference data 
composed o f the PatientID PatientName and PatientDOB in the Table – PatientReferenceTable To stop the Testing of the pattern simply go to the CloudWatch console and disable the Events/Rule called SensorDataCollectionStack IoTDeviceSimmulatorFunct XXXXXXX NOTE: At the time of writing this whitepaper the team at AWS Solution Group has created a robust IoT Device Simulator To help customers more easily test device integration and IoT backend services This solution provides a web based graphical user interface (GU I) console that enables customers to create and simulate hundreds of virtual connected devices without having to configure and manage physical devices or develop time consuming scripts More details can be found at https://awsamazoncom/answers/iot/iot device simulator/ However our simple pattern you will use the IoT Device Simulator Lambda Function that is invoke by the CloudWatch Event/Rule By default the Rule is scheduled to t rigger every 5 minute ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 32 Cleaning up Resources Once you have tested this pattern you can delete and clean up the resources created so that you are not charged for these resources 1 On the CloudWatch Console in the AWS Region where you deployed the pattern under Events / Rules disable the Rule – SensorDataCollectionStack IoTDeviceSimmulatorFunct XXXXXXX 1 On the S3 console choose the output S3 Processed Data B ucket and choose Empty Bucket 2 On the CloudFormation console choose the SensorDataCollectionStack stack and choose Delete Stack 3 Finally on the EC2 console terminate the ec2 Linux instance you created to run the CloudFormation template to deploy the solution Appendix B2 – Streaming Ingest Transform Load (ITL) In this section we’ll describe how you can deploy the pattern in your AWS Account and test the transformation function and monitor the performance of the pipeline Review SAM Template Review the Serverless Application Model (SAM) template in the file ‘stream ing_ingest_transform_load template ’ This template creates the following resources in your AWS Account: An S3 Bucket that is used to store the transformed records and the source records from Kinesis Firehose A Firehose Delivery Stream and associated IAM Role used to ingest records An AW S Lambda Function that performs the transformation and enrichment described above A DynamoDB table that stores device details that are looked up by the transformation function ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 33 An AWS Lambda Function that inserts sample device detail records into the Dyna moDB table This function is invoked once as a custom CloudFormation resource to populate the table when the stack is created A CloudWatch Dashboard that makes it easy to monitor the processing pipeline Package and Deploy In this step you’ll use the Cl oudFormation package command to upload local artifacts to the artifacts S3 bucket you chose or created in the previous step This command also returns a copy of the SAM template after replacing references to local artifacts with the S3 location where the p ackage command uploaded your artifacts After this you will use the CloudFormation deploy command to create the stack and associated resources Both steps above are included in a single script deployersh in the github repository Before executing this sc ript you need to set the artifact S3 bucket name and region in the script Edit the script in any text editor and replace PLACE_HOLDER 
with the name of the S3 bucket and region from the previous section Save the file You can package and deploy the SAM template by running the following command: $ sh /deployersh View Stack Details You can view the progress of the stack creation by logging into the CloudFormation console Ensure you choose the AWS Region where you deployed the stack Locate the Stack n amed Stream ingITL from the list of stacks choose the Events tab and refresh the page to see the progress of resource creation The stack creation takes approximately 3 5 minutes The stack’s state will change to CREATE_COMPLETE once all resources are suc cessfully created Test the Pipeline Follow the steps below to test the pipeline: ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 34 1 Log into the CloudFormation console and locate the stack for the Kinesis Data Generator Cognito User you created in Create an Amazon Cognito User for Kinesis Data Generator above 2 Choose the Outputs tab and click on value for the key KinesisDataGeneratorUrl 3 Log in with the username and password you used when creating the Cognito User CloudFormation stack earlier 4 From the Kinesis Data Generator cho ose the Region where you created the serverless application resources choose the IngestStream delivery stream from the drop down 5 Set the Records per second as 100 to test the first traffic scenario 6 Set the Record template as the following to generate t est data: { "timestamp" : {{datenow("x")}} "device_id" : "device{{helpersreplaceSymbolWithNumber("####")}}" "patient_id" : "patient{{helpersreplaceSymbolWithNumber("####")}}" "temperature" : {{randomnumber({"min":96"max":104})}} "pulse" : {{rando mnumber({"min":60"max":120})}} "oxygen_percent" : {{randomnumber(100)}} "systolic" : {{randomnumber({"min":40"max":120})}} "diastolic" : {{randomnumber({"min":40"max":120})}} "text" : "{{loremsentence(1 40)}}" } We are using a text field in the template to ensure that our test records are approximately 1KB in size as required by the scenarios 7 Choose Send Data to send generated data at the chosen rate to the Kinesis Firehose Stream ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 35 Monitor the Pipeline Follow the steps below to monitor the pe rformance of the pipeline and verify the resulting objects in S3: 1 Switch to the CloudWatch Console and choose Dashboards from the menu on the left 2 Choose the Dashboard named StreamingITL 3 View the metrics for Lambda Kinesis Firehose and DynamoDB on the dashboard Choose the duration to zoom into a period of interest Figure 7 CloudWatch Dashboard for Streaming ITL 4 After around 5 8 minutes you will see transformed records arrive in the output S3 bucket under the prefix transformed/ 5 Download a sa mple object from S3 to verify its contents Note that objects are stored GZIP compressed to reduce space and data transfers 6 Verify that the transformed records contain a human readable time stamp string device model and manufacturer These are enriched f ields looked up from the DynamoDB table ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 36 7 Verify that a copy of the untransformed source records is also delivered to the same bucket under the prefix source_records/ Once you have verified the pipeline is working correctly for the first traffic scenario you can now increase the rate of messages to 1000 requests / second and then to 5000 requests / second Cleaning up Resources Once you have tested 
this pattern you can delete and clean up the resources created so that you are not charged for these reso urces 4 Stop sending data from the Kinesis Data Generator 5 On the S3 console choose the output S3 bucket and choose Empty Bucket 6 On the CloudFormation console choose the StreamingITL stack and choose Delete Stack Appendix B3 – RealTime Analytics In this section describes how you can deploy the use case into your AWS Account and then run and test Review SAM Template Review the Serverless Application Model (SAM) template in the file ‘ SAM For RealTimeAnalyticsyaml ’ by opening the file in a text edit or of your choice You can use Notepad++ which renders the JSON file nicely This template creates the following resources in your AWS Account: An S3 Bucket (S3ProcessedDataOutputBucket ) that is used to store the Real Time Analytics records containing the anomaly score A Firehose Delivery Stream and the associated IAM Role used as an input stream to the Kinesis Analytic service A Kinesis Analytics Application named DeviceDataAnalytics with one input stream (Firehose Delivery Stream) Application Code (SQL Statements) a Destination ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 37 Connection (Kinesis Analytics Application Output) as a Lambda Function and a second Destination Connection (Kinesis Analytics Output) as Kinesis Firehose Delivery Stream A SNS Topic named publishtomanufacturer and a n email subs cription the SNS Topic You configure the e mail in the deployment script deployer realtimeanalyticssh The variable to set your e mail is named NotificationEmailAddress in the deployment script An AWS Lambda Function that interrogates the data record se t received from the Analytics Stream picking up and publishing the record to a SNS Topic where the anomaly score is higher than a threshold defined (in this case in the Lambda function environment variable) A second AWS Lambda Function named KinesisAnaly ticsHelper that is used to start the Kinesis Analytics Application DeviceDataAnalytics immediately after the Kinesis Analytics Application is created A Kinesis Firehose Delivery Stream that aggregates that records from the Analytics Destination Stream buffers the record and zip s and put the zipped file into the S3 bucket ( S3ProcessedDataOutputBucket ) Package and Deploy Follow the following steps to package and deploy the Real Time Analytics scenario: 1 Clone and download the files from the GitHub folder here to a folder on your local machine On your local machine make sure you have the following files: 11 KinesisAnalyticsOuputToSNSzip 12 SAM ForRealTimeAnalyticsyaml 13 depl oyer realtimeanalytics sh 2 Create an S3 Deployment Bucket in the AWS Region where you intend to deploy the solution Note down the S3 bucket name You will need the S3 bucket name later 3 From your local machine upload the following lambda code zip files i nto the S3 Deployment Bucket you just created in Step 2: 31 KinesisAnalyticsOuputToSNSzip 4 In the AWS Management console launch an ec2 Linux instance that will be used to run the CloudFormation template Launch an ec2 instance of type Amazon Linux AMI 2018 030 (HVM) SSD Volume Type (t2macro) ec2 in the AWS Region where you want to deploy the solution Make sure you enable SSH access to the instance For ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 38 details on how to launch an ec2 instance and enable SSH access see https://awsamazoncom/ec2/getting started/ 5 On your local machine open 
the deployer realtimeanalyticssh file in a text editor and update the five variables indicated as PLACE_HOLDER S3ProcessedDataOutputBucket (the S3 bucket Name where the Processed Output Data will be stored) NotificationEmailAddress (the e mail address you specify to receive notification that the anomaly score has exceeded a threshold value) AnomalyThresholdScore (the threshold value that the Lambda funct ion will use to identify the records to send for notification ) LamdaCodeUriBucket (the S3 Bucket Name you created in Step 2 and uploaded the lambda code files) and the variable REGION to the AWS Region where you intend to deploy the solution Save the dep loyer sensordatacollectionsh file 6 Once the ec2 instance you just launched is in the instance state running using SSH log into the ec2 Linux box Create a folder called samdeploy under /home/ec2 user/ Upload the following files into the folder /home/e c2user/ samdeploy 61 SAM ForRealTimeAnalyticsyaml 62 deployer realtimeanalyticssh 7 On the ec2 instance change directory to /home/ec2 user/ samdeploy Next you will run two ClouFormation CLI commands called package and deploy Both the steps are in a single script file deployer realtimeanalyticssh Review the script file You can now execute the package and deploy the SAM template by running the following command at the command prompt: $ sudo yum install dos2unix $ dos2unix deployer realtim eanalyticssh $ sh / deployer realtimeanalyticssh 8 As part of the deployment of the pattern an e mail (the e mail specified in step 5) subscription is setup to the SNS Topic Check your e mail in box for an e mail requesting subscription confirmation Open the e mail and confirm the subscription verification Subsequently you will be receiving e mail notifications for the device data records that has exceeded the specified threshold View Stack Details You can view the progress of the stack creation by logging into the CloudFormation console Ensure you choose the AWS Region where you deployed the stack Locate the Stack named DeviceDataRealTimeAnalyticsStack from the list of stacks choose the Events tab and refresh the page to see the progress of re source creation The stack creation takes approximately 3 5 minutes The stack’s state will change to CREATE_COMPLETE once all resources are successfully created ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 39 Test the Pipeline To test this pattern you will use the Kinesis Data Generator (KDG) to Too l to generate and publish test data Refer to section Create an Amazon Cognito User for Kinesis Data Generator section of the whitepaper Using the username and password that you generated during t he configuration log into the KD G tool Provide the followi ng information: 1 Region : Select the Region where you have installed the DeviceDataRealTimeAnalyticsStack 2 Stream / Delivery Stream : Select the delivery stream called DeviceData Input DeliveryStream 3 Records per Second: Enter the record generation / submission rate for simulating the hospital device data 4 Record template: KDG uses a record template to generate random data for each of the record fields We will be using the following JSON template to generate the records that will be submitted to the Kinesis Del ivery Stream DeviceData Input DeliveryStream { "timestamp" : "{{datenow("x")}}" "device_id" : "device{{helpersreplaceSymbolWithNumber("####")}}" "patient_id" : "patient{{helpersreplaceSymbolWithNumber("####")}}" "temperature" : "{{randomn umber({"min":96"max":104})}}" 
"pulse" : "{{randomnumber({"min":60"max":120})}}" "oxygen_percent" : "{{randomnumber(100)}}" "systolic" : "{{randomnumber({"min":40"max":120})}}" "diastolic" : "{{randomnumber({"min":40"max":120})}}" "manufacturer" : "Manufacturer {{helpersreplaceSymbolWithNumber("#")}}" "model" : "Model {{helpersreplaceSymbolWithNumber("##")}}" } ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 40 To run the RealTime Analytics application click on the Send Data button located towards the bottom of the KDG Tool As the KDG begins to pump device data records to the Kinesis Delivery Stream the records are streamed into the Kinesis Analytics Application The Application code analyzes the streaming data and applies the algorithm to generate the anomaly score for each o f the rows You can view the data stream in the Kinesis Analytics console The diagram below the sampling of the data stream ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 41 The Kinesis Analytics Application is configured with two Destination Connections The first destination connector (or output) is a Lambda function The lambda function iterates through a batch of records delivered by the Application DESTINATION_SQL_STREAM_001 and interrogates the anomaly score field for the record If the anomaly score exceeds the threshold defined in the lambda fu nction environment variable ANOMALY_THRESHOLD_SCORE the lambda function publishes the record to a Simple No tification Service (SNS) Topic named publishtomanufacturer The second Destination Connection is configure d to a Kinesis Firehose Delivery Stream – DeviceDataOutputDeliveryStream The delivery stream buffers the records and zips the buffered records to a zip file before putting into the S3 bucket S3ProcessedDataOutputBucket Observe the followings: 1 In your e mail (that you specified in the deployme nt script) inbox the first e mail you will receive device data records for which the anomaly score has exceeded the specified threshold 2 In the AWS Kinesis Data Analytics console select the Application named DeviceDataAnalytics click the Application detail button towards the bottom this will take you to the DeviceDataAnalytics application detail page Towards the middle of the page under Real Time Analytics click the button “Go to the SQL Results” On the realTime Analytics page observe the Source Data R aelTime Analytics Data and the Destination Data using the tabs 3 Records with the anomaly score are stored in the S3 Processed Data Bucket Review the records that includes the anomaly score ArchivedAmazon Web Services – Serverless Streaming Architectures and Best Practices Page 42 To stop the Testing of the pattern simply go to the browser where you are running the KDG Tool and click the “Stop Sending Data to Kinesis” button Cleaning up Resources Once you have tested this pattern you can delete and clean up the resources created so that you are not charged for these resources 1 Go back to the browser where you launched the KDG Tool and click the stop button The tool will stop sending any addition data to the input kinesis stream 7 On the S3 console choose the output S3 Processed Data Bucket and choose Empty Bucket 8 On the Kinesis consol e stop the Kinesis Data Application DeviceDataAnalytics 9 On the CloudFormation console choose the SensorDataCollectionStack stack and click Delete Stack 10 Finally on the EC2 console terminate the ec2 Linux instance you created to run the CloudFormation t emplate to deploy the solution
|
General
|
consultant
|
Best Practices
|
Setting_Up_Multiuser_Environments_for_Classroom_Training_and_Research
|
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlSetting Up MultiUser Environments in AWS For Classroom Training and Research First Published October 2013 Updated September 15 2021 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlContents Introduction 1 Scenario 1: Individual server environments 2 Scenario 2: Limited user access to the AWS Management Console within a single account 2 Scenario 3: Separate AWS accounts for each user 3 Comparing the scenarios 4 Setting up Scenario 1: Individual server environments 5 Account setup 6 Cost tracking 6 Monitoring resources 7 Reporting 7 Runtime environment 7 Clean up the environment 7 Setting up scenario 2: Limited user access to AWS Management Console within a single account 8 Account setup 10 Cost tracking 11 Monitoring resources 11 Reporting 12 Runtime environment 12 Clean up the environment 12 Setting up Scenario 3: Separate AWS account for each user 13 Account setup 14 Cost tracking 17 Monitoring resources 17 Reporting 17 Runtime environment 17 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlClean up the e nvironment 18 Keeping accounts alive 18 Conclusion 18 Contributors 20 Further reading 20 Appendix A: Adding IAM user policies 21 Appendix B: Example IAM user policies 24 Example policies for professor (administrator) 24 Example Policies for Students 25 Document versions 28 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlAbstract Amazon Web Services (AWS) can provide the ideal environment for classroom training and research Educators can use AWS for student labs training applications individual IT environments and cloud computing courses This whitepaper provides an overview of how to create and manage multi user environments in the AWS Cloud so professors and researchers can leverage cloud computing in their projects This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlAmazon Web Services Setting Up Multi User 
Environments in AWS 1 Introduction With AWS you ca n requisition compute storage and other services on demand gaining access to a suite of secure scalable and flexible IT infrastructure services as your organization needs them This enables educators academic researchers and students to tap into the ondemand infrastructure of AWS to teach advanced courses tackle research endeavors and explore new projects – tasks that previously would have required expensive upfront and ongoing investments in infrastructure For more information see Cloud Computing for Education and Cloud Products To access any AWS service you need an AWS account Each AWS account is typically associated with a payme nt instrument (credit card or invoicing) You can create an AWS account for any entity such as a professor student class department or institution When you create an AWS account you can sign into the AWS Management Console and access a variety of AW S services In addition to creating an AWS account with a user name and password you can also create a set of access keys that you can use to access services via APIs or command line tools Protect these security credentials and do not share them publicl y For more information see AWS security credentials and AWS Management Console If you require more than one person to access your AWS account AWS Identity and Access Management (IAM) enables you to create multiple users and manage the permissions for each of these users within your AWS account A user is a unique i dentity recognized by AWS services and applications Similar to a user login in an operating system like Windows or Linux user s each have a unique name and can identify themselves using various kinds of security credentials such as user name and password or an access key ID and accompanying secret access key A user can be an individual such as a student or teaching assistant or an application such as a research application that requires access to AWS services You can create users group s roles and federation capabilities using the AWS Management Console APIs or a variety of partner products For instructions on how to create new users and manage AWS credentials see Creating an IAM user in your AWS account in the AWS Identity and Access Management documentation This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlAmazon Web Services Setting Up Multi User Environments in AWS 2 Depending on your teaching or research needs there are several ways to set up a multi user environment in the AWS Cloud The following sections introduce three possible scenarios Scenario 1: Individual server environments The “Individual Server Environments” scenario is excellent for labs and other class work that requires users to access their own pre provisioned Linux or Windows servers running in the AWS Cloud The servers are in Amazon Elastic Compute Cloud (Amazon EC2) instances The instances can be created by an administrator with a customized configuration that includes applications and the data needed to perform tasks for labs or assignments This scenario is easy to set up and manage It does not require users to have their own AWS accounts or credentials for more than their individual servers Users do not have access to allocate additional resources on the AWS Cloud Example Consider a class with 25 students The administrator creates 25 private keys and launches 25 Amazon EC2 instances; one instance for 
each student The administrator shares the appropriate key or password with each student and provides instructions on how to log in to their instance In this case students do not have ac cess to the AWS Management Console APIs or any other AWS service Each student gets a unique private key (Linux) or a user name and password (Windows) along with the public hostname or IP address of the instance that they can use to log in Scenario 2: Limited user access to the AWS Management Console within a single account This scenario is excellent for users that require control of AWS resources such as students in cloud computing or high performance computing (HPC) classes With this scenario users are given restricted access to the AWS services through their IAM credentials Example Consider a class with 25 students The administrator creates 25 IAM users using the AWS Management Console or APIs and provides each student with their IAM This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlAmazon Web Services Setting Up Multi User Environments in AWS 3 credentials (user name and password) and a login URL for the AWS Management Console The administrator also creates a permission s policy that can be attached to a user group or an individual user to allow or deny access to different services Each student (IAM user) has access to resources and services as defined by the access control policies set by the administrator Students can log in to the AWS Management Console to access different AWS services as defined the policy For example they could launch Amazon EC2 Instances and store objects in Amazon Simple Storage Service (Amazon S3) Scenario 3: Separate AWS accounts for each user This scenario with optional consolidated billing provides an excellent environment for users who need a completely separate account environment such as researchers or graduate students It is similar to Scenario 2 except that each IAM user is created in a separate AWS account eliminating the risk of users affecting each other’s services Example Consider a research lab with ten graduate students The administrator creates one management AWS account which will own the AWS Organization Then t he administrator provisions separate AWS accounts for each student within the AWS Organization For each account the administrator creates an IAM user in each of the accounts or manage s the permissions through single signon users for each student and applies access control policies Users receive access to an IAM user/role within their AWS account Users can log in to the AWS Management Console to launch and access different AWS services subject to the access control policy applied to their account Students don’t see resources provisioned by other students because each account is isolated from each other A key advantage of this scenario is that students can keep their account s after the completion of the course Each account can be set up as a standalone account outside the AWS Organization If the students have used AWS resources as part of a startup course they can continue to use what they have built on AWS after the class semester or course is over This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlAmazon Web Services Setting Up Multi User Environments in AWS 4 Comparing the 
scenarios The scenario you should select depends on your requirements Table 1 provides a comparison of key features of these three scenarios Table 1: Comparison of scenarios Individual server environments Limited user access to AWS Management Console Separate AWS account for each user Examples Undergraduate labs Graduate classes Graduate research labs Example uses Labs or course work requiring a virtual server AWS service or separate application instance Courses in cloud computing or labs requiring variable resource needs (such as HPC) Courses for startups thesis or research projects Separate AWS accounts required for each user No No Yes Major steps for setup Create and allocate Amazon EC2 resources and associated credentials Create IAM users create policies and distribute credentials Create separate member AWS accounts plus the steps in the Setting up Scenario 2: Limited user access to AWS Management Console section Users can provision additional AWS resources resulting in additional charges No Yes depending on IAM services provided to users Yes depending on IAM services provided to users Users have access to AWS Management Console or APIs No Yes Yes This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlAmazon Web Services Setting Up Multi User Environments in AWS 5 Individual server environments Limited user access to AWS Management Console Separate AWS account for each user User charges paid by the management AWS account Yes Yes Yes if consolidated billing is used Separation between user environments Yes based on resource access configuration Yes if optional resource based permissions are configured Yes Individual user credit cards or invoicing required No No No if consolidated billing is used Billing alerts can be used to monitor charges Yes Yes Yes A large number of real world use cases can benefit from implementing these scenarios This section focus es on the education sector where multi user shared environments are required for setting up online classes labs and workshops for students Both user and resource management are critical in these scenarios Depending on your specific requirements any of these scenarios can be used for setting up classrooms in the AWS Cloud The following sections describe each of these scenarios in more detail Setting up Scenario 1: Individual server environments With this scenario users are provided access credentials to AWS resources Users cannot access the AWS Management Console or launch new services They receive the credentials to access specific AWS services that have already been launched by an administrator This scenario is a good match for simpler use cases in which users do not need to launch new AWS services The following figure shows the architecture for this scenario This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlAmazon Web Services Setting Up Multi User Environments in AWS 6 Individual server environments An administrator can give users their own unique access keys (SSH keys for Linux and password for Windows) for security and separation between users For labs that do not require security among users ( such as collaborative labs) the administra tor can keep the keys or access credentials common for all the servers and provide the unique access public DNS names of instances 
to the users The administrator can choose the level of security and management appropriate for their needs Account setup The administrator creates an AWS account for the user group For example this can be a shared account for a professor class department or school The administrator can also use an existing AWS account New AWS account signup and access to existing AWS a ccounts is available on your Account page The administrator launches the required AWS services for each user and provides resource access credentials to the users Cost tracking If needed the administrator tags the resources launched for different users Cost allocation and resource tagging can help track usage by different users This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlAmazon Web Services Setting Up Multi User Environments in AWS 7 For more information see Using Cost Allocation Tags in the AWS Billing and Cost Management documentation Monitoring resources The administrator can set up AWS Budgets to monitor AWS resources Creating billing alerts that automatically notify the designated recipient whenever the estimated AWS charges reach a specified threshold The administrator can choose to receive an alert on the total AWS charges or charges for a specific AWS product or service If the account has any limits the administrator can use these as the threshold for receiving billing alerts For more information about setting up billing alerts with AWS Budgets see Best practices for controlling access to AWS Budgets Reporting Detailed usage reports a re available for the administrator from the AWS Management Console Reports are available for monthly charges and also for account activity in hourly increments For more information see Detailed Billing Reports in the AWS Billing and Cost Management documentation Runtime environment After the administrator provisions the account and launches the required AWS services users can access their AWS resources using the provided credentials For example if Amazon EC2 instances are part of the class users would b e given keys or passwords to SSH (in Linux instances ) or RDP (to Windows instances ) Users would not have the credentials to log in to the AWS Management Console or to launch any new services Clean up the e nvironment When users have finished their work or when the account limits are reached the administrator can end the AWS resources Because student users do not have their own AWS accounts ending the launched services ensures that user work is deleted and further charges are discontinued This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlAmazon Web Services Setting Up Multi User Environments in AWS 8 Setting up scenario 2: Limited user access to AWS Management Console within a single account For this scenario the administrator creates IAM users and give s each one access credentials With IAM an administrator can securely control access to AWS services and resources The a dministrator can create and manage AWS users and groups and use permissions to allow and deny access to AWS resources Users can log in to the AWS Management Console and launch and access different AWS services subject to the access control policies applied to their account Users have dire ct control over the access credentials for their 
resources By default when you create IAM users they don’t have access to any AWS resources You must explicitly grant them permissions to access the resources that they need for their work Permissions are rights that you grant to a user or group to let the user perform tasks in AWS Permissions are attached to an IAM principal or an AWS Single SignOn (SSO) permission set and let the ad ministrator specify what that user can do Depending on the context administrators may be able to construct resource level permissions for users that control the actions the user is allowed to take for specific resources (for example limiting which instance the user is allowed to end) For an overview of IAM permissions see Controlling access to AWS resources using policies in the AWS Identity and Access Management documentation and read Resource Level Permissions for EC2 –Controlling Management Access on Specific Instances on the AWS Se curity Blog To define permissions administrators use policies which are documents in JSON format A policy consists of one or more statements each of which describes one set of permissions Policies can be attached to IAM users groups or roles AWS Policy Generator is a handy tool that lets administrators create policies easily For e xample policies that are relevant to multi user environments see Appendix B For more information about policies see Policies and permissions in IAM A useful option in this scenario is for the administrator to tag resources and write appropriate resource level permissions to limit IAM users to specific actions and This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlAmazon Web Services Setting Up Multi User Environments in AWS 9 resources A tag is a label you assign to an AWS resource For services that support tagging apply tags using the AWS Management Console or API requests This enables fine grained control on which resources a user can access and what actions they can take on those resources The administrator will also need to write policies to prevent users from manipulating the resource tags For example for Amazon EC2 tags the administrator should disable the ec2:CreateTags and ec2:DeleteTags actions This scenario is also good for use cases that require collaboration among u sers As described previously a user can give other IAM users access to specific actions on their resources using a mix of user level and resource level permissions A good example is a collaborative research project where students allow other members of their team access to software in their Amazon EC2 instances and data stored in their Amazon S3 buckets This scenario can be useful when the users need to access the AWS Management Console launch new services interact with services for complicated cloud based application architectures or exercise more control over accessing and sharing resources The following figure shows the architecture for this scenario Limited user access to AWS Management Console As shown in the preceding figure this scenario works well with a single AWS account The administrator needs to create IAM users and groups to apply access control policies This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlAmazon Web Services Setting Up Multi User Environments in AWS 10 for the environment Example 
IAM user policies for setting up this scenario are in Appendix B Account setup The administrator creates one AWS account for the group For example this can be a shared account for a professor class department or school An existing AWS account can also be used New AWS account signup and access to existing AWS accounts is available on the Account page The administrator then creates an IAM user for each user with the AWS Management Console or the API These IAM users can belong to one or more IAM groups within a single AWS account Alternatively the administrator can deploy SSO and create an SSO User for each student teaching assistant or professor which allows users to log in into the account through federation Each SSO user can have one or more permission sets assigned to them depending on the role they need to assume to log into the account Based on environment requirements the administrator attaches custom policies to IAM users or IAM groups to restrict certain AWS resources that ca n be launched and used Thus users can only launch AWS services for which permissions have been granted Users are provided credentials for their IAM user which can be used to log in to the AWS Management Console access AWS services and call APIs Information required for account setup To create an account and set up IAMbased access control an administrator need s the following information: • An AWS account for the group This account could belong to the school department or professor If no account exists a new account must be created • Name and email address of users • Required AWS resources and services and the operations permitted on them This is required to determine the access control policies to be applied to each IAM user • Contact information for the billing reports and alerts • Contact information for the usage reports and alerts This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlAmazon Web Services Setting Up Multi User Environments in AWS 11 Providing access to users With SSO the administrator can use the example IAM policies from Appendix B to create custom permission sets to assign to each group of users using the IAM User policies Next the administrator needs to create an AWS SSO user for each of the students and assign the us er to the relevant permission set Students then can log in using the AWS SSO Sign in URL See this Basic AWS SSO Configuration video tutorial For basic instructions on how to add IAM user polici es see Appendix A For e xample IAM user policies for setting up this scenario see Appendix B If the administrator decides not to use SSO adds IAM users with roles and custom policies to the AWS account directly to implement required access control logic for the different kinds of users in the group The administrator then provides IAM user login information to the corresponding members of the group Cost tracking All users can tag their resources for services with tagging capability With the cost allocation feature of AWS Account Billing the administrator can track AWS costs for each user For more information see Using Cost Allocation Tags in the AWS Account Billing documentation Monitoring resources AWS Budgets can help moni tor AWS resources Billing alerts automatically notify users whenever the estimated charges on their current AWS bill reach a threshold they define Users can choose to receive an alert on their total AWS charges or charges for a 
specific AWS product or service. If the account has any limits, the administrator can use these as the threshold for sending billing alerts. For more information about setting up billing alerts with AWS Budgets, see Best practices for controlling access to AWS Budgets.
Reporting
Detailed usage reports are available for the administrator from the AWS Management Console. Reports are available for monthly charges and also for account activity in hourly increments. For more information, see Detailed Billing Reports.
Runtime environment
Users can log in to the AWS Management Console (as an IAM user or with an AWS SSO user) with the login information provided to them by the administrator. They can launch and use resources defined by the rules and policies set by the administrator. For example, if they have the appropriate permissions, they can launch new Amazon EC2 instances or create new Amazon S3 buckets, upload data to them, and share them with others. An IAM user might be granted access to create a resource, but the user's permissions, even for that resource, are limited to what has been explicitly granted by the administrator. The administrator can also revoke the user's permissions at any time. Setting proper resource-based and user-based permissions helps prevent an IAM user from taking actions on resources belonging to other IAM users in the AWS account. For example, an IAM user can be prevented from terminating instances belonging to other IAM users in the AWS account. For more information, see Controlling access to AWS resources using policies.
Clean up the environment
When users have finished their work, or when the account limits are reached, they (or the administrator) can end the AWS resources. Administrators can also delete the IAM users. If an instance of SSO was created for the users to log in, the directory should be disabled. The users will lose their work unless they take steps to save it (a procedure that is beyond the scope of this whitepaper).
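Deleting the IAM users at the end of a course can also be scripted. The following is a minimal boto3 sketch, not part of the original whitepaper, that removes a user's login profile, access keys, group memberships, and policies before deleting the user; the user names are placeholders.

# Illustrative cleanup sketch (assumes boto3 and administrator credentials).
import boto3

iam = boto3.client("iam")

def delete_student_user(user_name):
    """Remove everything attached to an IAM user, then delete the user itself."""
    try:
        iam.delete_login_profile(UserName=user_name)  # console password, if any
    except iam.exceptions.NoSuchEntityException:
        pass
    for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
        iam.delete_access_key(UserName=user_name, AccessKeyId=key["AccessKeyId"])
    for group in iam.list_groups_for_user(UserName=user_name)["Groups"]:
        iam.remove_user_from_group(GroupName=group["GroupName"], UserName=user_name)
    for policy in iam.list_attached_user_policies(UserName=user_name)["AttachedPolicies"]:
        iam.detach_user_policy(UserName=user_name, PolicyArn=policy["PolicyArn"])
    for name in iam.list_user_policies(UserName=user_name)["PolicyNames"]:
        iam.delete_user_policy(UserName=user_name, PolicyName=name)
    iam.delete_user(UserName=user_name)

for student in ["student01", "student02"]:  # placeholder user names
    delete_student_user(student)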
Setting up Scenario 3: Separate AWS account for each user
In this scenario, an administrator creates separate AWS accounts for each user who needs a new AWS account. These accounts can optionally be added together under an AWS Organization, and a single AWS account can be designated as the management account using AWS Organizations. Once the student accounts become members of the Organization, the management account becomes the payer account, and all the accounts can benefit from consolidated billing, which provides a single bill for multiple AWS accounts.
The administrator then creates an IAM user in each AWS account and applies an access control policy to each user. Users are given access to the IAM user within their AWS account but do not have access to the AWS account root user. The administrator should deploy SSO in the management account to create users and grant access to each account through federation centrally. This allows the accounts to be managed by an administrator consistent with the required policies for the user environment.
Users can log in to the AWS Management Console with their IAM credentials and then launch and access different AWS services, subject to the access control policies applied to their account. Since students have access to their individual accounts, they have direct control over the access credentials for their resources (creation/deletion of SSH keys), and they can also share these resources with other users and accounts as needed.
This scenario is good for setting up collaborative multi-user work environments. To implement it, users can create an IAM role, which is an entity that includes permissions but isn't associated with a specific user. Users from other accounts can then assume the role and access resources according to the permissions assigned to the role. For more information, see Roles terms and concepts.
This scenario offers maximum flexibility for users and is helpful when they need to access the AWS Management Console to launch new services. It also gives users flexibility in working with complicated cloud-based application architectures and more control over accessing and sharing their resources.
Having separate AWS accounts for each user works well for both short-term and long-term usage. For short-term usage, AWS resources, IAM users, and even AWS accounts can be terminated after the work is done. For long-term usage, the AWS accounts for some or all users are kept alive at the end of the current engagement. All work done can be easily preserved for future use. Users can also be provided full administrator access to their AWS account (besides the IAM-based access they initially had) to continue their work. An example scenario is an entrepreneurship class where some students might develop new solutions or intellectual property using AWS resources that they want to retain for future use or for immediate deployment. Their work can be easily turned over to them by giving them full access to their AWS account.
Another benefit of this scenario is that in the AWS Management Console, users cannot see resources belonging to any other users in the group, since each user is working from their own AWS account. The following figure shows the architecture for this scenario.
Separate AWS account for each user
Account setup
The administrator deploys AWS Organizations on the management account, then provisions an AWS account for each user in the group. Independent AWS accounts (with unique AWS IDs) are created for each user. The administrator creates the accounts using the AWS Organizations CreateAccount API, which creates an account in which the credentials of the AWS account root user need to be reset; a scripted sketch of this provisioning step follows below.
In the management account, the administrator deploys AWS SSO and creates an AWS SSO user for each of the users that require access to the AWS accounts. Based on the environment requirements, the AWS SSO users are assigned to their relevant AWS SSO permission sets, which are customized with IAM policies allowing each user to use only the services for which permissions have been granted.
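Provisioning the member accounts can be scripted from the management account. The sketch below is illustrative only; it assumes boto3 with management-account credentials, and the account names and email addresses are placeholders.

# Illustrative sketch: provision one member account per student from the management account.
import time
import boto3

orgs = boto3.client("organizations")

students = [
    {"name": "student01", "email": "student01@example.edu"},  # placeholder values
    {"name": "student02", "email": "student02@example.edu"},
]

for student in students:
    response = orgs.create_account(
        Email=student["email"],
        AccountName=student["name"],
    )
    request_id = response["CreateAccountStatus"]["Id"]
    # CreateAccount is asynchronous; poll until the request completes.
    while True:
        status = orgs.describe_create_account_status(
            CreateAccountRequestId=request_id
        )["CreateAccountStatus"]
        if status["State"] != "IN_PROGRESS":
            break
        time.sleep(5)
    print(student["name"], status["State"], status.get("AccountId"))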
Alternatively, the administrator can create an IAM user in each user's AWS account. Based on environment requirements, custom policies are attached to the IAM users individually to constrain the AWS resources that can be launched and used. Users can only launch AWS services for which permissions have been granted. Users are provided credentials to log in to the AWS Management Console, access AWS services, and call APIs. Users do not have access to the root credentials of the AWS account and cannot change the IAM access policies enforced on the account.
Using AWS Organizations, an administrator can set up consolidated billing for the group. Consolidated billing (offered at no additional charge by AWS) enables consolidation of payment for multiple AWS accounts by designating a single payer account, the management account, within the AWS Organization. Consolidated billing provides a combined view of AWS charges incurred by all accounts, as well as a detailed cost report for each individual member AWS account associated with the management account. For detailed information about how to set up consolidated billing, see Consolidated billing for AWS Organizations.
Another benefit of this scenario is that the administrator can set controls across every account using service control policies (SCPs) to restrict access to specific resources and services independently of the user's permissions.
Information required for account setup
The following information is required for creating accounts and setting up IAM-based access control:
• AWS management account for the group. This account could belong to the school, department, or professor. If no account exists, a new account is created. This account is necessary for setting up AWS Organizations and consolidated billing.
• Name and email addresses of users.
• AWS account credentials for users who have existing AWS accounts that they want to use in this environment. These accounts will join the AWS Organization; users who do not have an AWS account, or do not want to use their existing account, will need new accounts provisioned for them.
• Required AWS resources and services, and the operations permitted on them. This is required to determine the access control policies to be applied to each IAM user.
• Contact information for the billing reports and alerts.
• Contact information for the usage reports and alerts.
Providing access to users
Using SSO, the administrator creates different permission sets, which are custom IAM policies that can be used to grant a specific user resource access to the accounts within the AWS Organization. Then the administrator creates an associated SSO user, assigns it to the user's account, and attaches a permission set to that user based on the level of privileges the user needs to have. In this case, there could be a permission set for an administrator, a permission set for the teaching assistant, and a permission set for the students. Changes to the permissions apply immediately to all the users using the same permission set in their account. Finally, the administrator can generate login credentials for each user so they can access the accounts each user has access to through the SSO portal. For more information on how to manage access to your accounts and assign policies to your permission sets, see Manage SSO to your AWS Accounts. See this SSO Configuration tutorial video to understand AWS SSO better.
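The permission-set workflow just described can also be automated. A rough boto3 sketch follows; it is not part of the original whitepaper, the instance ARN, account ID, identity-store user ID, and inline policy are placeholders, and the API details should be checked against the AWS SSO (IAM Identity Center) documentation.

# Illustrative sketch: create a student permission set and assign it to one member account.
import json
import boto3

sso_admin = boto3.client("sso-admin")

INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-EXAMPLE"  # placeholder
STUDENT_POLICY = {                                       # placeholder inline policy
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*"}],
}

permission_set_arn = sso_admin.create_permission_set(
    Name="Students",
    InstanceArn=INSTANCE_ARN,
    SessionDuration="PT4H",
)["PermissionSet"]["PermissionSetArn"]

sso_admin.put_inline_policy_to_permission_set(
    InstanceArn=INSTANCE_ARN,
    PermissionSetArn=permission_set_arn,
    InlinePolicy=json.dumps(STUDENT_POLICY),
)

# Assign the permission set to one student's SSO identity in their member account.
sso_admin.create_account_assignment(
    InstanceArn=INSTANCE_ARN,
    TargetId="111122223333",                       # placeholder member account ID
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=permission_set_arn,
    PrincipalType="USER",
    PrincipalId="example-identity-store-user-id",  # placeholder
)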
Alternatively, the administrator can add IAM users with roles and custom policies for each user in each AWS account to implement the required access control logic for the different types of users in the group. Login information for the IAM users is provided to the corresponding users in the group. For basic instructions on how to add IAM user policies, see Appendix A. See Appendix B for an example of setting up IAM user policies for this scenario.
Cost tracking
Consolidated billing makes it easy to track AWS costs because it shows the administrator a combined view of charges incurred by all AWS accounts, as well as a detailed cost report for each individual AWS account within the organization. Consolidated billing is included with AWS Organizations, where the management account pays the charges of all member accounts. All users can also tag their resources for services with tagging capability. An administrator can then use the cost allocation feature of AWS Account Billing to track AWS costs for each user. For more information, see Using Cost Allocation Tags and Viewing your bill in the AWS Billing and Cost Management documentation.
Monitoring resources
AWS Budgets alerts can help monitor AWS resources. Billing alerts automatically notify users whenever the estimated charges on their current AWS bill reach a threshold they define. Users can choose to receive an alert on their total AWS charges or charges for a specific AWS product or service. If the account has any limits, the administrator can use these as the threshold for sending billing alerts. For more information about setting up billing alerts with AWS Budgets, see Best practices for controlling access to AWS Budgets.
Reporting
Detailed usage reports are available for administrators from the AWS Management Console. Reports are available for monthly charges as well as for account activity in hourly increments. For more information, see Detailed Billing Reports.
Runtime environment
Users can log in to the AWS Management Console as an IAM user with the login information provided to them by the administrator. They can launch and use resources defined by the rules and policies set by the administrator. For example, if they have the appropriate permissions, users can launch new Amazon EC2 instances or create new Amazon S3 buckets, upload data to them, and share them with others. Because accounts are independent, each user sees only their own AWS resources in the AWS Management Console.
Clean up the environment
When the users have finished their work, or when the account limits are reached, they (or the administrator) can optionally terminate the AWS services. The administrator can also delete the IAM users or the SSO users and revoke access to the account. When the account is no longer in use, it can be closed, ending all the resources within the account.
Keeping accounts alive
If the users want to retain their AWS accounts, they can request the root account credentials from their administrator. The administrator would remove their account from the organization, and the users would need to provide their own billing information. The users will get login and security credentials to their AWS account.
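Handing an account over at the end of the course can likewise be scripted. The following minimal, illustrative boto3 sketch, with a placeholder account ID, detaches one member account from the organization, as described above.

# Illustrative sketch: detach a graduating student's account from the organization.
import boto3

orgs = boto3.client("organizations")

# The member account must already have its own payment method and contact
# information configured by the student; removal fails otherwise.
orgs.remove_account_from_organization(AccountId="111122223333")  # placeholder ID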
Conclusion
Multi-user shared environments with custom access control policies are a common use case for AWS customers. Typical requirements include both user and resource management to allow controlled access to AWS resources for multiple users. This whitepaper presented three scenarios that covered a wide array of use cases with these requirements:
• The "Individual Server Environments" scenario provides access to customized work environments on AWS and is suitable for use cases like undergraduate labs.
• The "Limited User Access to AWS Management Console" scenario provides IAM user access to users from a single AWS account, suitable for use cases like graduate classes.
• The "Separate AWS Account for Each User" scenario provides independent AWS accounts for each user (with consolidated billing), which is suitable for graduate research and entrepreneurship courses.
In this whitepaper we focused on short- to medium-term education and research environments as the example domain, but the same or similar scenarios may also be implemented for other use cases.
Contributors
Contributors to this document include:
• KD Singh, Amazon Web Services
• Leo Zhadanovsky, Chief Technologist, Education, Amazon Web Services
• Alex Torres, Solutions Developer, Amazon Web Services
Further reading
For additional information, see:
• IAM documentation
• IAM policies for Amazon EC2
• Granting IAM users required permissions for Amazon EC2 resources
• Amazon Resource Names (ARNs)
• Organizing Your AWS Environment Using Multiple Accounts
Appendix A: Adding IAM user policies
This section describes how to add IAM user policies to an AWS account. For more information, see Creating an IAM user in your AWS account. (A scripted equivalent of these console steps follows the list.)
1 In the AWS Management Console, choose Services > IAM.
2 Choose Users.
3 Choose Add Users.
4 Enter the name of the IAM user to be created.
5 Choose Next.
6 Choose Next to attach a user policy.
7 If none of these policies works for your use case, you can create a policy and attach it. You can create the policy using the interface, or you can create a custom JSON policy with one of the examples from Appendix B.
8 Paste the proper policy from Appendix B.
9 Choose Apply Policy.
10 On the previous screen, refresh the policy list and attach the policy you just created.
11 Choose Download Credentials. Save the downloaded file in a secure location, as these are the user's access key ID and secret access key. They will need these to use the AWS API.
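The console steps above can also be performed programmatically. The following is an illustrative boto3 sketch only, not part of the original appendix; the user name, policy name, and policy document are placeholders, and any of the Appendix B policies can be substituted.

# Illustrative sketch of Appendix A as a script: create a user, attach a policy, create keys.
import json
import boto3

iam = boto3.client("iam")

USER_NAME = "student01"            # placeholder
POLICY_NAME = "StudentBasePolicy"  # placeholder
POLICY_DOCUMENT = {                # substitute one of the Appendix B policies here
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*"}],
}

iam.create_user(UserName=USER_NAME)

policy_arn = iam.create_policy(
    PolicyName=POLICY_NAME,
    PolicyDocument=json.dumps(POLICY_DOCUMENT),
)["Policy"]["Arn"]

iam.attach_user_policy(UserName=USER_NAME, PolicyArn=policy_arn)

# Equivalent of "Download Credentials": create an access key pair for API access.
keys = iam.create_access_key(UserName=USER_NAME)["AccessKey"]
print(keys["AccessKeyId"])  # store the SecretAccessKey securely; it is shown only once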
Appendix B: Example IAM user policies
This section provides example IAM user policies for a class that uses AWS services, including policies for the professor, teaching assistant, and students. These policies are useful for setting up the "Limited User Access to AWS Management Console" and "Separate AWS Account for Each User" scenarios described earlier in this whitepaper. For more information about policies, see Policies and permissions in IAM.
Example policies for professor (administrator)
• Full administrator access:
{
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  }]
}
• Billing access:
{
  "Statement": [{
    "Effect": "Allow",
    "Action": ["aws-portal:ViewBilling"],
    "Resource": "*"
  }]
}
• Usage access:
{
  "Statement": [{
    "Effect": "Allow",
    "Action": ["aws-portal:ViewUsage"],
    "Resource": "*"
  }]
}
Example Policies for Teaching Assistant
• Full administrator access, but no access to billing or usage information:
{
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  },
  {
    "Effect": "Deny",
    "Action": "aws-portal:*",
    "Resource": "*"
  }]
}
Example Policies for Students
• Permission to create and describe Amazon EBS volumes:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVolumes",
        "ec2:DescribeAvailabilityZones",
        "ec2:CreateVolume",
        "ec2:DescribeInstances"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "arn:aws:ec2:region:111122223333:instance/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/purpose": "test"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "arn:aws:ec2:region:111122223333:volume/*"
    }
  ]
}
• Permission to create and modify Amazon EC2 instances:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeImages",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeVpcs",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:CreateSecurityGroup",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateKeyPair"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "*"
    }
  ]
}
• Prevents modifying resource tags:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Action": [
      "ec2:CreateTags",
      "ec2:DeleteTags"
    ],
    "Resource": ["*"],
    "Effect": "Deny"
  }]
}
• For instances with a Student tag, allows students to start, stop, reboot, attach volumes, and detach volumes. If the professor or teaching assistant applies a Student tag, with the value set to the IAM user name of a specific student, to specific instances, then that student can stop, reboot, attach volumes to, and detach volumes from those instances. They can also start instances that they stopped (that still have the Student tag on them), but they can't start new ones. (A scripted example of applying this tag follows the policy list.)
{
  "Version": "2012-10-17",
  "Statement": [{
    "Action": [
      "ec2:StartInstances",
      "ec2:StopInstances",
      "ec2:RebootInstances",
      "ec2:AttachVolume",
      "ec2:DetachVolume"
    ],
    "Condition": {
      "StringEquals": {
        "ec2:ResourceTag/Student": "${aws:username}"
      }
    },
    "Resource": [
      "arn:aws:ec2:region:account:instance/*",
      "arn:aws:ec2:region:account:volume/*"
    ],
    "Effect": "Allow"
  }]
}
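For the last policy to take effect, the professor or teaching assistant has to put the Student tag on each student's instances. The boto3 sketch below is illustrative only; the instance IDs and user names are placeholders.

# Illustrative sketch: tag each instance with the owning student's IAM user name
# so that the tag-conditioned policy above applies.
import boto3

ec2 = boto3.client("ec2")

assignments = {                      # placeholder mapping of instance ID -> IAM user name
    "i-0123456789abcdef0": "student01",
    "i-0fedcba9876543210": "student02",
}

for instance_id, student in assignments.items():
    ec2.create_tags(
        Resources=[instance_id],
        Tags=[{"Key": "Student", "Value": student}],
    )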
"ec2:DeleteTags" ] "Resource": [ "*"] "Effect": "Deny" }] } • For instances with a student tag allows students to restart stop reboot attach volumes and detach volumes If the professor or teaching assistant applies a student tag with the value being the IAM user name of specific students to specific instances then those students can stop reboot attach volumes to and detach volumes to those instances They can also start instances that they stopped (that still have the student tag on them) but they can’t star t new ones { "Version": "20121017" "Statement": [ { "Action": [ "ec2:StartInstances" "ec2:StopInstances" "ec2:RebootInstances" This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ settingupmultiuserenvironments/settingup multiuserenvironmentshtmlAmazon Web Services Setting Up Multi User Environments in AWS 28 "ec2:AttachVolume" "ec2:DetachVolume" ] "Condition": { "StringEquals": { "ec2:ResourceTag/Student":"${aws: username }" } } "Resource": [ "arn:aws:ec2: region:account:instance/* " "arn:aws:ec2: region:account:volume/*" ] "Effect": "Allow" }] } Document versions Date Description September 15 2021 Updated for technical accuracy October 2013 First publication
|
General
|
consultant
|
Best Practices
|
Single_SignOn_Integrating_AWS_OpenLDAP_and_Shibboleth
|
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Single Sign On: Integrating AWS OpenLDAP and Shibboleth A Step byStep Walkthrough Matthew Berry AWS Identity and Access Management April 2015 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 2 of 33 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 3 of 33 Contents Abstract 3 Introduction 3 Step 1: Prepare the Operating System 5 Step 2: Install and Configure OpenLDAP 8 Step 3: Install Tomcat and Shibboleth IdP 11 Step 4: Configure IAM 15 Step 5: Configure Shibboleth IdP 19 Step 6: Test Shibboleth Federation 30 Conclusion 32 Further Reading 32 Notes 32 Abstract AWS Identity and Access Management (IAM) is a web service from Amazon Web Services (AWS) for managing users and user permissions in AWS Outside the AWS cloud administrators of corporate systems rely on the Lightweight Directory Access Protocol (LDAP)1 to manage identities By using rolebased access control (RBAC) and Security Assertion Markup Language (SAML) 20 corporate IT systems administrators can bridge the IAM and LDAP systems and simplify identity and permissions management across onpremises and cloudbased infrastructures Introduction In November 2013 the IAM team expanded identity federation2 to support SAML 20 Instead of recreating existing user data in AWS so that users in your organization can access AWS you can use AWS support for SAML to federate user identities into AWS For example in many universities professors can help students take advantage of AWS resources via the students' university account s Stepbystep instructions walk you through the use of AWS SAML 20 support with OpenLDAP which is an implementation of LDAP This walkthrough depicts a fictitious university moving to OpenLDAP Because the university makes heavy use of Shibboleth identity provider (IdP) software you will learn how to use Shibboleth as the IdP You will also learn the entire process of setting up LDAP If your organization already has a functional LDAP implementation you can review the schema and This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS 
OpenLDAP and Shibboleth April 2015 Page 4 of 33 then skip to the Install Tomcat3 and Install Shibboleth IdP4 sections Likewise if your organization already has Shibboleth in production you can skip to the Configure Shibboleth IdP5 section Assumptions and Prerequisites This walkthrough describes using a Linux Ubuntu operating system and makes the following assumptions about your familiarity with Ubuntu and with services from AWS such as Amazon Elastic Compute Cloud (Amazon EC2): • You know enough about Linux to move between directories use an editor (such as Vim) and run script commands • You have a Secure Shell (SSH) tool such as OpenSSH or PuTTY installed on your computer and you know how to connect to a running Amazon EC2 instance For a list of SSH tools see Connect to Your Linux Instance 6 in the Amazon EC2 documentation • You have a basic understanding of what LDAP is and what an LDAP schema looks like LDAP Schema and Roles A fictitious university called Example University is organized as shown in Figure 1 This university assigns a unique identifier (uid) to each individual more commonly referred to as a user name Each individual is also part of one or more organizational units (OU or OrgUnit) In our fictitious university OUs correspond to departments and one special OU named “People” contains everyone Each individual has a primary OU The primary OU for everyone except managers is the People OU The primary OU for managers is the department they manage Figure 1: Schema for Example University This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 5 of 33 Software For the example use the following software Although Ubuntu 1404 Long Term Support (LTS ) is illustrated the instructions apply to most versions of Ubuntu and Linux (perhaps with minor modifications) In general the procedures work in Microsoft Windows or OS X from Apple but they require alternate installation and configuration guides for OpenLDAP and Java v irtual machine which this walkthrough does not address Function Software and version Operating system Ubuntu 1404 LTS Java virtual machine OpenJDK 7u25 (IcedTea 2310) Web server Apache Tomcat 70 59 Identity provider Shibboleth IdP 24 Directory SLAPD (OpenLDAP 2428) Step 1: Prepare the Operating System These steps begin with an Amazon EC2 instance so that you can see a completely clean installation of all components The demo uses a t2micro instance because it is free tier eligible 7 (it will not cost you anything) and because this example installation does not serve any production traffic You can complete this walkthrough with a t2micro instance and stay in the free tier You can use a larger instance size if you want It makes no difference to the illustrated functionality and larger sizes run faster But note that you will be charged at standard rates if you use instances that are not in the free tier If you are new to Amazon EC2 you might want to read Getting Started with Amazon EC2 Linux Instances8 for context before you begin Launch a New Amazon EC2 Instance 1 Sign in to the AWS Management Console and then go to the Amazon EC2 console 2 Click Launch Instance find Ubuntu Server 1404 LTS (HVM) SSD Volume Type and then click Select 3 Select the t2micro instance which is the default 4 Click through the Next buttons until you get to Step 6: Configure Security Group Note: Restrict the IP address range 
in this step to match your organization’s IP address prefix or use the My IP option 5 Click Add Rule and then select HTTPS This opens up port 443 for SSL traffic This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 6 of 33 Note: Restrict the IP address range in this step to match your organization’s IP address prefix or use the My IP option 6 When you are finished click Review and Launch and then click Launch 7 When prompted create a new key pair for logging in to the Ubuntu instance Give it a name (for example ShibbolethDemo ) and then download and save the key pair See Figure 2 Then click Launch Instances Figure 2: Select an Existing Key Pair or Create a New Key Pair Important: Be sure to download your key pair Otherwise you will not be able to access your instance For information about how to connect to an Amazon EC2 instance using SSH see Connect to Your Linux Instance9 8 Click View Instances When the instance is running find and copy the following values for the instance which you'll need later: • The instance ID • The public DNS of the instance • The public IP address of the instance You can find all of these values in the Amazon EC2 console when you select your instance as shown in Figure 3 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 7 of 33 Figure 3: EC2 Instance Details Showing Instance ID Public DNS and IP Address Update L ocal Hosts File In this walkthrough various configuration values reference the DNS examplecom or idpexamplecom Each Amazon EC2 instance has a unique IP address and DNS that are assigned when the instance starts so you must update the hosts file on your local computer so that examplecom and idpexamplecom resolve to the IP address of your Amazon EC2 instance 1 Make sure you know the public IP address of your Amazon EC2 instance as explained in the previous section 2 Open the hosts file on your local computer Editing this file requires administrative privileges These are the usual locations of the hosts file: • Windows: %windir%\ System32\ drivers\ etc\hosts • Linux: /etc/hosts • Mac: /private/etc/hosts 3 Add the following mappings to the hosts file using the public IP address of your own Amazon EC2 instance When you are done save and close the file nnnnnnnnnnn examplecom nnnnnnnnnnn idpexamplecom This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 8 of 33 Create Directories Using your SSH tool (OpenSSH PuTTY etc) connect to your Amazon EC2 instance Create directories for Tomcat Shibboleth and the demo files by running the following commands cd /home/ubuntu/ mkdir –p /home/ubuntu/server/tomcat/conf/Catalina /localhost mkdir p /home/ubuntu/server/tomcat/endorsed mkdir /home/ubuntu/server/shibidp mkdir p /home/ubuntu/installers/shibidp Step 2: Install and Configure OpenLDAP OpenLDAP is an opensource implementation of the Lightweight Directory Access Protocol (LDAP)10 This walkthrough assumes basic knowledge of LDAP and explains only what is required to complete it About LDAP A small set of primitives that can be 
combined into a complex hierarchy of objects and attributes defines LDAP. The core element of the LDAP system is an object, which consists of a key-value pair. Objects can represent anything that needs an identity in the LDAP system, such as people, printers, or buildings. Because you can reuse keys, sets of key-value pairs are grouped into object classes. These object classes are included by using special object class attributes, as shown in Figure 4.

Figure 4: Including Object Classes with Special Object Class Attributes

Object classes make LDAP extensible. All the people at an organization have a core set of attributes that they share, such as name, address, phone, office, department, and job level. You can wrap these attributes into an object class so that the definition of a person in the directory can reference the object class and automatically get all the common attributes defined by it. Figure 5 shows an example of an object class.

Figure 5: An Example of an Object Class

Install OpenLDAP
For this walkthrough you need to install OpenLDAP on the Amazon EC2 instance that you launched.
1 Log in to the Amazon EC2 instance and enter the following commands to download and install OpenLDAP:

sudo apt-get -y update && sudo apt-get -y upgrade

This command updates the package list on the host. The second half of the command updates all the packages on the host to the newest versions.
2 Type the following command to install OpenLDAP:

sudo apt-get -y install slapd ldap-utils

3 Type the following commands to set up shortcuts (aliases) for working with OpenLDAP:

echo "alias ldapsearch='ldapsearch -H ldapi:/// -x -W '" >> ~/.bashrc
echo "alias ldapmodify='ldapmodify -H ldapi:/// -x -W '" >> ~/.bashrc
# Adding $LDAP_ADMIN to either of the ldap commands binds to the admin account
echo "export LDAP_ADMIN='-D cn=admin,dc=example,dc=com'" >> ~/.bashrc
source ~/.bashrc

These commands add aliases to the ~/.bashrc file, which is a file that contains commands that run each time the user signs in. The shortcuts add some common parameters to ldapsearch and ldapmodify, the two most common LDAP utilities. The parameters for these commands are as follows:
• -H ldapi:/// tells the command where the directory is located
• -x tells the command to use simple authentication
• -W tells the command to ask for the password (instead of listing it on the command line)
• -D cn=admin,dc=example,dc=com is a set of parameters to indicate that LDAP commands should run as the administrator
4 Type the following command to tell the package manager to reconfigure OpenLDAP:

sudo dpkg-reconfigure slapd

When the command runs, you see the following prompts. Respond as noted.
• Omit OpenLDAP server configuration?
Type No You want to have a blank directory created • DNS domain name: Type examplecom You use this to construct the hierarchy of the LDAP directory Use this domain for this walkthrough because other aspects of the configuration depend on this domain name • Organization name: Type any name This value is not used • Administrator password: (and confirmation) This is the LDAP administrator password For the purposes of this walkthrough use password For production systems consult your security best practices You will need the password when you make changes to the LDAP configuration later • Database backend to use: This lets you specify the storage back end for LDAP information Type HDB • Do you want the database to be removed when slapd is purged? Type Yes This is a safety measure in case you purge a setup and start over In that case if you type Yes the directory is backed up rather than deleted • Move old database? Type Yes This is part of the safety measure from the previous prompt By answering Yes you cause OpenLDAP to make a backup of the existing directory before wiping it This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 11 of 33 • Allow LDAPv2 protocol? Type No LDAPv2 is deprecated Download LDAP Sample Data For this walkthrough you need some data in the LDAP data store For convenience the walkthrough provides files that contain sample data To download these directly to the Amazon EC2 instance run the following script inside your instance wget O '/home/ubuntu/examplestargz' 'https://s3amazonawscom/awsiammedia/public/sample/OpenLDA PandShibboleth/examplestargz ' tar xf /home/ubuntu/examplestargz Configure OpenLDAP Because LDAP is text based it is easy to back up the directory and share attribute definitions (called schema s) However this paper does not focus on LDAP so it does not go into detail about the text format used to interact with LDAP You just need to know that Lightweight Directory Interchange Format (LDIF) is a textbased export/import format for LDAP and you can find the sample LDIFs for populating the directory in the files that you downloaded After you have downloaded the sample data files as described in the previous section run the following script to insert information from the example files into the LDAP database You need the LDAP administrator password that you specified when you installed and configured OpenLDAP sudo ldapmodify Y EXTERNAL H ldapi:/// f examples/eduPerson201310ldif # Schema installation requires root but all other changes onl y require admin ldapmodify $LDAP_ADMIN f examples/PEOPLEldif ldapmodify $LDAP_ADMIN f examples/BIOldif ldapmodify $LDAP_ADMIN f examples/CSEldif ldapmodify $LDAP_ADMIN f examples/HRldif Step 3: Install Tomcat and Shibboleth IdP The next step is to install Shibboleth Because Shibboleth is a construction of Java Server Pages it needs a container in which to run We are using Apache This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 12 of 33 Tomcat 11 You do not have to know much about Tomcat in order to use it in this walkthrough; we will show you the installation and configuration steps Install Tomcat The Tomcat installation is simple You just need to download and unzip 
a tarball. In order to run Tomcat, a Java SE Development Kit (JDK) is required. Log in to the Amazon EC2 instance and run the following script in order to install the JDK, download Tomcat, and extract it:

sudo apt-get -y install openjdk-7-jre-headless
wget -O 'installers/tomcat7.tar.gz' 'http://www.us.apache.org/dist/tomcat/tomcat-7/v7.0.59/bin/apache-tomcat-7.0.59.tar.gz'
# Tomcat installation is simply to extract the tarball
tar xzf installers/tomcat7.tar.gz -C server/tomcat/ --strip-components=1

Install Shibboleth IdP
You can install Shibboleth by downloading a tarball and extracting it. You then need to set an environment variable and run the Shibboleth installer script. In the Amazon EC2 instance, run the following script:

wget -O 'installers/shibidp24.tar.gz' 'http://shibboleth.net/downloads/identity-provider/2.4.0/shibboleth-identityprovider-2.4.0-bin.tar.gz'
tar xzf installers/shibidp24.tar.gz -C installers/shibidp --strip-components=1
# This is needed for Tomcat and Shibboleth scripts
echo "export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/" >> ~/.bashrc
source ~/.bashrc
# Installation directory: /home/ubuntu/server/shibidp (don't use ~)
# Domain: idp.example.com
cd installers/shibidp; ./install.sh && cd

Use the following answers when prompted:
• Where should the Shibboleth Identity Provider software be installed? Type /home/ubuntu/server/shibidp
• (This question may not appear) The directory '/home/ubuntu/server/shibidp' already exists. Would you like to overwrite this Shibboleth configuration? (yes [no]) Type yes
• What is the fully qualified hostname of the Shibboleth Identity Provider server?
[idpexampleorg] Type idpexamplecom (Use com not org because that is what the LDAP installation uses) Note that this response assumes that you typed examplecom as the domain earlier • A keystore is about to be generated for you Please enter a password that will be used to protect it This password protects a key pair that is used to sign SAML assertions It is stored in a file in the Shibboleth directory For purposes of this walkthrough use password everywhere you are prompted In a production system be sure to consult your security best practices Configure Tomcat Tomcat's default configuration does not quite suit our needs for this example IdP so you need to edit the server's configuration file 1 In the Amazon EC2 instance use an editor such as Vim to edit the following file /home/ubuntu/server/tomcat/conf/serverxml 2 Comment out the block that starts with <Connector port="8080" This stops Tomcat from listening on port 8080 3 Find the block that begins with <Connector port="8443" and replace it with the following block Notice that the block you are searching for contains the port 8443 and is being replaced with port 443 <Connector port="4 43" protocol="HTTP/11" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="/home/ubuntu/server/shibidp/credentials/idpj ks" This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 14 of 33 keystorePass="password" /> 4 Create the following file /home/ubuntu/server/tomcat/conf/Catalina/localhost/idpxml 5 Add the following to the file you just created and then save and close the file <Context docBase="/home/ubuntu/server/shibidp/war/idpwar" privileged="true" antiResourceLocking="false" antiJARLocking="false" unpackWAR="false" swallowOutput="true" /> This tells Tomcat where Shibboleth’s files are and how to use them 6 Run the following command cp ~/installers/shibidp/endorsed/* ~/server/tom cat/endorsed This command tells Tomcat that it can run the Shibboleth library files by copying the contents of Shibboleth's endorsed directory to Tomcat's endorsed directory 7 Edit the Tomcat user store file that is in the following location /home/ubuntu/server/tomcat/conf/tomcat usersxml 8 Add a root user by adding the following line just before the </tomcat users> tag (inside the tomcatusers element) <user username="root" password="password" roles="manager gui" /> This configures Tomcat as an administrative user so that Tomcat can start and stop Shibboleth 9 Start the server by running the following startup commands This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 15 of 33 sudo /home/ubuntu/server/tomcat/bin/startupsh tail f /home/ubuntu/server/tomcat/logs/catalinaout 10 Wait for a line that says "INFO: Server startup in ### ms" and then press CTRL+C 11 To verify that Tomcat and Shibboleth started properly from your main computer (not the Amazon EC2 instance) navigate to https://idpexamplecom If the server is working Tomcat displays a welcome page after a brief warning about certificates and host names 12 Click Manager App and type the root credentials Verify that the Shibboleth software is running Step 4: Configure IAM Now that you have set up 
Shibboleth as an IdP configure AWS IAM so that it can act as a SAML service provider This involves two tasks: the first is to create an IAM SAML provider that describes the IdP and the second is to create an IAM role (in our case several roles) that a federated user can assume in order to get temporary security credentials for accessing AWS resources such as signing in to the AWS Management Console Create an IAM SAML Provider In order to support SAML identity federation from an external IdP IAM must first establish a trust relationship with the provider To do this create an IAM SAML provider SAML 20 describes a document called a metadata document that contains all the required information to configure communication and trust between two entities You can get the metadata document by asking Shibboleth running on your instance to generate it 1 In your Amazon EC2 instance navigate to the following URL download the metadata document and save it with the name idpexamplecomxml (use this name because later steps assume this name) https://idpexamplecom/idp/profile/Metadata/SAML 2 Sign in to AWS and navigate to the IAM console 12 3 In the navigation pane click Identity Providers and then click Create Provider The Create Provider wizard starts 4 Choose SAML as the provider type 5 Type ShibDemo as the name 6 Upload the metadata document you saved in Step 1 of this procedure as the Metadata Document as shown in Figure 6 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 16 of 33 Figure 6: The Create Provider Wizard 7 Click Next Step 8 Review the Provider Name and Provider Type and then click Create Create IAM Roles Next you create IAM roles that federated users can assume You create three roles for Example University: one for the biology department one for the computer science and computer engineering departments to share and one for the human resources department Shibboleth controls access to the first two roles The third role includes a condition so that Shibboleth and AWS manage access control (authorization) In the IAM console follow these steps: 1 In the navigation pane click Roles and then click Create New Role 2 Type BIO for the name of the first role and then click Next Step 3 For role type select Role f or Identity Provider Access 4 Select Grant Web Single SignOn (WebSSO) access to SAML providers as shown in Figure 7 Figure 7: Grant WebSSO Access to SAML Providers By default the wizard selects the SAML provider that you created earlier (see Figure 8) The wizard also shows that the Value field is set to https://signinawsamazoncom/saml This is a required value This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 17 of 33 Figure 8: Create Role Wizard 5 Click Next Step and verify that the role trust policy matches the following example (except that your policy includes your AWS account number instead of 000000000000 ) When you have verified the policy click Next Step { "Version": "2012 1017" "Statement": [ { "Effect": "Allow" "Action": "sts:AssumeRoleWithSAML" "Principal": { "Federated": "arn:aws:iam::000000000000:saml provider/ShibDemo" } "Condition": { "StringEquals": { "SAML:aud": "https://signinawsamazoncom/saml" } } } ] } 6 In 
the Attach Policy step do not selec t any options For this exercise the role does not actually need to have any permissions Instead click Next Step You see a summary of the role as shown in Figure 9 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 18 of 33 Figure 9: Summary of the Created Role Note the role's Amazon Resource Name or ARN (arn:aws:iam::000000000000 0:role/BIO ) Later parts of this walkthrough assume that the ARNs of the roles you create in this procedure match the suggested names (BIO CSE and HR) 7 Click Create Role to finish creating this role 8 Repeat steps 1–7 to create another role named CSE 9 Repeat the steps again to create another role named HR For the HR role you need to add a condition to check that at least one of the values of the SAML:eduPersonPrimaryOrgUnitDN attribute is a string that is required When you get to the Verify Role Trust step copy and paste the following policy Remember to replace 000000000000 with your AWS account number { "Version": "2012 1017" "Statement": [ { "Effect": "Allow" "Action": "sts:AssumeRoleWith SAML" "Principal": { "Federated": "arn:aws:iam::000000000000:saml provider/ShibDemo" } "Condition": { "StringEquals": { "SAML:aud": "https://signinawsamazoncom/saml" } "ForAnyValue :StringEquals": { "SAML:eduPersonPrimaryOrgUnitDN": "ou=hrdc=exampledc=com" } } This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 19 of 33 } ] } The extra condition restricts the HR role to the manager of HR because Example University uses the eduPersonP rimaryOrgUnitDN attribute to denote managers 10 As with the BIO and CSE roles do not select any policies to attach as the role's access policy because no permissions are needed for this walkthrough Step 5: Configure Shibboleth IdP Shibboleth IdP consumes data from a variety of sources and uses that data to both authenticate a user and communicate the authenticated identity to external entities You can configure nearly every part of the process and you can extend with code the portions of the IdP that do not support configuration settings About Shibboleth Data Connectors The basic flow for attribute data through Shibboleth is the same regardless of whether the data comes from a database LDAP or another source A component called a data connector fetches attribute data from its source The data connector defines a query or filter used to get the identity data Predefined data connectors exist for relational databases LDAP and configuration files The results returned by the data connector persist into the next step in the process which is the attribute definition In this step you can process the identity data pulled from the store (and potentially from other attributes defined earlier in the configuration) to produce attributes with the format you need For example an attribute can pull several columns of a relational database together with appropriate delimiters and format an email address Like data connectors Shibboleth supports predefined attribute definitions One definition passes identity values through with no modification With the mapped attribute definition you can use regular expressions to transform the format of attributes A number of special 
attribute definitions expose some of Shibboleth's internal mechanisms which are interesting but will not be used here However these attributes are still in a Shibbolethspecific internal format You can attach attribute encoders to the attribute definitions so that you can serialize the internal attributes into whatever wire format you need This walkthrough uses the SAML 20 string encoder to create the required XML for the SAML authentication responses After you have fetched transformed and encoded data into the correct format you can use attribute filters to dictate which attributes to include in communication with various relying parties Predefined attribute filter policies give you great flexibility in releasing attributes to relying parties You can use filters to write attributes to specific relying parties only and to write only specific This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 20 of 33 values of the attributes specific users or specific authentication methods You can also string together Boolean combinations of all the above A complete overview of the process appears in Figure 10 Figure 10: Attribute Pipeline in Shibboleth Fetch Attributes from OpenLDAP Much of the configuration for getting Shibboleth to communicate with OpenLDAP is already in existing files and just needs to be uncommented 1 In your Amazon EC2 instance open this file in your text editor /home/ubuntu/server/shibidp/conf/attribute resolverxml 2 In the file find the section with the following heading # <! Schema: eduPerson attributes > 3 Uncomment that section (The commentedout section ends before an element that has the ID eduPersonTargetedID ) 4 If you are using a newer schema that includes the definitions for eduPersonPrincipalNamePrior or eduPersonUniqueId (the eduPerson object class specification 201310) you can optionally add the following block after the block that you just uncommented <resolver:AttributeDefinition xsi:type="ad:Simple" id="eduPersonPrincipalNamePrior" sourceAttributeID="eduPersonPrincipalNamePrior"> <resolver:Dependency ref="myLDAP" /> <resolver:AttributeEncoder xsi:type="enc:SAML1String" name="urn:mace:dir:attribute def:eduPersonPrincipalNamePrior" /> <resolver:AttributeEncoder This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 21 of 33 xsi:type="enc:SAML2String" name="urn:oid:136141592311112" friendlyName="eduPersonPrincipalNamePrior" /> </resolver:AttributeDefinition> <resolver:AttributeDefinition xsi:type="ad:Simple" id="eduPersonUniqueId" sourceAttributeID="eduPersonUniqueId"> <resolver:Dependency ref="myLDAP" /> <resolver:AttributeEncoder xsi:type="enc:SAML1String" name="urn:mace:dir:attribute def:eduPersonUniqueId" /> <resolver:AttributeEncoder xsi:type="enc:SAML2String" name="urn:oid:136141592311113" friendlyName="eduPersonUniqueId" /> </resolver:AttributeDefinition> 5 Find the section that begins with the following <! 
Example LDAP Connector > This section has been commented out 6 Replace that entire commentedout section with the following block and then save and close the file <resolver:DataConnector id="myLDAP" xsi:type="dc:LDAPDirectory" ldapURL="ldap:///" baseDN="ou=peopledc=exampledc=com" authenticationType="ANONYMOUS" > <dc:FilterTemplate> <![CDATA[ (uid=$requestContextprincipalName) ]]> </dc:FilterTemplate> </resolver:Dat aConnector> This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 22 of 33 About Attribute Definitions The most relevant part of an LDAP data connector block is the filter template near the bottom of the definition When Shibboleth requests attributes for a user it runs this query on the OpenLDAP database OpenLDAP needs to authenticate and needs to know where to search This is what the authenticationType and baseDN attributes define The reference myLDAP is used to refer to this specific OpenLDAP query If there are other attributes in OpenLDAP that require a different query you can copy this block give it a different ID and change the query The block contains the following eduPerson attribute definition <resolver:AttributeDefinition xsi:type="ad:Simple" id="eduPersonAffiliation" sourceAttributeID=" eduPersonAffiliation"> <resolver:Dependency ref="myLDAP" /> <resolver:AttributeEncoder xsi:type="enc:SAML1String" name="urn:mace:dir:attribute def:eduPersonAffiliation" /> <resolver:AttributeEncoder xsi:type="enc:SAML2Strin g" name="urn:oid:13614159231111" friendlyName="eduPersonAffiliation" /> </resolver:AttributeDefinition> The xsi:type="ad:Simple" attribute in these definitions indicates that these attributes simply copy their values from the data connector as is This is appropriate for attributes that map directly to single columns of a database to single attributes from OpenLDAP or to static configuration data The id="eduPersonAffiliation" portion gives this configuration section an internal name that can be referenced elsewhere in the configuration It is never released to relying parties The sourceAttributeID="eduPersonAffiliation" portion defines the name of the attribute released by the data connector to use as the source of data for this attribute definition Because this attribute definition gets data from OpenLDAP the configuration specifies a dependency on myLDAP which is the ID that you assigned to the OpenLDAP data connector Finally a number of encoders are attached In the SAML 20 string encoder the name and friendlyName are used to set the same portions of a SAML2 attribute This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 23 of 33 Configuring AWSspecific attribute definitions To use SAML identity federation with AWS you must configure two AWSspecific attributes The first is a simple attribute that sets the name of the session granted to users This value is captured in logs and displayed in the console when the user signs in Good candidates for this value are a user's login name or email address Some format restrictions exist for the value: • It must be between 2 and 32 characters in length • It can contain only alphanumeric characters underscores and the following characters: +=@ • It is typically a user ID 
(bobsmith) or an email address (bobsmith@examplecom) • It should not include blank spaces such as often appear in a user’s display name (Bob Smith) This example uses the uid of the user from OpenLDAP by setting the sourceAttributeID to uid and adding a dependency on the OpenLDAP data connector The other attribute that needs to be set is the list of roles the user can assume This could be as simple as a static value attached to all users in an organization or as complex as a per user per department ACL (access control list)–based value This example uses a flexible option that is not difficult to implement To configure the attributes follow these steps 1 Edit the following file /home/ubuntu/server/shibidp/conf/attribute resolverxml 2 Insert the following block immediately after the heading "Attribute Definitions" and before <!Schema: Core schema attributes> Note: Replace 000000000000 with your AWS account number Note also that the block includes the ARNs of the roles that you created earlier (for example arn:aws:iam::000000000000:role/BIO ) <resolver:AttributeDefinition id="awsRoles" xsi:type="ad:Mapped" sourceAttributeID="eduPersonOrgUnitDN"> <resolver:Dependency ref="myLDAP"/> <resolver:AttributeEncoder xsi:type="enc:SAML2String" name="https://awsamazoncom/SAML/Attributes/R ole" friendlyName="Role" /> This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 24 of 33 <ad:ValueMap> <ad:ReturnValue> arn:aws:iam::000000000000:role/BIOarn:aws:iam::00000000000 0:samlprovider/ShibDemo </ad:ReturnValue> <ad:SourceValue>*ou=biology*</ad:SourceValue> </ad:ValueMap> <ad:ValueMap> <ad:ReturnValue> arn:aws:iam::000000000000:role/CSEarn:aws:iam::00000000000 0:samlprovider/ShibDemo </ad:ReturnValue> <ad:SourceValue>*ou=computersci*</ad:SourceValue> <ad:SourceValue>*ou=computereng*</ad:SourceValue> </ad:ValueMap> <ad:ValueMap> <ad:ReturnValue> arn:aws:iam::000000000000:role/HRarn:aws:iam::000000000000 :samlprovider/ShibDemo </ad:ReturnValue> <ad:SourceValue>*ou=hr*</ad:SourceValue> </ad:ValueMap> </resolver:AttributeDefinition> <resolver:AttributeDefinition id="awsRoleSessionName" xsi:type="ad:Simple" sourceAttributeID="uid"> <resolver:Dependency ref="myLDAP"/> <resolver:AttributeEncoder xsi:type="enc:SAML2String" name="https://awsamazoncom/SAML/Attribu tes/RoleSessionNam e" friendlyName="RoleSessionName" /> </resolver:AttributeDefinition> With the mapped attribute definition you can use a regular expression to map input values into output values This example maps eduPersonOrgUnitDN to This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 25 of 33 an IAM role (depending on the organizational unit) in order to give entire departments access to resources by using existing access control rules The attribute definition contains several value maps each with its own pattern Each of the values associated with the eduPersonOrgUnitDN (because it is multivalued) is checked against the patterns specified in the SourceValue nodes If the check finds a match the ReturnValue value is added to the attribute definition The format of the ReturnValue is a role ARN and a prov ider ARN separated by a comma The order of the two ARNs does not matter If you are using 
regular expressions in the SourceValue fields you can use back references in the ReturnValue so that you can simplify the configuration by capturing the organizational unit and using a back reference although delving into further possibilities of using pattern matching is beyond our scope Release Attributes to Relying Parties Sometimes attributes can contain sensitive data that is useful for authentication within the organization No one should release the sensitive data outside of the organization The first part of an attribute filter defines to whom the filter applies By u sing an AttributeRequesterString filter policy an administrator can choose the relying parties to whom to release the attributes This example uses the entity ID of AWS "urn:amazon:webservices" This walkthrough uses a simple directory so all possible values of all the eduPerson and AWS attributes are released to AWS This allows you to write policies in IAM that can include conditions based on attributes that represent OpenLDAP information You do this by including an AttributeRule element for each eduPerson entity or AWS attribute and setting PermitValueRule to basic:ANY 1 Edit the following file /home/ubuntu/server/shibidp/conf/attribute filterxml 2 Add the following block inside the element AttributeFilterPolicyGroup (before the closing </afp:AttributeFilterPolicyGroup> tag and after the comments) When you are done save and cl ose the file <afp:AttributeFilterPolicy id="releaseEduAndAWSToAWS"> <afp:PolicyRequirementRule xsi:type="basic:AttributeRequesterString" value="urn:amazon:webservices" /> <afp:AttributeRule attributeID="eduPersonAffiliation"> <afp:PermitValueRule xsi:type="basic:ANY"/> This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 26 of 33 </afp:AttributeRule> <afp:AttributeRule attributeID="eduPersonEntitlement"> <afp:PermitValueRule xsi:type="basic:ANY"/> </afp:AttributeRule> <afp:AttributeRule attributeID="eduPersonN ickname"> <afp:PermitValueRule xsi:type="basic:ANY"/> </afp:AttributeRule> <afp:AttributeRule attributeID="eduPersonOrgDN"> <afp:PermitValueRule xsi:type="basic:ANY"/> </afp:AttributeRule> <afp:AttributeRule attributeID="edu PersonOrgUnitDN"> <afp:PermitValueRule xsi:type="basic:ANY"/> </afp:AttributeRule> <afp:AttributeRule attributeID="eduPersonPrimaryAffiliation"> <afp:PermitValueRule xsi:type="basic:ANY"/> </afp:AttributeRule> <afp:AttributeRule attributeID="eduPersonPrimaryOrgUnitDN"> <afp:PermitValueRule xsi:type="basic:ANY"/> </afp:AttributeRule> <afp:AttributeRule attributeID="eduPersonPrincipalName"> <afp:PermitValueRule xsi:type="basic:ANY"/> </afp:AttributeRule> <afp:AttributeRule attributeID="eduPersonScopedAffiliation"> <afp:PermitValueRule xsi:type="basic:ANY"/> </afp:AttributeRule> <afp:AttributeRule attributeID="eduPersonAssurance"> <afp:PermitValueRule xsi:type="ba sic:ANY"/> </afp:AttributeRule> <afp:AttributeRule attributeID="eduPersonTargetedID"> <afp:PermitValueRule xsi:type="basic:ANY"/> </afp:AttributeRule> <afp:AttributeRule attributeID="awsRoles"> <afp:PermitValueRule xsi:typ e="basic:ANY"/> </afp:AttributeRule> <afp:AttributeRule attributeID="awsRoleSessionName"> <afp:PermitValueRule xsi:type="basic:ANY"/> </afp:AttributeRule> </afp:AttributeFilterPolicy> This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: 
https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 27 of 33 Enable Login Using OpenLDAP as a User Store Shibboleth supports several authentication methods By default remote user authentication is configured which passes through authentication from Tomcat To authenticate against OpenLDAP you must disable remote user authentication and enable user name/password authentication User name/password authentication via JAAS and the loginconfig file are already defined in the configuration file; you just need to uncomment it Follow these steps: 1 In the Amazon EC2 instance edit the following file /home/ubuntu/server/ shibidp/conf/handlerxml 2 Comment out the following block <ph:LoginHandler xsi:type="ph:RemoteUser"> 3 Uncomment the following block and then save and close the file <ph:LoginHandler xsi:type="ph:UsernamePassword > 4 Edit the following file in order to configure the OpenLDAP connection parameters /home/ubuntu/server/shibidp/conf/loginconfig 5 Find the block that begins with Example LDAP authentication Replace the entire commented section (which begins with eduvtmiddleware ) with the following block eduvtmiddlewareldapjaasLdapLoginModule required ldapUrl="ldap://localhost" baseDn="ou=Peopledc=exampledc=com" bindDn="cn=admindc=exampledc=com" bindCredential="password" userFilter="uid={0}"; Configure Shibboleth to Talk to AWS Now you have an OpenLDAP directory and Shibboleth configured to use that identity store and you have created IAM entities that AWS needs to establish This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 28 of 33 trust with Shibboleth The only thing left is to establish trust between Shibboleth (as the IdP) and AWS (as a service provider) You do this by configuring Shibboleth with the location of the AWS SAML 20 metadata document A metadata document contains all the information needed for two parties to communicate such as Internet endpoints and public ke ys Shibboleth can automatically refresh AWS metadata when AWS changes it by using a FileBackedHTTPMetadataProvider object Alternatively if an administrator wants to control the relationship manually the administrator can manually download the metadata and use a FileSystemMetadataProvider 1 In your Amazon EC2 instance edit the following file /home/ubuntu/server/shibidp/conf/relying partyxml 2 In the Metadata Configuration section just below the IdPMD entry add the following <metadata:MetadataProvider id="AWS" xsi:type="metadata:FileBackedHTTPMetadataProvider" metadataURL="https://signinawsamazoncom/static/saml metadataxml" backingFile="/home/ubuntu/server/shibidp/metadata/awsxml" /> The file contains settings that cause Shibboleth to apply a set of default configurations to AWS You can find these settings inside the DefaultRelyingParty and AnonymousRelyingParty blocks 3 To change the configuration for a specific relying party insert the following block after the DefaultRelyingParty block (after the closing </DefaultRelyingParty> tag) <rp:RelyingParty id="urn:amazon:webservices" provider="https://idpexamplecom/idp /shibboleth" defaultSigningCredentialRef="IdPCredential"> <rp:ProfileConfiguration xsi:type="saml:SAML2SSOProfile" includeAttributeStatement="true" assertionLifetime="PT5M" assertionProxyCount="0" This paper has been archived For the latest technical 
content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 29 of 33 signResponses="never" si gnAssertions="always" encryptAssertions="never" encryptNameIds="never" includeConditionsNotBefore="true" maximumSPSessionLifetime="PT1H" /> </rp:RelyingParty> With This configuration you can specify the following: • defaultSignin gCredentialRef – The keys used to sign and encrypt requests • ProfileConfiguration – Which SAML 1x or SAML 20 profiles to respond to Keep in mind that AWS supports only SAML2SSOProfile • assertionLifetime – The length of time (expiration) for the user to provide the authentication information to AWS before it is no longer valid • signResponses/signAssertions – The portions of the response to sign • maximumSPSessionLifetime – The length of a session that AWS should provide based on the authentication information provided Test Configuration Changes by Using AACLI You have configured Shibboleth! To apply the Shibboleth configuration changes you must restart Tomcat However before you do that it is best to test the configuration You can use the attribute authority command line interface (AACLI) tool to simulate Shibboleth's attribute construction based on an arbitrary configuration directory This allows you to copy a working configuration to a test directory modify it test it and then copy it back For the sake of this example you set up AACLI to test the live configuration 1 Edit the following file ~/bashrc 2 Add the following block to the file and then save and close the file echo "alias aacli='sudo E /home/ubuntu/server/shibidp/bin/aaclish configDir=/home/ubuntu/server/shibidp/conf' " >> ~/bashrc 3 Run the following source command source ~/bashrc This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 30 of 33 4 Run the following AACLI command aacli requester "urn:amazo n:webservices" principal bobby The attributes that are constructed for a given principal can be tested by filling in a principal's OpenLDAP uid (In this case you use the principal bobby which exists in the example LDAP database) If all goes well the command displays XML information that could be directly injected into a SAML 20 attribute statement block If you see a series of stack traces instead a misconfiguration is present Check the settings for the OpenLDAP data connector and the syntax of all the XML configuration files 5 After the AACLI begins returning attributes stop and then restart Tomcat by using the following commands sudo /home/ubuntu/server/tomcat/bin/shutdownsh sudo /home/ubuntu/server/tomcat/bin/startupsh Ensure that no stack traces occur in Tomcat or in the Shibboleth logs Step 6: Test Shibboleth Federation As soon as the previous testing is working you can test federation to AWS In the Amazon EC2 instance open a browser and navigate to the following URL https://idpexamplecom/idp/profile/SAML2/Unsolicited/SSO?p roviderId=urn:amazon:webservices This initiates the SSO flow to AWS and you see the page shown in Figure 11 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 31 of 33 Figure 1 1: The Custom Login Page 
for the AWS Management Console Type the user name bobby and use password for the password (In the sample LDAP data all the passwords are password ) You then go to the AWS Management Console as shown in Figure 12 Figure 1 2: Console for a User Logged In as Charlie Using a Role Named CSE To try a different user log out by navigating to https://idpexamplecom/idp/profile/Logout Then try logging in as user Dean Notice that this user is unable to federate This is because the HR role policy specifies that the SAML:eduPersonPrimaryOrgUnitD N must be ou=hrdc=exampledc=com The user bobby has this and can federate as a member of the HR department However Dean's primary organizational unit is ou=Peopledc=exampledc=com As noted earlier administrators have the flexibility to control access in two places The first place is on the Shibboleth side in the attribute resolver by attaching specific AWS role attributes to specific users The role that is associated with a user then determines what the user can do in AWS The second place is in the IAM role trust policy where you can add conditions based on SAML This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 32 of 33 attributes that limit who can assume the role It is up to you to choose which of these two strategies to use (or both) For a complete list of attributes that you can use in role trust policies see the IAM documentation 13 Conclusion Now that you have integrated your onpremises LDAP infrastructure into IAM you can spend less time on synchronizing permissions between onpremises and the cloud The combination of SAML attributes and RBAC means you can author finegrained access control policies that address your LDAP user data an d your AWS resources Further Reading For more information about installing and configuring OpenLDAP and Shibboleth see the following: • Installing an OpenLDAP server 14 • How To Install and Configure a Basic LDAP Server on an Ubuntu 1204 VPS15 • LDIF examples16 • Edit the Tomcat Configuration File17 • Preparing Apache Tomcat for the Shibboleth Identity Provider18 For Shibboleth attributes and authentication responses the Shibboleth documentation wiki provides extensive information These topics contributed to the creation of this tutorial: • LDAP Data Connector19 • Shibboleth attributes: o Define and Release a New Attribute in an IdP20 o Simple Attribute Definition21 o Mapped Attribute Definition22 o Define a New Attribute Filter23 • Shibboleth User Name/Password Handler24 • Adding Metadata providers25 • PerService Provider Configuration26 Notes 1 http://enwikipediaorg/wiki/Ldap 2 http://awsamazoncom/aboutaws/whatsnew/2013/11/11/aws identityand accessmanagementiamadds supportfor samlsecurityassertionmarkup language20/ 3 See the “Install Tomcat” section 4 See the “Install Shibboleth IdP” section 5 See the “Configure Shibboleth IdP” section This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Single Sign On: Integrating AWS OpenLDAP and Shibboleth April 2015 Page 33 of 33 6 http://docsawsamazoncom/AWSEC2/latest/UserGuide/AccessingInstancesh tml 7 http://docsawsamazoncom/awsaccountbilling/latest/aboutv2/billing free tierhtml 8 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EC2_GetStartedhtm l 9 
http://docsawsamazoncom/AWSEC2/latest/UserGuide/AccessingInstancesh tml 10 http://enwikipediaorg/wiki/Ldap 11 http://tomcatapacheorg/ 12 https://consoleawsamazoncom/iam/home?#home 13 http://docsawsamazoncom/IAM/latest/UserGuide/AccessPolicyLanguage_E lementDescriptionshtml#conditionkeyssaml 14 https://helpubuntucom/lts/serverguide/openldap serverhtml#openldap serverinstallation 15 https://wwwdigitaloceancom/community/articles/howtoinstalland configureabasicldapserveronanubuntu 1204vps 16 http://wwwzytraxcom/books/ldap/ch5/step4html#step4ldif 17 http://tomcatapacheorg/tomcat70 doc/ssl howtohtml#Edit_the_Tomcat_Configuration_File 18 https://wikishibbolethnet/confluence/display/SHIB2/IdPApacheTomcatPrep are 19 https://wikishibbolethnet/confluence/display/SHIB2/ResolverLDAPDataCo nnector 20 https://wikishibbolethnet/confluence/display/SHIB2/IdPAddAttribute 21 https://wikishibbolethnet/confluence/display/SHIB2/ResolverSi mpleAttribu teDefinition 22 https://wikishibbolethnet/confluence/display/SHIB2/ResolverMappedAttrib uteDefinition 23 https://wikishibbolethnet/confluence/display/SHIB2/IdPAddAttributeFilter 24 https://wikishibbolethnet/confluence/display/SHIB2/IdPAuthUserPass 25 https://wikishibbolethnet/confluence/display/SHIB2/IdPMetadataProvider 26 https://wikishibbolethnet/confluence/display/SHIB2/IdPRelyingParty
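As an optional sanity check of the directory data that drives the role mapping tested in Step 6, you can query OpenLDAP directly for the organizational-unit attributes of a user. The following is a minimal sketch, assuming the ldapsearch alias and LDAP_ADMIN variable defined in Step 2 and the sample user bobby loaded from the example LDIF files; you are prompted for the LDAP administrator password.

# Show the OU attributes that Shibboleth maps to the BIO, CSE, and HR roles
# (assumes the Step 2 aliases; the uid bobby comes from the sample data)
ldapsearch $LDAP_ADMIN -b "ou=people,dc=example,dc=com" "(uid=bobby)" \
  eduPersonOrgUnitDN eduPersonPrimaryOrgUnitDN

If the attributes returned here do not match what you expect, fix the LDIF data before troubleshooting the Shibboleth attribute resolver or the IAM role trust policies.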
|
General
|
consultant
|
Best Practices
|
Sizing_Cloud_Data_Warehouses
|
Sizing Cloud Data Warehouses Recommended Guidelines to Sizing a Cloud Data Warehouse January 2019 This document has been archived For the latest technical content about cloud data warehouses see the AWS Whitepapers & Guides page: https//awsamazoncom/whitepapers ArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents AWS’s current product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS’s prod ucts or services are provided “as is” without warranties representations or conditions of any kind whether express or implied AWS’s responsibilities and liabilities to its customers are controlled by AWS agreements and this document is not part of no r does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Sizing Guidelines 2 Redshift Cluster Resize 4 Conclu sion 5 Contributors 6 Document Revisions 6 ArchivedAbstract This whitepaper describes a process to determine an appropriate configuration for your migration to a cloud data warehouse This process is appropriate for typical data migrations to a cloud data warehouse such as Amazon Redshift The intended audience includes database administrators data engineers data architects and other data warehouse stakeholders Whether you are performing a PoC (proof of concept) evaluation a production migration or are migrating from an on premises appliance or another cloud data warehouse you can follow the simple guide lines in this whitepaper to help you increase the chances of delivering a data warehouse cluster with the desired storage performance and cost profile ArchivedArchivedAmazon Web Services Sizing Cloud Data Warehouses Page 1 Introduction One of the first tasks of migrating to any data warehouse is s izing the data warehouse appropriately by determining the appropriate number of cluster nodes and their compute and storage profiles Fortunately with cloud data warehouses such as Amazon Redshift it is a relatively straightforward task to make immediate course corrections to resize your cluster up or down However sizing a cloud data warehouse based on the wrong type of information can lead to your PoC evaluations and production environments being executed on suboptimal cluster configurations Resizing a cluster might be easy but repeating PoCs and dealing with change control procedures for production environments can potentially be more time consuming risky and costly which puts your project milestones at risk Migrations of several petabyt es to exabytes of uncompressed data typically benefit from a more holistic sizing approach that factors in existing data warehouse properties data profiles and workload profiles Holistic sizing approach es are more involved and fall under the category of professional services engagement For more information contact AWS ArchivedSizing Cloud Data Warehouses Amazon Web Services Page 2 Sizing Guidelines For migrations of approximately one petabyte or less of uncompressed data you can use a simple storage centric sizing approach to identify an appropriate data wareh ouse cluster configuration With the simple sizing approach your organization’s uncompressed data size is the key input for sizing your Redshift cluster However you must refine that size a little Redshift 
typically achieves 3x-4x data compression, which means that the data that is persisted in Redshift is typically 3-4 times smaller than the amount of uncompressed data. In addition, it is always a best practice to maintain 20% of free capacity in a Redshift cluster, so you should increase your compressed data size by a factor of 1.25 to ensure a healthy amount (20%) of free space. The simple sizing approach can be represented by this equation:

Required cluster storage = (uncompressed data size / compression ratio) x 1.25

This equation is appropriate for typical data migrations, but it is important to note that suboptimal data modelling practices could artificially lead to insufficient storage capacity. Amazon Redshift has four basic node types, or instance types, with different storage capacities. For more information on Redshift instance types, see the Amazon Redshift Clusters documentation.

Basic Redshift cluster information

Instance Family   Instance Name   vCPUs   Memory      Storage        # Slices
Dense storage     ds2.xlarge      4       31 GiB      2 TB HDD       2
                  ds2.8xlarge     36      244 GiB     16 TB HDD      16
Dense compute     dc2.large       2       15.25 GiB   160 GB SSD     2
                  dc2.8xlarge     32      244 GiB     2.56 TB SSD    16

In an example scenario, the fictitious company Example.com would like to migrate 100 TB of uncompressed data from its on-premises data warehouse to Amazon Redshift. Using a conservative compression ratio of 3x, you can expect that the compressed data profile in Redshift will decrease from 100 TB to approximately 33 TB. You factor in an additional 20% size increase to ensure a healthy amount of free space, which will give you approximately 42 TB of storage capacity in your Redshift cluster. You now have your target storage capacity of 42 TB. There are multiple Redshift cluster configurations that can satisfy that requirement. The Example.com VP of Data Analytics wants to start out small, select the least expensive option for their cloud data warehouse, and then scale up as necessary. With that extra requirement, you can configure your Redshift cluster using the dense storage ds2.xlarge node type, which has 2 TB of storage capacity. With this information, your simple sizing equation is:

(100 TB / 3) x 1.25 = approximately 42 TB, and 42 TB / 2 TB per ds2.xlarge node = 21 nodes

You should also consider the following information about this example Redshift cluster configuration:

Cluster Type     Instance Type   Nodes   Memory      Compute     Storage   Cost ($/month)
Dense storage    ds2.xlarge      21      651 GiB     84 cores    42 TB     $x
                 ds2.8xlarge     3       732 GiB     108 cores   48 TB     $1.2x
Dense compute    dc2.8xlarge     17      4,148 GiB   544 cores   44 TB     $4.52x

If initial testing shows that the Redshift cluster you selected is under- or over-powered, you can use the straightforward resizing capabilities available in Redshift to scale the Redshift cluster configuration up or down for the necessary price and performance.

Redshift Cluster Resize
After your data migration from your on-premises data warehouse to the cloud is complete, over time it is normal to make incremental node additions or removals from your cloud data warehouse. These changes help you to maintain the cost, storage, and performance profiles you need for your data warehouse. With Amazon Redshift there are two main methods to resize your cluster:
• Elastic resize – Your existing Redshift cluster is modified to add or remove nodes, either manually or with an API call. This resize typically requires approximately 15 minutes to complete. Some tasks might continue to run in the background, but your Redshift cluster is fully available during that time.
• Classic resize – Enables a Redshift cluster to be reconfigured with a different
node count and instance type. The original cluster enters read-only mode during the resize, which can take multiple hours.
In addition, Amazon Redshift supports concurrency-based scaling, which is a feature that adds transient capacity to your cluster during concurrency spikes. This in effect temporarily increases the number of Amazon Redshift nodes processing your queries. With concurrency scaling, Redshift automatically adds transient clusters to your Redshift cluster to handle concurrent requests with consistently fast performance. This means that your Redshift cluster is temporarily scaled up with additional compute nodes to provide increased concurrency and consistent performance.
For more information about resizing a Redshift cluster, see:
• Resizing a Cluster (Redshift Documentation) https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#cluster-resize-intro
• Elastic Resize (Redshift Documentation) https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-redshift-elastic-resize/
• Elastic Resize (Blog Post) https://aws.amazon.com/blogs/big-data/scale-your-amazon-redshift-clusters-up-and-down-in-minutes-to-get-the-performance-you-need-when-you-need-it/
• Concurrency Scaling (Blog Post) https://www.allthingsdistributed.com/2018/11/amazon-redshift-performance-optimization.html

Conclusion
It is important that you size your cloud data warehouse using the right information and approach. Although it is easy to resize a cloud data warehouse (such as Amazon Redshift) up or down to achieve a different cost or performance profile, the change control procedures for modifying a production environment, repeating a PoC evaluation, and so on could pose significant challenges to project milestones. You can follow the simple sizing approach outlined in this whitepaper to help you identify the appropriate cluster configurations for your data migration.

Contributors
Contributors to this document include:
• Asser Moustafa, Solutions Architect Specialist, Data Warehousing
• Thiyagarajan Arumugam, Solutions Architect Specialist, Data Warehousing

Document Revisions
Date            Description
January 2019    First publication
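To make the sizing arithmetic above concrete, the following is a minimal sketch of the simple sizing equation. The inputs reflect the Example.com scenario from the Sizing Guidelines section (100 TB uncompressed, a conservative 3x compression ratio, a 1.25 free-space factor, and 2 TB per ds2.xlarge node); substitute your own values.

# Simple storage-centric sizing (a sketch of the equation above)
UNCOMPRESSED_TB=100      # uncompressed data size
COMPRESSION_RATIO=3      # conservative 3x compression
FREE_SPACE_FACTOR=1.25   # keep ~20% free capacity
NODE_CAPACITY_TB=2       # ds2.xlarge storage per node

awk -v u="$UNCOMPRESSED_TB" -v c="$COMPRESSION_RATIO" \
    -v f="$FREE_SPACE_FACTOR" -v n="$NODE_CAPACITY_TB" 'BEGIN {
  target = (u / c) * f                 # required cluster capacity in TB
  nodes  = int(target / n)             # round up to whole nodes
  if (nodes * n < target) nodes++
  printf "Target capacity: %.1f TB -> %d x %g TB nodes\n", target, nodes, n
}'
# Prints: Target capacity: 41.7 TB -> 21 x 2 TB nodes

The 21-node result matches the ds2.xlarge configuration shown in the example cluster table.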
|
General
|
consultant
|
Best Practices
|
SoftNAS_Architecture_on_AWS
|
ArchivedSoftNAS Architecture on AWS April 201 7 This paper has been archived For the latest technical content about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers SoftNAS and the SoftNAS logo are trademarks or registered trademarks of SoftNAS Inc All rights reserved ArchivedContents Introduction 1 About SoftNAS Cloud 1 Architecture Considerations 1 Application and Data Security 1 Performance 3 Using Amazon S3 with SoftNAS Cloud 9 Network Security 10 Data Protection Considerations 13 SoftNAS Cloud is Copy OnWrite (COW) File System 14 Automatic Error Detection and Correction 14 SoftNAS Cloud Snapshots 15 SoftNAS SnapClones™ 16 Amazon EBS Snapshots 17 Deployment Scenarios 17 HighAvailability Architecture 17 Single Controller Architecture 20 Hybrid Cloud Architecture 21 Automation Options 23 Conclusion 25 Contributors 25 Further Reading 26 SoftNAS References 26 Amazon Web Services References 26 ArchivedAbstract Network Attached Storage (NAS) software is commonly deployed to provide shared file services data protection and high availability to users and applications SoftNAS Cloud a popular NAS solution that can be deployed from the Amazon Web Services (AWS) Marketplace is designed to support a variety of market verticals use cases and workload types Increasingly SoftNAS Cloud is deployed on the AWS platform to enable block and file storage services through Common Internet File System (CIFS) Network File System (NFS) Apple File Protocol (AFP) and Internet Small Computer System Interface (iSCSI) This paper addresses architectural considerations when deploying SoftNAS Cloud on AWS It also provides best practice guidance for security performance high availability and backup ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 1 Introduction Network Attached Storage (NAS) systems enable data and file sharing and are used for businesscritical applications and data management NAS syste ms are optimized to balance performance interoperability data reliability and recoverability Although widely deployed by IT in traditional data center environments NAS software is increasingly used on AWS a flexible cost effective easy touse cloudcomputing platform Deploying NAS on Amazon Elastic Compute Cloud (Amazon EC2) provides a solution for applications that require the benefits of NAS storage in a software form factor1 About SoftNAS Cloud SoftNAS Cloud is a softwaredefined NAS filer delivered as a virtual appliance running within Amazon EC2 SoftNAS Cloud provides NAS capabilities suitable for the enterprise including MultiAvailability Zone (Multi AZ) high availability with automatic failover in the AWS Cloud 
SoftNAS Cloud, which runs within the customer's AWS account, offers business-critical data protection required for nonstop operation of applications, websites, and IT infrastructure on AWS. This paper doesn't cover all SoftNAS Cloud features. For more information, see www.softnas.com.2

Architecture Considerations

This section provides information critical to a successful SoftNAS Cloud installation. This information includes application and data security, performance, interaction with Amazon Simple Storage Service (Amazon S3),3 and network security.

Application and Data Security

Security and protection of customer data are the highest priorities when working with SoftNAS Cloud on AWS. When you use SoftNAS Cloud in conjunction with AWS security features such as Amazon Virtual Private Cloud (Amazon VPC),4 Amazon VPC security groups, and AWS Identity and Access Management (IAM) roles, you can deploy a secure data storage solution.

SoftNAS Cloud uses the CentOS Linux distribution, which is managed, updated, and patched as part of a normal release cycle. You can use SoftNAS StorageCenter™, the web-accessible SoftNAS Cloud administration console, to check the current software revision and apply available updates. For security and compliance reasons, the SoftNAS technical support team should approve any custom package before it is installed on a SoftNAS Cloud instance.

Web-based administration through SoftNAS StorageCenter is SSL-encrypted and password-protected by default. Optional two-factor authentication is also available. You can administer SoftNAS Cloud through SSH and a secure REST API. On AWS, all SSH sessions use public/private key access control, and logging in as root is prohibited. Administrative access through the API and command line interface (CLI) over SSH is SSL-encrypted and authenticated. Iptables, a commonly used software firewall, is included with SoftNAS Cloud and can be customized to accommodate more restrictive and finer-grained security controls.

Data access is performed across a private network by Network File System (NFS), Common Internet File System (CIFS), Apple File Protocol (AFP), and Internet Small Computer System Interface (iSCSI). You can also restrict the list or range of client addresses allowed to perform data access. SoftNAS Cloud offers encryption options for data security, both in flight and at rest.

If NFS is used, all Linux authentication schemes are available, including Network Information Service (NIS), Lightweight Directory Access Protocol (LDAP), Kerberos, and restrictions based on the user ID (UID) and group ID (GID). Using CIFS, you manage security through SoftNAS StorageCenter, which facilitates basic Windows user and group permissions. Active Directory integration is supported for more advanced user and permissions management in Windows environments.

The SnapReplicate™ feature provides block-level replication between two SoftNAS Cloud instances. SnapReplicate between source and target SoftNAS Cloud instances sends all data through encrypted SSH tunnels and authenticates using RSA (public key infrastructure, PKI). Data is encrypted in transit using industry-standard ciphers. The default cipher for encryption is Blowfish-CBC, selected for its balance of speed and security. However, you can use any cipher supported by SSH, including AES 256-bit CBC.

SoftNAS Cloud uses the IAM service to control the SoftNAS Cloud appliance's access to other AWS services.5 IAM roles are designed to allow applications to securely make API calls from an instance without requiring the explicit management and storage of access keys. When an IAM role is applied to an EC2 instance, the role handles key management, rotating keys periodically and making them available to applications through Amazon EC2 metadata.
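To make the relationship between the appliance and IAM more concrete, the following minimal sketch shows one way such an instance role could be created with the AWS SDK for Python (boto3). The role name, instance profile name, and choice of the managed AmazonS3FullAccess policy are illustrative assumptions rather than values prescribed by SoftNAS or AWS; a production deployment should scope permissions to the specific buckets and services the appliance needs.

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets EC2 instances assume the role (standard EC2 service principal)
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Hypothetical names used for illustration only
role_name = "softnas-appliance-role"
profile_name = "softnas-appliance-profile"

iam.create_role(RoleName=role_name,
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Broad managed policy for brevity; restrict to specific buckets in practice
iam.attach_role_policy(RoleName=role_name,
                       PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess")

# An instance profile is the container that attaches the role to an EC2 instance
iam.create_instance_profile(InstanceProfileName=profile_name)
iam.add_role_to_instance_profile(InstanceProfileName=profile_name,
                                 RoleName=role_name)

The instance profile can then be supplied when the appliance instance is launched (for example, through the IamInstanceProfile parameter of ec2.run_instances), after which the SDKs and tools on the instance pick up the automatically rotated temporary credentials.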
Performance

The performance of a NAS system on Amazon EC2 depends on many factors, including the Amazon EC2 instance type, the number and configuration of Amazon Elastic Block Store (Amazon EBS) volumes,6 the type of Amazon EBS volume, the use of Provisioned IOPS with Amazon EBS, and the application workload. Benchmark your application on several Amazon EC2 instance types and storage configurations to select the most appropriate configuration.

SoftNAS Cloud provides Amazon Machine Images (AMIs) that support both paravirtual (PV) and hardware virtual machine (HVM) virtualization. To take advantage of special hardware extensions (CPU, network, and storage) and for optimal performance, SoftNAS recommends that you use a current-generation instance type and an HVM AMI with single root input/output virtualization (SR-IOV) support.

To increase the performance of your system, you need to know which of the server's resources is the performance constraint. If CPU or memory limits your system performance, you can scale up the memory, compute, and network resources available to the software by choosing a larger Amazon EC2 instance type. Use StorageCenter dashboard performance charts and Amazon CloudWatch to monitor your performance and throughput metrics.7

Depending on the instance type and size chosen, EC2 instances are allocated varying amounts of CPU, memory, and network capability. Some instance families have higher ratios of CPU to memory, or higher ratios of memory to CPU. In general, to achieve the best performance from your SoftNAS Cloud virtual appliance, select an instance with a large amount of memory, up to 70 percent of which will be dedicated to high-speed dynamic random-access memory (DRAM) cache. If you require more than 120 MB/s NAS throughput for more demanding use cases, select an instance with advanced networking; AWS provides instances that support 10 and 20 Gbps network interfaces. If available, choose an EBS-optimized instance, which uses a dedicated network path to EBS storage.

For production workloads, SoftNAS recommends starting with a larger EC2 instance size, coupled with monitoring of CloudWatch metrics as workloads are increased to their typical levels. This ensures applications have sufficient IOPS and throughput as they're brought online. Continue monitoring the application using SoftNAS StorageCenter and CloudWatch metrics, in particular CPU and network usage, to determine how well the chosen instance size is serving your unique workloads. After a period of time (for example, 30 days) with your workload in production, it will become apparent whether the instance is well matched to the production workloads. As your load increases, if CPU or network usage reaches 75 percent or higher, you might need to increase the instance size to achieve full throughput at low latencies. If CPU and network usage are below 40 to 50 percent, you can consider decreasing the instance size during a maintenance window to reduce operating costs.

SoftNAS does not recommend using T1 or T2 instances because they are designed for burstable workloads and can run out of CPU credits during sustained heavy usage. At the time of this writing, SoftNAS recommends the m4.2xlarge as a minimum default AWS instance size, the m4.4xlarge for medium workloads, and the m4.10xlarge for heavier workloads, as shown in Figure 1. A SoftNAS representative can help with further sizing guidance.
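As a sketch of the monitoring loop described above, the following Python (boto3) snippet pulls average and maximum CPU utilization for the appliance instance over the last 30 days so the values can be compared against the 75 percent and 40 to 50 percent thresholds mentioned here. The instance ID is a placeholder, and network metrics (for example, NetworkIn and NetworkOut) could be queried the same way.

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")
instance_id = "i-0123456789abcdef0"  # placeholder for the SoftNAS appliance instance

end = datetime.datetime.utcnow()
start = end - datetime.timedelta(days=30)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    StartTime=start,
    EndTime=end,
    Period=3600,            # one datapoint per hour
    Statistics=["Average", "Maximum"],
)

# Flag hours where the instance looks undersized per the guidance above
hot_hours = [p for p in stats["Datapoints"] if p["Maximum"] >= 75.0]
print(f"{len(hot_hours)} of {len(stats['Datapoints'])} hours exceeded 75% CPU")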
About RAM Usage

SoftNAS Cloud allocates 50 percent of available RAM for use as Zettabyte File System (ZFS) file system cache. The remaining RAM is used by the Linux operating system, SoftNAS Cloud processes, and NAS services. It's typical to see 80 to 90 percent of RAM allocated and in use.

Figure 1: AWS instance to workload (later instance families are also supported)

If your performance is limited by disk I/O, you can make configuration changes to improve the performance of your disk and caching resources.

Multilevel Cache

Read-intensive workloads benefit from additional RAM as level 1 cache (ZFS ARC) plus level 2 cache (ZFS L2ARC). Leverage the ephemeral SSD disks attached to certain EC2 instances to provide additional high-speed read cache. Because data on ephemeral disks can be lost whenever an EC2 instance stops and restarts, or if the underlying hardware changes or fails, use ephemeral disks only for read cache purposes and never as a write log.

Amazon EBS Performance Optimizations

Because Amazon EBS is connected to an EC2 instance over the network, instances with higher network bandwidth can provide more Amazon EBS performance. Some instance types support the Amazon EBS-optimized flag (ec2:EbsOptimized). This flag provides a dedicated network interface for Amazon EBS-bound traffic (storage I/O) and is designed to reduce variability in storage performance due to contention with network I/O. The Amazon EBS documentation provides an outline of expected bandwidth, throughput, and maximum IOPS per instance type and size.8

For SSD-based volume types, Amazon EBS measures an I/O operation as one that is 256 KB or smaller; I/O operations larger than 256 KB are counted in 256 KB increments. For example, a 1,024 KB I/O would count as four 256 KB I/O operations. Amazon EBS also combines smaller I/O operations into a single operation where possible to achieve higher performance for all volume types.

Benefits of Each EBS Volume Type and Relevant Storage Application

Magnetic Backed

Magnetic-backed volume types support higher block sizes, up to 1,024 KB. The Throughput Optimized HDD (st1) and Cold HDD (sc1) Amazon EBS volume types are based on magnetic storage technology. The Throughput Optimized HDD (st1) volume type is designed for sequential read/write workloads (for example, big data). It can achieve very high throughput (500 MB/s) for sequential read/write workloads, compared to 160 MB/s and 320 MB/s for SSD-backed gp2 and io1, respectively. Generally, big data workloads operate on very large sequential datasets and generate data for storage in a similar way. The st1 volume type has a baseline performance of 40 MB/s per terabyte (TB) of allocated storage and, like gp2, can burst beyond the baseline performance for a short period of time.

The Cold HDD (sc1) volume type is designed for high-density and infrequent-access workloads. This volume type is suitable for cold storage (infrequent access) applications where low cost is important. Unlike st1, the baseline performance of an sc1 volume is 12 MB/s per TB of allocated storage. It's important to note that Amazon S3 achieves high availability (HA) by default within a single Region, whereas sc1 volumes have to be mirrored across Availability Zones to achieve parity with Amazon S3 in durability and availability of the data. (This doubles or triples the cost of sc1 when compared to Amazon S3.) Nevertheless, depending on certain access patterns of the data (for example, cold versus warm), sc1 volumes can be cheaper for certain workloads.

SSD Backed

General Purpose (gp2) and Provisioned IOPS (io1) SSD volumes can achieve faster IOPS performance and very high throughput on random read/write workloads when compared to magnetic disks, but at a higher price point. However, gp2 and io1 volume types are limited to a throughput of 160 MB/s and 320 MB/s, respectively. General Purpose (gp2) volumes provide a fixed 1:3 ratio between gigabytes and IOPS provisioned, so a 100 GB General Purpose volume provides a baseline of 300 IOPS. Gp2 volumes less than 1 TB in size can also burst for short periods, up to 3,000 IOPS. You can provision General Purpose volumes up to 16 TB and 10,000 IOPS.

Provisioned IOPS (io1) volumes are intended for workloads that demand consistent performance, such as databases. You can create Provisioned IOPS volumes up to 16 TB and 20,000 IOPS. Over a year, Amazon EBS Provisioned IOPS volumes are designed to deliver within 10 percent of the Provisioned IOPS performance 99.9 percent of the time. There are differences in total throughput capabilities between Provisioned IOPS (io1) and General Purpose SSD (gp2) volumes: io1 volumes are designed to provide up to 320 MB/s of throughput, while gp2 volumes are designed to provide up to 160 MB/s.
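As a worked example of the gp2 figures quoted above (3 IOPS per provisioned GB, burst to 3,000 IOPS for volumes under 1 TB, and a 10,000 IOPS ceiling), the following small Python helper estimates baseline and burst IOPS for a given volume size. It encodes only the numbers cited in this paper; current Amazon EBS limits may differ, so treat it as an illustration rather than a sizing tool.

def gp2_iops_estimate(volume_gib: int) -> dict:
    """Estimate gp2 baseline/burst IOPS using the figures quoted in this paper."""
    baseline = min(volume_gib * 3, 10_000)   # 1:3 GB-to-IOPS ratio, 10,000 IOPS cap
    can_burst = volume_gib < 1_000           # volumes under 1 TB can burst
    burst = 3_000 if can_burst else baseline
    return {"baseline_iops": baseline, "burst_iops": max(burst, baseline)}

# A 100 GB volume -> 300 IOPS baseline, burstable to 3,000 IOPS
print(gp2_iops_estimate(100))
# A 4 TB volume -> capped at the 10,000 IOPS ceiling cited above, no burst
print(gp2_iops_estimate(4_000))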
RAID

If you need more I/O capability than a single volume can provide, you can create an array of volumes with redundant array of independent disks (RAID) software to aggregate the performance capabilities of each volume in the array. For example, a stripe of two 4,000 IOPS volumes allows for a theoretical maximum of 8,000 IOPS. RAID 0 and RAID 10 are the two RAID levels recommended for use with Amazon EBS.

RAID 0, or striping, has the advantage of providing a linear performance increase with every volume added to the array (up to the maximum capabilities of the host instance). Two 4000 IOPS volumes provide 8000 IOPS, three
larger read cache choose instance types with ephemeral SSD locally attached disks and attach an SSD cache device to each storage pool To ensure their availability attach local SSD ephemeral disks to the SoftNAS instance when you create the instance Many instance types provide instance store or “ephemeral” volumes Although SoftNAS doesn ’t support the use of these volumes for dataset storage you can use them as a read cache for storage pools These volumes are located physically inside the underlying host of the instance and are not affected by performance variability from network overhead Although this varies by instance type most instancestore volumes (especially on newer instance types) are SSD volumes However stopping and starting an instance can move it to another underlying host which causes all data on these volumes to be lost This isn’t an issue for caching but is detrimental for dataset storage ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 9 If you require additional write caching or IOPS you can attach SSD backed Amazon EBS volumes to a storage pool The use of locally attached ephemeral disks for write cache isn ’t recommended Consider your workload requirements and priorities If the amount of storage and cost take priority over speed magnetic EBS volumes might be the right choice General Purpose SSD or Provisioned IOPS volumes offer the best mix of price performance and total storage space With AWS and SoftNAS Cloud you can add more storage or configure a different type of storage on the fly to address a variety of price or performance needs Using Amazon S3 with SoftNAS Cloud SoftNAS Cloud provides support for a feature known as SoftNAS S3 Cloud Disks These are abstractions of Amazon S3 storage presented as block devices By leveraging Amazon S3 storage SoftNAS Cloud can scale cloud storage to practically unlimited capacity You can provision each cloud disk to hold up to four petabytes (PB) of data If a larger data store is required you can use RAID to aggregate multiple cloud disks Each SoftNAS S3 Cloud Disk occupies a single Amazon S3 bucket in AWS The administrator chooses the AWS Region in which to create the S3 bucket and cloud disk For best performance choose the same r egion for both the SoftNAS Cloud EC2 instance and its S3 buckets SoftNAS Cloud storage pools and volumes using cloud disks have the full enterprisegrade NAS features (for example deduplication compression caching storage snapshots and so on) available and can be readily published for shared access through NFS CIFS AFP and iSCSI When you use a cloud disk use a block device local to the SoftNAS Cloud virtual appliance as a read cache to reduce Amazon S3 I/O charges and improve IOPS and performance for readintensive workloads For best S3 cloud disk performan ce and security specify an S3 endpoint within the VPC in which you deploy SoftNAS Cloud The S3 endpoint ensures S3 traffic is optimally routed through the VPC and not across the NAT gateway or Internet which is slower and less secure ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 10 You can also encrypt S3 cloud disks to protect all Amazon S3 I/O should it need to travel over the Internet or outside a VPC (eg from on premises or across regions ) Network Security Amazon VPC is a logically separated section of the AWS Cloud that provides you with com plete control over the networking configuration This includes the provisioning of an IP space subnet size and scope access control lists and route tables You can configure subnets 
inside an Amazon VPC as either public or private The difference between public and private subnets is that a public subnet has a direct route to the Internet; a private one does not When you configure an Amazon VPC for use with SoftNAS Cloud consider the level of access that your use case requires If the SoftNAS Cloud vir tual appliance does n’t need to be accessed from the Internet consider placing it in private Amazon VPC subnets To leverage SoftNAS S3 Cloud Disks the SoftNAS Cloud virtual appliance must have a way to access the S3 bucket either through the Internet or a configured VPC endpoint A VPC Security Group acts as a virtual firewall for your instance to control inbound and outbound traffic For each Security Group you add rules that control the inbound traffic to instances and a separate set of rules that control the outbound traffic Open only those ports that are required for the operation of your application Restrict access to specific remote subnets or hosts For a SoftNAS Cloud installation determine which ports must be opened to allow access to required services These ports can be divided in to three categories: management file services and high availability Open the following ports to manage SoftNAS Cloud through the SoftNAS StorageCenter and SSH As the following table indicates you should limit the source to hosts and subnets where management clients are located ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 11 Management Type Protocol Port Source SSH TCP 22 Management HTTPS TCP 443 Management When providing file services first determine which services you will provide The following tables show which ports to open for security group configuration As the tables indicate the source should be limited to the clients and subnets that consume these services AFP Type Protoco l Port Source Custom TCP Rule TCP 548 Clients Custom TCP Rule TCP 427 Clients NFS Type Protocol Port Source Custom TCP Rule TCP 111 Clients Custom TCP Rule TCP 2010 Clients Custom TCP Rule TCP 2011 Clients Custom TCP Rule TCP 2013 Clients Custom TCP Rule TCP 2014 Clients Custom TCP Rule TCP 2049 Clients Custom UDP Rule UDP 111 Clients ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 12 Custom UDP Rule UDP 2010 Clients Custom UDP Rule UDP 2011 Clients Custom UDP Rule UDP 2013 Clients Custom UDP Rule UDP 2014 Clients Custom UDP Rule UDP 2049 Clients CIFS/SMB Type Protocol Port Source Custom TCP Rule TCP 137 Clients Custom TCP Rule TCP 138 Clients Custom TCP Rule TCP 139 Clients Custom UDP Rule UDP 137 Clients Custom UDP Rule UDP 138 Clients Custom UDP Rule UDP 139 Clients Custom TCP Rule TCP 445 Clients Custom TCP Rule TCP 135 Clients Active Directory Integration Type Protocol Port Source LDAP TCP 389 Clients ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 13 iSCSI Type Protocol Port Source Custom TCP Rule TCP 3260 Client IPs The following security group configuration is required when you deploy SoftNAS SNAP HA which is discussed later in this whitepaper As the table indicates you should limit the source to the IP addresses of the SoftNAS Cloud virtual appliance High Availability with SNAP HA™ Type Protocol Port Source Custom ICMP Rule Echo Reply 22 SoftNAS Cloud IPs or Security Group ID* Custom ICMP Rule Echo Request 443 SoftNAS Cloud IPs or Security Group ID* * http://docsawsamazoncom/AWSEC2/latest/UserGuide/usingnetwork securityhtml Data Protection Considerations Creating a comprehensive strategy for backing up and restoring data is complex In some industries you 
must consider regulatory requirements for data security privacy and records retention SoftNAS Cloud provides multiple capabilities for data redundancy Always have one or more independent data backups beyond the data redundancy provided by SoftNAS Cloud You can back up data disks using EBS snapshots and thirdparty backup tools to create offsite or other backup copies to protect data SoftNAS Cloud provides multiple levels of data protection and redundancy but it isn’t intended to replace normal data backup processes Instead these levels of redundancy and data protection reduce risks associated with data loss or data ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 14 integrity and provide features that enable rapid recovery often without the need to restore from a backup copy SoftNAS Cloud is CopyOn Write (COW ) File Syst em SoftNAS Cloud leverages the reliable mature ZFS ZFS is a copy onwrite file System which means that existing data is never directly overwritten Instead new data blocks are appended to each file conceptually similar to a tape Figure 2 depicts how the file System inside SoftNAS Cloud maintains multiple versions known as storage snapshots without overwriting the existing data Figure 2: Copy onwrite file system Automatic Error Detection and Correction SoftNAS Cloud automatically detects and corrects unforeseeable data errors These errors can occur over time for many different reasons including bad sectors network or other I/O errors SoftNAS Cloud also provides protection against potential “bit rot” disk media deterioration over time caused by magnetism decay cosmic ray effects and other sporadic issues that can cause data storage or retrieval errors Proven ZFS data integrity measures are implemented by SoftNAS Cloud to detect errors repair them automatically and ensure data integrity is maintained Each read is validated against a 256bit checksum code When ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 15 errors are detected the system automatically repairs the block with the corrected data transparently so applications aren’t affected and data integrity is maintained Periodically administrators can “ scrub ” storage pools to provide even higher levels of data integrity SoftNAS Cloud Snapshots SoftNAS Cloud snapshots are volumebased point intime copies of data StorageCenter provides a rich set of snapshot scheduling and ondemand capabilities As Figure 3 shows snapshots provide file system versioning Figure 3: SoftNAS Cloud volumebased snapshots SoftNAS Cloud snapshots are integrated with Windows Previous Versions which is provided through the Microsoft Volume Shadow Copy Service (VSS ) API This feature is accessible to Windows operating system users through the Previous Versions tab so IT administrators don’t need to assist in file recovery Microsoft server and desktop operating system users can use scheduled snapshots to recover deleted files view or restore a version of a file that was overwritten and compare file versions side by side Operating systems that are supported include Windows 7 Windows 8 Windows Server 2008 and Windows Server 2012 Snapshots consume storage pool capacity so you must monitor usage for over consumption Storage snapshots grow incrementally as file system data is modified over a period of time SoftNAS Cloud automatically manages snapshots based on snapshot policies to prevent snapshots from consuming all available space Several snapshot policies are provided as a starting point and you can also create custom snapshot 
policies Snapshot policies are independent ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 16 of each volume so when a snapshot policy is changed it’s applied across all volumes that reference that policy When allocating storage pool space and choosing snapshot policies be sure to plan for enough additional storage to hold the snapshot data for the retention period you require SoftNAS SnapClones™ SnapClones provide read/write clones of SoftNAS Cloud snapshots They’re created instantaneously because of the spaceefficient copy onwrite model Initially SnapClones take up no capacity and grow only when writes are made to the SnapClone as shown in Figure 4 You can create any number of SnapClones from a storage snapshot Figure 4: SoftNAS SnapClones You can mount SnapClones as external NFS or CIFS shares They’re good for manipulating copies of data that are too large or complex to be practically copied For example testing new application versions against real data and selective recovery of files and folders using the native file browsers of the client operating system You can create a SnapClone instantly even for very large datasets in the tens to hundreds of TBs ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 17 Amazon EBS Snapshots SoftNAS Cloud has a builtin capability to leverage Amazon EBS point intime snapshots to back up EBS based storage pools The Amazon EBS snapshot copies the entire SoftNAS Cloud storage pool for backup and recovery purposes Advantages include the ability to use the AWS Management Console to manage the snapshots Capacity for the Amazon EBS snapshots isn’t counted against the storage pool capacity You can use Amazon EBS snapshots for longerterm data retention Deployment Scenarios The design of your SoftNAS Cloud installation on Amazon EC2 depends on the amount of usable storage and your requirements for IOPS and availability HighAvailability Architecture To realize high availability for storage infrastructure on AWS SoftNAS strongly recommends implementing SNAP HA in a highavailability configuration The SNAP HA functionality in SoftNAS Cloud provides high availability automatic and seamless failover across Availability Zones SNAP HA leverag es secure blocklevel replication provided by SoftNAS SnapReplicate to provide a secondary copy of data to a controller in another Availability Zone SNAP HA also provides both automatic and manual failover High availability and crosszone replication eliminates or minimizes downtime It is not however intended to replace regular data backups which are always required to fully protect important data especially in disaster recovery scenarios There are two methods for achieving high availability across zones: Elastic IP (EIP) addresses and SoftNAS Cloud Private Virtual IPbased HA The use of Private Virtual IPbased HA is recommended for best security performance and lowest cost All NAS traffic remains inside the VPC ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 18 Support for EIP is available for situations that require a “routable” IP address or the rare cases where data shares must be made available over the Internet Of course access via EIP addresses can be locked down using Security Groups Figure 5: Task creation and result aggregation MultiAZ HA operates within a VPC Optionally you can route NAS traffic through a floating EIP combined with SoftNAS patent ed9 HA technology That is NFS CIFS AFP and iSCSI traffic are routed to a primary SoftNAS controller in one Availability Zone and a 
secondary controller operates in a different Availability Zone NAS clients can be located in any Availability Zone SnapReplicate performs block replication from the primary controller A to the backup controller B keeping the secondary updated with the latest changed data blocks once per minute In the event of a failure in Availability Zone 1 (shown in Figure 5) the Elastic HA ™ IP address automatically fails over to controller B in Availability Zone 2 in less than 30 seconds Upon failover all NFS CIFS AFP and iSCSI sessions reconnect with no impact on NAS clients (that is no stale file handles and no need to restart) ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 19 HA with Private Virtual IP Addresses The patent ed9 Virtual IPbased HA technology in SoftNAS Cloud enables you to deploy two SoftNAS Cloud instances across different Availability Zones inside the private subnet of a VPC Then you can configure the SoftNAS Cloud instances with private IP addresses which are completely isolated from the Internet This allows for more flexible deployment options and greater control over access to the appliance In addition using private IP addresses enables faster failover because waiting for an EIP to switch instances isn ’t required Further Virtual IP HA is less costly because there is no I/O flowing across an EIP Instead all traffic remains completely within the VPC For most use cases MultiAZ HA using private virtual IP addresses is the recommended method Failover usually takes place in 15 to 20 seconds from the time a failure is detected SoftNAS Cloud uses patent ed9 techniques that allow NAS clients to stay connected via NFS CIFS iSCSI and AFP in case of a failover ensuring that services are not interrupted and continue to operate without downtime ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 20 Figure 6: Crosszone HA with virtual private IP addresses For more information about implementation and HA design best practices see the SoftNAS High Availability Guide 10 Single Controller Architecture In scenarios where you don’t r equire high availability you can deploy a single controller Figure 7 shows a basic SoftNAS Cloud instance running within a VPC ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 21 Figure 7: Basic SoftNAS Cloud instance running within a VPC In these scenarios you can combine EBS volumes into a RAID 10 ar ray for the storage pool to provide usable storage space with no drive failure redundancy You can also configure storage pools using a SoftNAS S3 Cloud Disk for RAID 0 (striping) for improved performance and IOPS These examples are for illustration purposes only Typically RAID 0 is sufficient as the underlying EBS and S3 storage devices already provide redundancy Volumes are provisioned from the storage pools and then shared through NFS CIFS/SMB AFP or iSCSI Hybrid Cloud Architecture You can deploy SoftNAS Cloud in a Hybrid Cloud architecture in which a SoftNAS Cloud virtual appliance is installed both in Amazon EC2 and on premises This architecture enables replication of data from on premises to Amazon EC2 and vice versa providing synchronized data access to users and ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 22 applications Hybrid Cloud architectures are also useful for backup and disaster recovery scenarios in which AWS can be used as an offsite backup location Replication You can deploy SoftNAS Cloud in Amazon EC2 as a replication target using SnapReplicate This enables scenarios such as data replicas 
disaster recovery and development environments by copying onsite production data into Amazon EC2 as shown in Figure 8 Figure 8: Hybrid Cloud backup and disaster recovery File Gateway to Amazon S3 You can deploy SoftNAS Cloud in file gateway use cases where SoftNAS Cloud operates on premises deployed in local data centers on popular hypervisors such as VMware vSphere SoftNAS Cloud connects to Amazon S3 storage treating Amazon S3 as a disk device The Amazon S3 disk device is added to a storage pool where volumes can export CIFS NFS AFP and iSCSI Amazon S3 is cached with block disk devices for read and write I/O Write I/O is cached at primary storage speeds and then flushed to Amazon S3 at the speed of the WAN When using Amazon S3based volumes with backup software the write cache dramatically shortens the backup window ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 23 Figure 9: SoftNAS Cloud Automation Options This section describes how the SoftNAS Cloud REST API CLI and AWS CloudFormation can be used for automation API and CLI SoftNAS Cloud provides a secure REST API and CLI The REST API provides access to the same storage administration capabilities from any programming language using HTTPS and REST verb commands returning JSONformatted response strings The CLI provides command line access to the API set for quick and easy storage administration Both methods are available for programmatic storage administration by DevOps teams who want to design storage into automated processes For more information see the SoftNAS API and CLI Guide 11 AWS CloudFormation The AWS CloudFormation service enables developers and businesses to create a collection of related AWS resources and provision them in an orderly and predictable way12 SoftNAS Cloud provides sample CloudFormation templates that you can use for automation You can find these templates here and in the Further Reading section of this paper When you work with CloudFormation templates pay ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 24 careful attention to the Instance Type Mappings and User Data sections which are shown in the following examples List all the instance types that you want to appear Edit this section with the latest instance types available Map to the appropriate AMIs here (SoftNAS regularly updates AMIs so this section must be updated accordingly ) This section is used to pass variables to the SoftNAS Cloud CLI for additional configuration ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 25 Conclusion SoftNAS Cloud is a popular NAS option on the AWS Cloud computing platform By following the implementation considerations and best practices highlighted in this paper you will maximize the performance durability and security of your SoftNAS Cloud implementation on AWS For more information about SoftNAS Clo ud see wwwsoftnascom Get a free 30day trial of SoftNAS Cloud now13 Contributors The following individuals and organizations contributed to this document: Eric Olson VP Development SoftNAS Kevin Brown Solutions Architect SoftNAS ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 26 Brandon Chavis Solutions Architect Amazon Web Services Juan Villa Solutions Architect Amazon Web Services Ian Scofield Solutions Architect Amazon Web Services Further Reading SoftNAS References SoftNAS Cloud Installation Guide SoftNAS Reference Guide SoftNAS Cloud High Availability Guide SoftNAS Cloud API and Cloud Guide AWS CloudFormation Templates for HVM Amazon Web Services References Amazon 
Elastic Block Store
Amazon EC2 Instances
AWS Security Best Practices
Amazon Virtual Private Cloud Documentation
Amazon EC2 SLA

Notes

1 http://aws.amazon.com/ec2/
2 http://www.softnas.com/
3 http://aws.amazon.com/s3/
4 http://aws.amazon.com/vpc/
5 http://aws.amazon.com/iam/
6 http://aws.amazon.com/ebs/
7 http://aws.amazon.com/cloudwatch/
8 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html#ebs-optimization-support
9 U.S. Pat. Nos. 9378262; 9584363. Other patents pending.
10 https://www.softnas.com/docs/softnas/v3/snaphahtml/index.htm
11 https://www.softnas.com/docs/softnas/v3/apihtml/
12 http://aws.amazon.com/cloudformation/
13 http://softnas.com/trynow?utm_source=aws&utm_medium=whitepaper&utm_campaign=aws-wp2017
|
General
|
consultant
|
Best Practices
|
Strategies_for_Managing_Access_to_AWS_Resources_in_AWS_Marketplace
|
ArchivedStrategies for Managing Access to AWS Resources in AWS Marketplace July 201 6 This paper has been archived For the latest technical content about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchivedAmazon Web Services – Managing Access to Resources in AWS Marketplace July 2016 Page 2 of 13 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Managing Access to Resources in AWS Marketplace July 2016 Page 3 of 13 Contents Abstract 3 Overview 4 Accessing ApplicationSpecific Resources 4 The EC2 Instance Role 5 Accessing Resources on Behalf of Users 7 The EC2 Instance Role 8 The Account Access Role 8 Switching Roles 11 AWS Marketplace Considerations 12 Using External IDs 12 Using Wildcards for IAM Roles 12 Great Documentation 13 Summary 13 Contributors 13 Notes 13 Abstract This paper discusses how applications in AWS Marketplace that require access to AWS resources c an use AWS Identity and Access Management (IAM) roles f or authentication to help protect customers from potential security vulnerabilities ArchivedAmazon Web Services – Managing Access to Resources in AWS Marketplace July 2016 Page 4 of 13 Overview Applications in AWS Marketplace that require access to Amazon Web Services (AWS) resources must follow security best practices when accessing AWS to help protect customers from potential security vulnerabilities Typically application authors will use a combination of access and secret keys to authenticate against AWS resources However for AWS Marketplace 1 we require application authors to use AWS Identity and Access Management (IAM)2 roles and do not permit the use of access or secret keys This requirement affects two types of applications: applications that interact with AWS resources to operate and applications that interact with AWS resources on behalf of specific users either in the same or in different AWS accounts When an application requires access to AWS resources to operate temporary credentials can be obtained by using IAM roles for Amazon Elastic Compute Cloud (Amazon EC2) instances 3 Applications can then interact with AWS resources without needing to store secure and manage a user’s access keys When an application needs to access AWS resources on behalf of different users either in the same or in different AWS accounts the same technique can be applied IAM roles can be used to access both the resources required by the application and the resources the application may access on behalf of a user By using IAM roles instead of IAM users for both applicationspecific and user specific access you can remove the need for customers to distribute and manage access keys The following sections explain how you 
can adopt this strategy Accessing ApplicationSpecific Resources When an application needs to interact with AWS resources access should be provided by using IAM roles and not IAM users For example if an application needs to access an Amazon DynamoDB4 database and an Amazon Simple Storage Service ( S3)5 bucket access to these resources are not userspecific ArchivedAmazon Web Services – Managing Access to Resources in AWS Marketplace July 2016 Page 5 of 13 Figure 1: Sample architecture for accessing applicationspecific resources The EC2 Instance Role The EC2 instance is started with an instance role attached This role has a policy that grants access to the DynamoDB database and the S3 b ucket within the same account When making API calls to Amazon S3 your application must retrieve the temporary credentials from the IAM role and use those credentials You can retrieve these credentials from the instance metadata (http://169254169254/latest/metadata/iam/security credentials/rolename) If you are using an AWS SDK the AWS Command Line Interface (AWS CLI )6 or AWS Tools for Windows PowerShell 7 these credentials will be obtained automatically Using roles in this way has several benefits Because role credentials are temporary and rotated automatically you don't have to manage credentials and you don't have to worry about longterm security risks To create and use an IAM instance role: 1 Create a new instance role 2 Add a trust relationship that allows ec2amazonawscom to assume the role ArchivedAmazon Web Services – Managing Access to Resources in AWS Marketplace July 2016 Page 6 of 13 3 Create a new policy that specif ies the permissions required 4 Add the new policy to the new instance role 5 Create a new EC2 instance that specifies the IAM role 6 Build your app by using one of the AWS SDKs Do not specify credentials when calling methods because temporary credentials will be automatically added by the SDK For more detailed instructions s ee IAM Roles for Amazon EC2 in the IAM documentation8 Note You can also configure launch settings used by Auto Scaling groups to use IAM roles In our example we’ll create the instance role with the following trust relationship: { "Version": "2008 1017" "Statement": [ { "Effect": "Allow" "Principal": { "Service": "ec2amazonawscom" } "Action": "sts:AssumeRole" } ] } Add the AmazonDynamoDBFullAccess and AmazonS3FullAccess policies to the IAM role and then create the EC2 instance by specifying the role ArchivedAmazon Web Services – Managing Access to Resources in AWS Marketplace July 2016 Page 7 of 13 Accessing Resources on Behalf of Users To illustrate the scenario of accessing AWS resources on behalf of specific users consider an application that processes images stored in S3 buckets on behalf of a user The application itself might use services such as DynamoDB for storin g configuration and job status The following diagram shows the architecture Figure 2: Sample architecture for accessing AWS resources on behalf of users In this scenario the EC2 instance hosting the application would use an instance profile that gives specific permissions to DynamoDB When accessing Amazon S3 resources on behalf of the user the application would switch to a different IAM role: a role that was set up by the user with specific permission to access the S3 buckets This method would allow an application to access resources on behalf of different users without the need to store credentials Users would still need to create IAM policies and IAM roles but this is no different from creating IAM 
users and IAM roles for the same reason There are two IAM roles in play: EC2 instance r ole (application role) – This is the role the application uses to obtain temporary credentials to access applicationspecific resources such as the Dynamo DB database Account access r oles (user roles) – These are the roles the application uses to obtain temporary credentials to access resources for specific users of the application ArchivedAmazon Web Services – Managing Access to Resources in AWS Marketplace July 2016 Page 8 of 13 Figure 3: Roles and policies The EC2 Instance Role The EC2 instance role would be configured in the same way as in the first scenario The Account Access Role Since the application can also access S3 buckets and objects from other AWS accounts it is tempting to maintain a list of credentials to access these AWS resources; however the same technique of using roles and temporary credentials is preferred This strategy again removes the need for the application to store anything but benign information or handle key rotation scenarios Using roles across accounts is no more difficult to set up than creating users and assigning polici es but it requires a few extra steps: 1 In the target account (the account that contains the AWS resources): a Create a new IAM r ole b Add a trust relationship that specifies the root of the application hosting account as the principal Include a condition that specifies an external ID ArchivedAmazon Web Services – Managing Access to Resources in AWS Marketplace July 2016 Page 9 of 13 c Create a new policy that spec ifies the permissions required and attach it to the role 2 In the application hosting account (the account where the application is hosted): a Create a new policy that specifies that the sts:AssumeRole action is allowed to the role defined in the target account b Attach the new policy to the instance role In the target account we can create a role named myuserrole with the following trust relationship: { "Version": "2012 1017" "Statement": [ { "Effect": "Allow" "Principal": { "AWS": [ "arn:aws:iam::111111111111 :root” ] } "Action": "sts:AssumeRole" "Condition": { "StringEquals": { "sts:ExternalId": " myapp" } } } ] } Note that the account number 111111111111 is used in the principal Amazon Resource Name (ARN) to ensure that only IAM users and roles from that account can assume this role Furthermore the inclusion of an sts:ExternalId condition means that the caller also needs this information to complete the AssumeRole function See the code sample later in this paper for information on how this condition is used ArchivedAmazon Web Services – Managing Access to Resources in AWS Marketplace July 2016 Page 10 of 13 The permissions added to the role permit access to specific S3 buckets It is good practice to be explicit in permissions rather than using wildcards The following is an example of the permissions added: { "Version": "2012 1017" "Statement": [ { "Effect": "Allow" "Action": [ "s3:ListBucket" ] "Resource": [ "arn:aws:s3::: myBucket1" "arn:aws:s3::: myBucket2" ] } ] } Back in the application hosting account we need to add a new permission to the role to allow it to assume the role in the target account: { "Version": "2012 1017" "Statement": { "Effect": "Allow" "Action": "sts:AssumeRole" "Resource": "arn:aws:iam:: 222222222222: role/my userrole" } } You can use a wildcard in the application hosting account since the permissions need to be explicitly defined in the target account This also allows you to access roles across multiple AWS 
accounts:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::*:role/myuserrole"
  }
}

Switching Roles

In the application, we do not need to code anything special to use the instance role and the permissions it gives us. However, to access the S3 buckets in the other AWS accounts, we will need to assume the new role and use the temporary credentials for that role in our SDK calls. The following code snippet shows a Node.js example:

1  var accountid = '222222222222';
2  var rolename = 'myuserrole';
3  var externalId = 'myapp';
4  var sts = new AWS.STS();
5  var stsparams = {
6    RoleArn: 'arn:aws:iam::' + accountid + ':role/' + rolename,
7    RoleSessionName: 'myappsession',
8    ExternalId: externalId,
9    DurationSeconds: 3600
10 };
11
12 AWS.config.credentials = new AWS.EC2MetadataCredentials();
13 var tempCredentials = new AWS.TemporaryCredentials(stsparams);
14 var options = {
15   credentials: tempCredentials
16 };
17 var s3 = new AWS.S3(options);

Lines 5–10 define the parameters (stsparams) for obtaining the temporary credentials on line 13. We build the RoleArn from the parameters defined in lines 1 and 2, along with the externalId defined in line 3. Once we have the temporary credentials, we use them in line 17 to access the S3 resource.

AWS Marketplace Considerations

There are a few things to consider when using IAM roles for AWS Marketplace.

Using External IDs

It is important not to rely on the role name alone; you must specify an external ID to be used by the application. Furthermore, you should allow the customer deploying your application to define the external ID value. You should use a different external ID for each AWS account to limit exposure.

Using Wildcards for IAM Roles

Since users will be supplying roles in different accounts, you can use wildcards to designate target accounts in the application hosting account. You should use a well-known role name, but you can substitute a wildcard for the account number. The following example is a good use of a wildcard:

arn:aws:iam::*:role/myuserrole

The following example is not an acceptable use of a wildcard:

arn:aws:iam::*

Great Documentation

Customers need to create IAM roles and policies in the AWS accounts they want to access, so you should provide explicit documentation that walks customers through creating the correct roles and policies.

Summary

Applications in AWS Marketplace that require access to AWS resources must implement authentication using IAM roles, as discussed in this guide. This helps reduce the potential vulnerabilities within a customer's AWS account by providing access only through temporary credentials.

Contributors

The following individuals and organizations contributed to this document:

David Aiken, partner solutions architect, AWS Marketplace

Notes

1 https://aws.amazon.com/marketplace/
2 https://aws.amazon.com/iam/
3 https://aws.amazon.com/ec2/
4 https://aws.amazon.com/dynamodb/
5 https://aws.amazon.com/s3/
6 https://aws.amazon.com/cli/
7 https://aws.amazon.com/powershell/
8 https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
|
General
|
consultant
|
Best Practices
|
Strategies_for_Migrating_Oracle_Databases_to_AWS
|
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoawshtmlStrategies for Migrating Oracle Databases to AWS First Published December 2014 Updated January 27 202 2 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws html iii Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws html iv Contents Introduction 7 Data migration strategies 7 Onestep migration 8 Twostep migration 8 Minimal downtime migration 9 Nearly continuous data replication 9 Tools used for Oracle Database migration 9 Creating a database on Amazon RDS Amazon EC2 or VMware Cloud on AWS 10 Amazon RDS 11 Amazon EC2 11 Data migration methods 12 Migrating data for small Oracle databases 13 Oracle SQL Developer database copy 14 Oracle materialized views 15 Oracle S QL*Loader 17 Oracle Export and Import utilities 21 Migrating data for large Oracle databases 22 Data migration using Oracle Data Pump 23 Data migration using Oracle external tables 34 Data migration using Oracle RMAN 35 Data replication using AWS Database Migration Service 37 Data replication using Oracle GoldenGate 38 Setting up Oracle GoldenGate Hub on Amazon EC2 41 Setting up the source database for use with Oracle GoldenGate 43 Setting up the destination database for use with Oracle GoldenGate 43 Working with the Extract and Replicat utilities of Oracle GoldenGate 44 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws html v Running the Extract process of Oracle GoldenGate 44 Transferring files to AWS 47 AWS DataSync 47 AWS Storage Gateway 47 Amazon RDS integration with S3 48 Tsunami UDP 48 AWS Snow Family 48 Conclusion 49 Contributors 49 Further reading 49 Document versions 50 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws html vi Abstract Amazon Web Services (AWS) provides a comprehensive set of services and tools for deploying enterprise grade solutions in a rapid reliable and cost effective manner Oracle Database is a widely used relational database management system that is deployed in enterprises of all sizes It manage s various forms of data in many phases of business transactions This whitepaper de scribe s the preferred methods 
for migrating an Oracle Database to AWS and helps you choose the method that is best for your business This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 7 Introduction This whitepaper presents best practices and methods fo r migrating Oracle Database from servers that are on premises or in your data center to AWS Data unlike application binaries cannot be recreated or reinstalled so you should carefully plan your data migr ation and base it on proven best practices AWS offers its customers the flexibility of running Oracle Database on Amazon Relational Database Service (Amazon RDS) the managed database service in the cloud as we ll as Amazon Elastic Compute Cloud (Amazon EC2): • Amazon RDS makes it simple to set up operate and scale a relational database in the cloud It provides cost efficient resizable capacity for an open standard relational database and manages common database administration tasks • Amazon EC2 provides scalable computing ca pacity in the cloud Using Amazon EC2 removes the need to invest in hardware up front so you can develop and deploy applications faster You can use Amazon EC2 to launch as many or as few virtual servers as you need configure security and networking and manage storage Running the database on Amazon EC2 is very similar to running the database on your own servers Depending on whether you choose to run your Oracle Database on Amazon EC2 or Amazon RDS the process for data migration can differ For example users don’t have OSlevel access in Amazon RDS instances It ’s important to understand the different possible strategies so you can choose the one that best fits your need s Data migration strategies The migration strategy you choose depends on several factors: • The size of the database • Network connectivity between the source server and AWS • The version and edition of your Oracle Database software • The database options tools and utilities that are available • The amount of time that is available for migration This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 8 • Whether the migration and switchover to AWS will be done in one step or a sequence of steps over time The following sections describe some common migration strategies Onestep migration Onestep migration is a good option for small databases tha t can be shut down for 24 to 72 hours During the shut down period all the data from the source database is extracted and the extracted data is migrated to the destination database in AWS The destination database in AWS is tested and validated for data consistency with the source Once all validations have passed the database is switched over to AWS Twostep migration Twostep migration is a commonly used method because it requires only minimal downtime and can be used for databases of any size: 1 The da ta is extracted from the source database at a point in time (preferably during nonpeak usage) and migrated while the database is still up and running Because there is no downtime at this point the migration window can be sufficiently large After you co mplete the data migration you can validate the data in the destination database for 
consistency with the source and test the destination database on AWS for performance connectivity to the applications and any other criteria as needed 2 Data changed in the source database after the initial data migration is propagated to the destination before switchover This step synchronizes the source and destination databases This should be scheduled for a time when the database can be shut down (usually over a few hours late at night on a weekend) During this process there won’t be any more changes to the source database because it will be unavailable to the applications Normally the amount of data that is changed after the first step is small compar ed to the total size of the database so this step will be quick and requires only minimal downtime After all the changed data is migrated you can validate the data in the destination database perform necessary tests and if all tests are passed switc h over to the database in AWS This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 9 Minimal downtime migration Some business situations require database migration with little to no downtime This requires detailed planning and the necessary data replication tools for proper completion These migration method ologies typically involve two components: an initial bulk extract/load followed by the application of any changes that occurred during the time the bulk step took to run After the changes have applied you should validate the migrated data and conduct an y necessary testing The replication process synchronizes the destination database with the source database and continues to replicate all data changes at the source to the destination Synchronous replication can have an effect on the performance of the source database so if a few minutes of downtime for the database is acceptable then you should set up asynchronous replication instead You can switch over to the database in AWS at any time because the source and destination databases will always be in sync There are a number of tools available to help with minimal downtime migration The AWS Database Migration Service (AWS DMS) supports a range of database engines including Oracle running on premise s in EC 2 or on RDS Oracle GoldenGate is another option for real time data replication There are also third party tools available to do the replication Nearly c ontinuous data replication You can us e nearly continuous data replication if the destination database in AWS is used as a clone for reporting and business intelligence (BI) or for disaster recovery (DR) purposes In this case the process is exactly the same as minimal downtime migration ex cept that there is no switchover and the replication never stops Tools used for Oracle Database migration A number of tools and technologies are available for data migration You can use some of these tools interchangeably or you can use other third party tools or open source tools available in the market This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 10 • AWS DMS helps you move databases to and from AWS easily and securely It supports most commercial and open source databases and 
facilitates both homogeneous and heterogeneous migrations AWS DMS offers change data capture technology to keep databases in sync and minimize downtime during a migration It is a manag ed service with no client install required • Oracle Recovery Manager (RMAN) is a tool available from Oracle for performing and managing Oracle Database backups and rest orations RMAN allows full hot or cold backups plus incremental backups RMAN maintains a catalogue of the backups making the restoration process simple and dependable RMAN can also duplicate or clone a database from a backup or from an active database • Oracle Data Pump Export is a versatile utility for exporting and importing data and metadata from or to Oracle databases You can perform Data Pump export/ import on an entire database selective schemas table spaces or database objects Data Pump export/ import also has powerful data filtering capabilities for selective export or import of data • Oracle GoldenGate is a tool for replicating data between a source and one or more destination databases You can use it to build high availability architectures You can also use it to perform real time data integration transactional change data capture and replication in heterogeneous IT environments • Oracle SQL Developer is a no cost GUI tool available from Oracle for data manipulation development an d management This Java based tool is available for Microsoft Windows Linux or iOS X • Oracle SQL*Loader is a bulk data load utility available from Oracle for loading data from external files into a database SQL*Loader is included as part of the full database client installation Creating a database on Amazon RDS Amazon EC2 or VMware Cloud on AWS To migrate your data to AWS you need a source database (either onpremises or in a data center) and a destination database in AWS Based on your business needs you can choose between using Amazon RDS for Oracle or installing and managing the This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 11 database on your own in Amazon EC2 instance To help you choose the servic e that ’s best for your business see the following sections Amazon RDS Many customers prefer Amazon RDS for Oracle because it frees them to focus on application development Amazon RDS automates time consuming database administration tasks including prov isioning backups software patching monitoring and hardware scaling Amazon RDS simplifies the task of running a database by eliminating the need to plan and provision the infrastructure as well as install configure and maintain the database software Amazon RDS for Oracle makes it easy to use replication to enhance availability and reliability for production workloads By using the Multi Availability Zone (AZ) deployment option you can run mission critical workloads with high availability and built in automated failover from your primary database to a synchronously replicated secondary database As with all AWS services no upfront investments are required and you pay only for the resources you use For more information see Amazon RDS for Oracle To use Amazon RDS log in to your AWS account and start an Amazon RDS Oracle instance from the AWS Management Console A good strategy is to treat this as an interim migration database from which the final database will be created Do not enable the Multi AZ feature 
until the data migration is completely done because replication for Multi AZ will hinder data migration performance Be sure to give the instance enough space to store the import data files Typically this requires you to provision twice as much capacity as the size of the database Amazon EC2 Alternatively you can run an Oracle database directly on Amazon EC2 which gives you full control over se tup of the entire infrastructure and database environment This option provides a familiar approach but also requires you to set up configure manage and tune all the components such as Amazon EC2 instances networking storage volumes scalability and security as needed (based on AWS architecture best practices) For more information see the Advanced Architectures for Oracle Database on Amazon EC 2 whitepaper for guidance about the appropriate architecture to choose and for installation and configuration instructions This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 12 VMware Cloud on AWS VMware Cloud on AWS is the preferred service for AWS for all vSphere based workloads VMware Cloud on AWS brings the VMware software designed data center (SDDC ) software to the AWS Cloud with op timized access to native AWS services If your Oracle workload runs on VMware on premises you can easily migrate the Oracle workloads to the AWS C loud using VMware Cloud on AWS VMware Cloud on AWS has the capability to run Oracle Real Application Clusters (RAC) workloads It allows multi cast protocols and provides shared storage capability across VMs running in VMware Cloud on AWS SDDC VMware provides native migration capabiliti es such as VMware VMotion and VMware HCX to move virtual machines ( VMs) from on premises to the VMware Cloud on AWS Depending on Orac le workload performance patterns service level agreement ( SLA) and the bandwidth availability you can choose to migrate the VM either live or using cold migration methods Data migration methods The remainder of this whitepaper provides details about ea ch method for migrating data from Oracle Database to AWS Before you get to the details you can scan the following table for a quick summary of each method Each method depends upon business recovery point objective (RPO) recovery time objective (RTO) a nd overall availability SLA Migration administrators must evaluate and map these business agreements with the appropriate methods Choose the method depending upon your application SLA RTO RPO tool and license availability Table 1 – Migration methods and tools Data migration method Database size Works for: Recommended for: AWS Database Migration Service Any size Amazon RDS Amazon EC2 Minimal downtime migration Database size limited by internet bandwidth Oracle SQL Developer Database c opy Up to 200 MB Amazon RDS Amazon EC2 Small databases with any number of objects This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 13 Data migration method Database size Works for: Recommended for: Oracle Materialized Views Up to 500 MB Amazon RDS Amazon EC2 Small databases with limited number of objects Oracle SQL*Loader Up to 10 GB Amazon RDS Amazon EC2 Small to medium size 
databases with limited number of objects Oracle Export and Import Oracle Utilities Up to 10 GB Amazon RDS Amazon EC2 Small to medium size databases with large number of objects Oracle Data Pump Up to 5 TB Amazon RDS Amazon EC2 VMware Cloud on AWS Preferred method for any database from 10 GB to 5 TB External tables Up to 1 TB Amazon RDS Amazon EC2 VMware Cloud on AWS Scenarios where this is the standard method in use Oracle RMAN Any size Amazon EC2 VMware Cloud on AWS Databases over 5 TB or if database backup is already in Amazon Simple Storage Service (Amazon S3) Oracle GoldenGate Any size Amazon RDS Amazon EC2 VMware Cloud on AWS Minimal downtime migration Migrating data for small Oracle databases You should base your strategy for data migration on the database size reliability and bandwidth of your network connection to AWS and the amount of time available for migration Many Oracle databases tend to be medium to large in size ranging anywhere from 10 GB to 5 TB with some as large as 20 TB or more However you also might need to migrate smaller databases This is especially true for phased migrations This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Orac le Databases to AWS 14 where the databases are broken up by schema making each migration effort small in size If the source database is under 10 GB and if you have a reli able high speed internet connection you can use one of the following methods for your data migration All the methods discussed in this section work with Amazon RDS Oracle or Oracle Database running on Amazon EC2 Note : The 10 GB size is just a guideline; you can use the same methods for larger databases as well The migration time varies based on the data size and the network throughput However if your database size exceeds 50 GB you should use one of the methods listed in the Migrating data for large Oracle databases section in this whitepaper Oracle SQL Developer database copy If the total size of the data you are migrating is under 200 MB the simplest solution is to use the Oracle SQL Developer Database Copy function Oracle SQL Developer is a no cost GUI tool available from Oracle for data manipulation development and management This easy touse Java based tool is available for Microsoft Windows Linux or Mac OS X With this method data transfer from a source database to a destination database is done directly without any intermediary steps Because SQL Developer can handle a large number of ob jects it can comfortably migrate small databases even if the database contains numerous objects You will need a reliable network connection between the source database and the destination database to use this method Keep in mind that this method does not encrypt data during transfer To migrate a database using the Oracle SQL Developer Database Copy function perform the following steps: 1 Install Oracle SQL Developer 2 Connect to your source and destination databases 3 From the Tools menu of Oracle SQL Developer choose the Database Copy command to copy your data to your Amazon RDS or Amazon EC2 instance 4 Follow the steps in the Database Copy Wizard You can choose the objects you want to migrate and use filters to limit the data This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies 
migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 15 The following screenshot shows the Database Copy Wizard The Database Copy Wizard in the Oracle SQL Developer guides you through your data transfer Oracle materialized views You can use Oracle Database materialized views to migrate data to Oracle databases on AWS for either Amazon RDS or Amazon EC2 This method is well suited for databases under 500 MB Because materialized views are available only in Oracle Database Enterprise Edition this method works only if Oracle Database Enterprise Edition is used for both the source database and the destination database With materialized view replication you can do a onetime migration of data to AWS while keeping th e destination tables continuously in sync with the source The result is a minimal downtime cut over Replication occurs over a database link between the source and destination databases For the initial load you must do a full refresh so that all the dat a in the source tables gets moved to the destination tables This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 16 Important : Because the data is transferred over a database link the source and destination databases must be able to connect to each other over SQL*Net If your network security design doesn’t a llow such a connection then you cannot use this meth od Unlike the preceding method (the Oracle SQL Developer Database Copy function) in which you copy an entire database for this method you must create a materialized view for each table that you want to migrate This gives you the flexibility of selectively moving tables to the database in AWS However it also makes the process more cumbersome if you need to migrate a large number of tables For this reason this method is better suited for migra ting a limited number of tables For best results with this method complete the following steps Assume the source database user ID is SourceUser with password PASS : 1 Create a new user in the Amazon RDS or Amazon EC2 database with sufficient privileges Create user MV_DBLink_AWSUser identified by password 2 Create a database link to the source database CREATE DATABASE LINK SourceDB_lnk CONNECT TO SourceUser IDENTIFIED BY PASS USING '(description=(address=(protocol=tcp) (host= crmdbacmecorpcom) (port=1521 )) (connect_data=(sid=ORCLCRM)))’ 3 Test the database link to make sure you can access the tables in the source database from the database in AWS through the database link Select * from tab@ SourceDB_lnk 4 Log in to the source database and create a materializ ed view log for each table that you want to migrate CREATE MATERIALIZED VIEW LOG ON customers This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 17 5 In the destination database in AWS create materialized views for each table for which you set up a materialized view log in the source database CREATE MATERIALIZED VIEW customer BUILD IMMEDIATE REFRESH FAST AS SELECT * FROM customer@ SourceDB_lnk Oracle SQL*Loader Oracle SQL*Loader is well suited for small to moderate databases under 10 GB that contain a limited number of objects 
Because the process inv olved in exporting from a source database and loading to a destination database is specific to a schema you should use this process for one schema at a time If the database contains multiple schemas you need to repeat the process for each schema This m ethod can be a good choice even if the total database size is large because you can do the import in multiple phases (one schema at a time) You can use this method for Oracle Database on either Amazon RDS or Amazon EC2 and you can choose between the fol lowing two options: Option 1 1 Extract data from the source database such as into flat files with column and row delimiters 2 Create tables in the destination database exactly like the source (use a generated script) 3 Using SQL*Loader connect to the destina tion database from the source machine and import the data Option 2 1 Extract data from the source database such as into flat files with column and row delimiters 2 Compress and encrypt the files 3 Launch an Amazon EC2 instance and install the full Oracle client on it (for SQL*Loader) For the database on Amazon EC2 this c an be the same instance where the destination database is located For Amazon RDS this is a temporary instance 4 Transport the files to the Amazon EC2 instance This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 18 5 Decompress and unen crypt files in the Amazon EC2 instance 6 Create tables in the destination database exactly like the source (use a generated script) 7 Using SQL*Loader connect to the destination database from the temporary Amazon EC2 instance and import the data Use the first option if your database size is small if you have direct SQL*Net access to the destination database in AWS and if data security is not a concern Otherwise use the second option because you can use encryption and compression during the transporta tion phase Compression substantially reduces the size of the files making data transportation much faster You can use either SQL*Plus or SQL Developer to perform data extraction which is the first step in both options For SQL*Plus use a query in a SQL script file and send the output directly to a text file as shown in the follo wing example: set pagesize 0 set head off set feed off set line 200 SELECT col1|| '|' ||col2|| '|' ||col3|| '|' ||col4|| '|' ||col5 from SCHEMATABLE; exit; To create encrypted and compressed output in the second option (see step 2 of the preceding Option 2 procedure) you can directly pipe the output to a zip utility You can also extract data by using Oracle SQL Developer: 1 In the Connections pane select the tables you want to extract data from 2 From the Tools menu choose the Database Export command as shown in the following screenshot This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 19 Database export command 3 On the Source/Destination page of the Export Wizard (see the next screenshot) select the Export DDL option to generate the script for creating the table which will simplify the entire process 4 In the Format dropdown on the same page choose loader 5 In the Save As box on the same page choose Separate Files This version 
has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 20 Export Wizard options on the Source/Destination page Continue to follow the Export Wizard steps to complete the export The Export Wizard helps you create the data file control file and table creation script in one step for multiple tables in a schema making it easier than using Oracle SQL*Plus to do the same tasks If you use Option 1 as specified you can run Oracle SQL*Loader from the source environment using the extracted data and control files to import data into the destination database To do this use the following command: sqlldr userid=userID/password@$service control=controlctl log=loadlo g bad=loadbad discard=loaddsc data=loaddat direct=y skip_index_maintenance=true errors=0 If you use Option 2 then you need an Amazon EC2 instance with the full Oracle client installed Additionally you need to upload the data files to that Amazon EC2 instance This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 21 For the database on Amazon EC2 this could be the same Amazon EC2 instance where the destination database is located For Amazon RDS this will be a temporary Amazon EC2 instance Before you do the upload we recommend that you compress and encry pt your files To do this you can use a combination of TAR and ZIP/GZIP in Linux or a third party utility such as WinZip or 7 Zip After the Amazon EC2 instance is up and running and the files are compress ed and encrypted upload the files to the Amazon EC2 instance using Secure File Transfer Protocol (SFTP) From the Amazon EC2 instance connect to the destination database using Oracle SQL*Plus to ensure you can establish the connection Run the sqlldr command shown in the preceding example for each control file that you have from the extract You can also cre ate a shell/bat script that will run sqlldr for all control files one after the other Note : Enabling skip_index_maintenance=true significantly increase s dataload performance However table indexes are not updated so you will need to rebuild all indexes after the data load is complete Oracle Export and Import utilities Despite being replaced by Oracle Data Pump the original Oracle Export and Import utilities are useful for migrations of databases with si zes less than 10 GB where the data lacks binary float and double data types The import process creates the schema objects so you do not need to run a script to create them beforehand This makes the process well suited for databases with a large number o f small tables You can use this method for Amazon RDS for Oracle and Oracle Database on Amazon EC2 The first step is to export the tables from the source database by using the following command Substitute the user name and password as appropriate: exp userID/password@$service FILE=exp_filedmp LOG=exp_filelog The export process creates a binary dump file that contains both the schema and data for the specified tables You can import the schema and data into a destination database Choose one of the foll owing two options for the next steps: This version has been archived For the latest version of this document visit: 
https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Orac le Databases to AWS 22 Option 1 1 Export data from the source database into a binary dump file using exp 2 Import the data into the destination database by running imp directly from the source server Option 2 1 Export data from the source database into a binary dump file using exp 2 Compress and encrypt the files 3 Launch an Amazon EC2 instance and install the full Oracle client on it (for the emp/imp utility) For the database on Amazon EC2 this could be the same instance where the destination database is located For Amazon RDS this will be a temporary instance 4 Transport the files to the Amazon EC2 instance 5 Decompress and unencrypt the files in the Amazon EC2 instance 6 Import the data into the destination database by running imp If your database size is larger than a gigabyte use Option 2 because it includes compression and encryption This method will also have better import performance For both Option 1 and Option 2 use the following command to import into the destination d atabase: imp userID/password@$service FROMUSER=cust_schema TOUSER=cust_schema FILE=exp_filedmp LOG=imp_filelog There are many optional arguments that can be passed to the exp and imp commands based on your needs For details see the Oracle documentation Migrating data for large Oracle databases For larger databases use one of the methods described in this section rather than one of the methods described in Migrating Data for small Oracle Databases For the purpose of this whitepaper define a large database as any database 10 GB or more This section describes three methods for migrating large databases: This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 23 • Data m igration using Oracle Data Pump – Oracle Data Pump is an excellent tool for migrating large amounts of data and it can be used with databases on either Amazon RDS or Amazon EC2 • Data m igration using Oracle external tables – The process involved in data migration using Oracle external tables is very similar to that of Oracle Data Pump Use this method if you already have processes built around it; otherwise it is better to use the Oracle Data Pump method • Data m igration using Oracle RMAN – Migration using RMAN can be useful if you are already backing up the database to AWS or using the AWS Import/Export service to bring the data to AWS Oracle RMAN can be used only for databases on Amazon EC2 not Amazon RDS Data migration using Oracle Da ta Pump When the size of the data to be migrated exceeds 10 GB Oracle Data Pump is probably the best tool to use for migrating data to AWS This method allows flexible data extraction options a high degree of parallelism and scalable operations which enables highspeed movement of data and metadata from one database to another Oracle Data Pump is introduced with Oracle 10 g as a replacement for the original Import/Export tools It is available only on Oracle Database 10 g Release 1 or later You can use the Oracle Data Pump method for both Amazon RDS for Oracle and Oracle Database running on Amazon EC2 The process involved is similar for both although Amazon RDS for Oracle requires a few additional steps Unlike the original Import/Export utilities the 
Oracle Data Pump import requires the data files to be available in the database server instance to import them into the database You cannot access the file system in the Amazon RDS instance directly so you need to use one or more Amazon EC2 instances (bridge instances) to transfer files from the source to the Amazon RDS instance and then import that into the Amazon RDS database You need these temporary Amazon EC2 bridge instances only for the duration of the import; you can end the instance s soon after the import is done Use Amazon Linux based instances for this purpose You do not need an Oracle Database installation for an Amazon EC2 bridge instance; you only need to install the Oracle Instance Client This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 24 Note : To use this method your Amazo n RDS database must be version 11203 or later The f ollowing is the overall process for data migration using Oracle Data Pump for Oracle Database on Oracle for Amazon EC2 and Amazon RDS Migrating data to a database in Amazon EC2 1 Use Oracle Data Pump to export data from the source database as multiple compressed and encrypted files 2 Use Tsunami UDP to move the files to an Amazon EC2 instance running the destination Oracle database in AWS 3 Import that data into the destination database using the Oracle Data Pump import feature Migrating data to a database in Amazon RDS 1 Use Oracle Data Pump to export data from the source database as multiple files 2 Use Tsunami UDP to move the files to Amazon EC2 bridge instances in AWS 3 Using the provided Perl script that makes use of the UTL_FILE package move the data files to the Amazon RDS instance 4 Import the data into the Amazon RDS database using a PL/SQL script that utilizes the DBMS_DATAPUMP package (an example is provided at the end of this section) Using Oracle Data Pump to export data on the source instance When you export data from a large database you should run multiple threads in parallel and specify a size for each file This speeds up the export and also makes files available quickly for the next step of the process There is no need to wait for the entire database to be exported before moving to the next step As each file completes it can be moved to the next step You can enable compre ssion by using the parameter COMPRESSION=ALL which substantially reduces the size of the extract files You can encrypt files by providing a password or by using an Oracle wallet and specifying the parameter ENCRYPTION= all To learn more about the compr ession and encryption options see the Oracle Data Pump documentation This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 25 The following example shows the export of a 500 GB database running eight threads in parallel with each output file up to a maximum of 20 GB This creates 22 files totaling 175 GB The total file size is significantly smaller than the actual source database size because of the compression option of Oracle Data Pump: expdp demoreinv/demo f ull=y dumpfile=data_pump_exp1:reinvexp1%Udmp data_pump_exp2:reinvexp2%Udmp data_pump_exp3:reinvexp3%Udmp filesize=20G parallel=8 
logfile=data_pump_exp1:reinvexpdplog compression=all ENCRYPTION= all ENCRYPTION_PASSWORD=encryption_password job_name=r eInvExp Using Oracle Data Pump to export data from the source database instance Spreading the output files across different disks enhances input/output ( I/O) performance In the following examples three different disks are used to avoid I/O contention This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 26 Parallel run in multiple threads writing to three different disks Dump files generated in each disk This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 27 The most time consuming part of this entire process is the file transportation to AWS so optimizing the file transport significantly reduces the time required for the data migration The following steps show how to optimize the file transport: 1 Compress the dump files during the export 2 Serialize th e file transport in parallel Serialization here means sending the files one after the other; you don’t need to wait for the export to finish before uploading the files to AWS Uploading many of these files in parallel (if enough bandwidth is available) fu rther improves the performance We recommend that you parallel upload as many files as there are disks being used and use the same number of Amazon EC2 bridge instances to receive those files in AWS 3 Use Tsunami UDP or a commercial wide area network ( WAN ) accelerator to upload the data files to the Amazon EC2 instances Using Tsunami to upload files to Amazon EC2 The following example shows how to install Tsunami on both the source database server and the Amazon EC2 instance: yum y install make yum y install automake yum y install gcc yum y install autoconf yum y install cvs wget http://sourceforgenet/projects/tsunami udp/files/late st/download?_test=goal tar xzf tsunami*gz cd tsunamiudp* /recompilesh make install After you’ve installed Tsunami open port 46224 to enable Tsunami communication On the source database server start a Tsunami server as shown in the following example If you do parallel upload then you need to start multiple Tsunami servers: This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 28 cd/mnt/expdisk1 tsunamid * On the destination Amazon EC2 instances start a Tsunami server as shown in the following example If you do multiple parallel f ile uploads then you need to start a Tsunami server on each Amazon EC2 bridge instance If you do not use parallel file uploads and if the migration is to an Oracle database on Amazon EC2 (not Amazon RDS) then you can avoid the Amazon EC2 bridge instanc e Instead you can upload the files directly to the Amazon EC2 instance where the database is running If the destination database is Amazon RDS for Oracle then the bridge instances are necessary because a Tsunami server cannot be run on the Amazon RDS s erver: cd /mnt/data_files tsunami tsunami> connect sourcedbserver tsunami> get * From this point 
forward the process differs for a database on Amazon EC2 versus a database on Amazon RDS The following sections show the processes for each service Next steps for a database on an Amazon EC2 instance If you used one or more Amazon EC2 bridge instances in the preceding steps then bring all the dump files from the Amazon EC2 bridge instances into the Amazon EC2 database instance The easiest w ay to do this is to detach the Amazon Elastic Block Store (Amazon EBS) volumes that contain the files from the Amazon EC2 bridge instances and connect them to the Amazon EC2 database instance Once all the dump files are available in the Amazon EC2 databa se instance use the Oracle Data Pump import feature to get the data into the destination Oracle database on Amazon EC2 as shown in the following example: This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 29 impdp demoreinv/demo full=y DIRECTORY=DPUMP_DIR dumpfile= reinvexp1%Udmpreinvexp2%Udmp reinvexp3%Udmp parallel=8 logfile=DPimplog ENCRYPTION_PASSWORD=encryption_password job_name=DPImp This imports all data into the database Check the log file to make sure everything went well and validate the data to confirm that all the data was migrated as expected Next steps for a database on Amazon RDS Because Amazon RDS is a managed service the Amazon RDS instance does not provide access to the file system However an Oracle RDS instance has an externally accessible Oracle directory object named DATA_PUMP_DIR You can copy Oracle Data Pump dump files to this directory by using an Oracle UTL_FILE package Amazon RDS supports S3 integration as well You could transfer files between the S3 bucket and Amazon RDS instance through S3 integration of RDS The S3 integration option is recommended when you want to transfer moderately large files to the RDS instance dba_directories Alternatively you can use a Perl script to move the files from the bridge instances to the DATA_PUMP_DIR of the Amazon RDS instance Preparing a bridge Instance To prepare a bridge instance make sure that Perl DBI and Oracle DBD modules are installed so that Perl can connect to the database You can use the following commands to verify if the modules are installed: $perl e 'use DBI; print $DBI::VERSION" \n";' $perl e 'use DBD::Oracle; print $DBD::Oracle::VERSION" \n";' If the modules are not already installed use the following process below to install them before proceeding further: 1 Downloa d Oracle Database Instant Client from the Oracle website and unzip it into ORACLE_HOME 2 Set up the environment variable as shown in the following example: This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Orac le Databases to AWS 30 $ export ORACLE_BASE=$HOME/oracle $ export ORACLE_HOME=$ORACLE_BASE/instantclient_11_2 $ export PATH=$ORACLE_HOME:$PATH $ export TNS_ADMIN=$HOME/etc $ export LD_LIBRARY_PATH=$ORACLE_HOME:$LD_LIBRARY_PATH 3 Download and unzip DBD::Oracle as shown in the following example: $ wget http://wwwcpanor g/authors/id/P/PY/PYTHIAN/DBD Oracle 174targz $ tar xzf DBDOracle174targz $ $ cd DBDOracle174 4 Install DBD::Oracle as shown in the following example: $ mkdir $ORACLE_HOME/log $ perl MakefilePL $ 
make $ make install Transferring files to an Amazon RDS instance To transfer files to an Amazon RDS instance you need an Amazon RDS instance with at least twice as much storage as the actual database because it needs to have space for the database and the Oracle Data Pump d ump files After the import is successfully completed you can delete the dump files so that space can be utilized It might be a better approach to use an Amazon RDS instance solely for data migration Once the data is fully imported take a snapshot of RDS DB Create a new Amazon RDS instance using the snapshot and then decommission the data migration instance Use a single Availability Zone instance for data migration This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 31 The following example shows a basic Perl script to transfer files to an Amazon RDS instance Make changes as necessary Because this script runs in a single thread it uses only a small portion of the network bandwidth You can run multiple instances of the script in parallel for a quicker file transfer to the Amazon RDS insta nce but make sure to load only one file per process so that there won’t be any overwriting and data corruption If you have used multiple bridge instances you can run this script from all of the bridge instances in parallel thereby expediting file trans fer into the Amazon RDS instance: # RDS instance info my $RDS_PORT=4080; my $RDS_HOST="myrdshostxxxus east1devords devamazonawscom"; my $RDS_LOGIN="orauser/orapwd"; my $RDS_SID="myoradb"; my $dirname = "DATA_PUMP_DIR"; my $fname= $ARGV[0]; my $data = ‘‘dummy’’; my $chunk = 8192; my $sql_open = "BEGIN perl_globalfh := utl_filefopen(:dirname :fname 'wb' :chunk); END;"; my $sql_write = "BEGIN utl_fileput_raw(perl_globalfh :data true); END;"; my $sql_close = "BEGIN utl_filefclos e(perl_globalfh); END;"; my $sql_global = "create or replace package perl_global as fh utl_filefile_type; end;"; my $conn = DBI >connect('dbi:Oracle:host='$RDS_HOST';sid='$RDS_SID';por t='$RDS_PORT$RDS_LOGIN '') || die ( $DBI::errstr " \n") ; This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 32 my $updated=$conn >do($sql_global); my $stmt = $conn >prepare ($sql_open); $stmt>bind_param_inout(":dirname" \$dirname 12); $stmt>bind_param_inout(":fname" \$fname 12); $stmt>bind_param_inout(":chunk" \$chunk 4); $stmt>execute() || die ( $DBI::errstr " \n"); open (INF $fname) || die " \nCan't open $fname for reading: $!\n"; binmode(INF); $stmt = $conn >prepare ($sql_write); my %attrib = ('ora_type’’24’); my $val=1; while ($val > 0) { $val = read (INF $data $chunk); $stmt>bind_param(":data" $data \%attrib); $stmt>execute() || die ( $DBI::errstr " \n"); }; die "Problem copying: $! \n" if $!; close INF || die "Can't close $fname: $! 
\n"; $stmt = $co nn>prepare ($sql_close); $stmt>execute() || die ( $DBI::errstr " \n"); You can check the list of files in the DBMS_DATAPUMP directory using the following query: SELECT * from table(RDSADMINRDS_FILE_UTILLISTDIR('DATA_PUMP_DIR')); This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 33 Once all files are s uccessfully transferred to the Amazon RDS instance connect to the Amazon RDS database as a database administrator (DBA) user and submit a job by using a PL/SQL script that uses DBMS_DATAPUMP to import the files into the database as shown in the following PL/SQL script Make any changes as necessary: Declare h1 NUMBER; begin h1 := dbms_datapumpopen (operation => 'IMPORT' job_mode => 'FULL' job_name => 'REINVIMP' version => 'COMPATIBLE'); dbms_datapumpset_parallel(handle => h1 degree => 8); dbms_datapumpadd_file(handle => h1 filename => 'IMPORTLOG' directory => 'DATA_PUMP_DIR' filetype => 3); dbms_datapumpset_parameter(handle => h1 name => 'KEEP_MASTER' value => 0); dbms_datapumpadd_file(handle => h1 filename => 'reinvexp1%Udmp' directory => 'DATA_PUMP_DIR' filetype => 1); dbms_datapumpadd_file(handle => h1 filename => 'reinvexp2%Udmp' directory => 'DATA_PUMP_DIR' filetype => 1); dbms_datapumpadd_file(handle => h1 filename => 'reinvexp3%Udmp' directory => 'DATA_PUMP_DIR' filetype => 1); dbms_data pumpset_parameter(handle => h1 name => 'INCLUDE_METADATA' value => 1); dbms_datapumpset_parameter(handle => h1 name => 'DATA_ACCESS_METHOD' value => 'AUTOMATIC'); dbms_datapumpset_parameter(handle => h1 name => 'REUSE_DATAFILES' value => 0 ); This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 34 dbms_datapumpset_parameter(handle => h1 name => 'SKIP_UNUSABLE_INDEXES' value => 0); dbms_datapumpstart_job(handle => h1 skip_current => 0 abort_step => 0); dbms_datapumpdetach(handle => h1); end; / Once the job is complete check the Amazon RDS database to make sure all the data has been successfully imported At this point you can delete all the dump files using UTL_FILEFREMOVE to reclaim disk space Data migration using Oracle external tables Oracle external tables are a feature of Oracle Database that allows you to query data in a flat file as if the file were an Oracle table The process for using Oracle external tables for data migration to AWS is almost exactly the same as the one used for Ora cle Data Pump The Oracle Data Pump based method is better for large database migrations The external tables method is useful if your current process uses this method and you don’t want to switch to the Oracle Data Pump based method Following are the mai n steps: 1 Move the external table files to RDS DATA_PUMP_DIR 2 Create external tables using the files loaded 3 Import data from the external tables to the database tables Depending on the size of the data file you can choose to either write the file directly to RDS DATA_PUMP_DIR from an on premises server or use an Amazon EC2 bridge instance as in the case of the Data Pump based method If the file size is large and you choose to use a bridge instance use compression and encryption on the files as well as Tsunami UDP or a WAN accelerator 
exactly as described for the Data Pump based migration To learn more about Oracle external tables see External Tables Concepts in the Oracle documentation This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 35 Data migration using Oracle RMAN If you are planning to migrat e the entire database and your destination database is self managed on Amazon EC2 you can use Oracle RMAN to migrate data Data migration by using Oracle Data Pump is faster and more flexible than data migration using Oracle RMAN; however Oracle RMAN is a better option for the following cases: • You already have an RMAN backup available in Amazon S3 that can be used If you are looking for options to migrate RMAN backups to S3 consider AWS Storage Gateway or AWS DataSync services • The database is very large (greater than 5 TB) and you are planning to use AWS Import/Export • You need to m ake numerous incremental data changes before switching over to the database on AWS Note : This method is for Amazon EC2 and VMware Cloud on AWS You cannot use this method if your destination database is Amazon RDS To migrate data using Oracle RMAN: 1 Create a full backup of the source database using RMAN 2 Encrypt and compress the files 3 Transport files to AWS using the most optimal method 4 Restore the RMAN backup to the destination database 5 Capture incremental backups from the source and apply them to the destination database until switchover can be performed This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 36 Creating a full backup of the source database Using RMAN Create a backup of the source database using RMAN: $ rman target=/ RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON; RMAN> BACKUP DATABASE PLUS ARCHIVELOG If you have a license for the compression and encryption option then you already have the RMAN backups created as encrypted and compressed files Otherwise after the backup files are created encrypt and compress them using tools such as ZIP 7 Zip or GZIP All subsequent actions occur on the server running the destination database Transporting files to AWS Depending on the size of the database and the time available for migration you can choose the most optimal method for file transportation to AWS For small files consider AWS DataSync For moderate to large databases between 100 GB to 5 TB Tsunami UDP is an option as described in Using Tsunami to upload files to EC2 You can achieve the same results using commercial third party WAN acceleration tools For very large databases over 5 TB consider using AWS Storage Gateway or AWS Snow Family devices for offline file transfer Migrating data to Oracle Database on AWS There are two ways to migrate data to a destination database You can create a new database and restore from the RMAN backup or you can create a duplicate database from the RMAN bac kup Creating a duplicate database is easier to perform To create a duplicate database move the transported files to a location accessible to the Oracle Database instance on Amazon EC2 Start the target instance in NOMOUNT mode Now use RMAN to connect to the destination database For this example we are not connecting to the source 
database or the RMAN catalog so use the following command : This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 37 $ rman AUXILIARY / DUPLICATE TARGET DATABASE TO DBONEC2 SPFILE NOFILENAMECHECK; The duration of this process varies based on the size of the database and the type of Amazon EC2 instance For better performance use Amazon Elastic Block Store (Amazon EBS) General Purpose ( SSD) volumes for the RMAN backup files For more information about SSD volume types see Introducing the Amazon EBS General Purpose (SSD) volume type Once the process is finished RMAN produces a completion message and you now have your duplicate instance After verification you can delete the Amazon EBS volumes containing the RMAN backup files We recommend that you take a snapshot of the volumes for later use before deleting them if needed Data replication using AWS Database Migration Service AWS Database Migration Service (AWS DMS) can support a number of migration and replication strategies including a bulk upload at a point in time a minimal downtime migration levera ging Change Data Capture (CDC) or migration of only a subset of the data AWS DMS supports sources and targets in EC2 RDS and on premise s Because no client install is required the following steps are the same for any combination of the above AWS DMS also offers the ability to migrate data between databases as easily as from Oracle to Oracle The following steps show how to migrate data between Oracle databases using AWS DMS and with minimal downtime: 1 Ensure supplemental logging is enabled on the sour ce database 2 Create the target database and ensure database backups and MultiAZ are turned off if the target is on RDS 3 Perform a no data export of the schema using Oracle SQL Developer or the tool of your choice then apply the schema to the target database This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Orac le Databases to AWS 38 4 Disable triggers foreign keys and secondary indexes (optional) on the target 5 Create a DMS replication instance 6 Specify the source and target endpoints 7 Create a “Migrate existing data and replicate ongoing changes” task mapping your source tables to your target tables (The default task includes all tables ) 8 Start the task 9 After the full load portion of the tasks is complete and the transactions reach a steady state enable triggers foreign keys and secondary indexes 10 Turn on backups and MultiAZ 11 Turn off any applications using the original source database 12 Let the final transactions flow through 13 Point any applications at the new database in AWS and start An alternative method is to use Oracle Data Pump for the initial load and DMS to replicate changes from the Oracle System Change Number ( SCN ) point where data dump stopped More details on using AWS DMS can be found in the documentation To improve the performance of DMS replication the schemas and tables can be grouped into multiple DMS tasks DMS tasks support wildcard entries for the names of the schemas and tables Data replication using Oracle GoldenGate Oracle GoldenGate is a tool for real time change data capture and replication Oracle GoldenGate creates 
trail files that contain the most recently changed data from the source database then pushes these files to the destination database You can use Oracle GoldenGate to perform minimal downtime data migration Oracle GoldenGate is a licensed software from Oracle You can also use it for nearly continuous da ta replication You can use Oracle GoldenGate with both Amazon RDS for Oracle and Oracle Database running on Amazon EC2 The following steps show how to migrate data using Oracle GoldenGate: 1 The Oracle GoldenGate Extract process extracts all the existing data for the first load Extract Pump and Replicat process refers to the GoldenGate Integrated capture mode This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 39 2 The Oracle GoldenGate Pump process transports the extracted data to the Replicat process running in Amazon EC2 3 The Replicat process appl ies the data to the destination database 4 After the first load the process runs continually to capture changed data and applies it to the destination database GoldenGate Replicat is a key part of the entire system You can run it from a server in the sou rce environment but AWS recommend s that you run the Replicat process in an Amazon EC2 instance within AWS for better performance This Amazon EC2 instance is referred to as a GoldenGate Hub You can have multiple GoldenGate Hubs especially if you are mig rating data from one source to multiple destinations Oracle GoldenGate replication data flow process This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 40 Reference architecture for EC2: Oracle GoldenGate replication from onpremis es to Oracle Database on Amazon EC2 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 41 Reference architecture for RDS: Oracle GoldenGate replication from onpremises to RDS Oracle Database on AWS Setting up Oracle GoldenGate Hub on Amazon EC2 To create an Oracle GoldenGate Hub on Amazon EC2 create an Amazon EC2 instance with a full client installation of Oracle DBMS 12c version 12203 and Oracle GoldenGate 12314 Additionally apply Oracle patch 13328193 For more information about instal ling GoldenGate see the Oracle GoldenGate documentation This GoldenGate Hub stores and processes all the data from your source database so make sure that there is enough storage available in this instance to store the trail files It is a good practice to choose the largest instance type that your GoldenGate license allows Use appropriate Amazon EBS storage volume types depending on the database change rate and replication performance The following process sets up a GoldenGate Hub on an Amazon EC2 instance This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 42 1 Add the following entry to the tnsnameora file 
to create an alias For more information about the tnsnameora file see the Oracle GoldenGate documentation $ cat /example/config/tnsnamesora TEST= (DESCRIPTION= (ENABLE=BROKEN) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=ec2 dns)(PORT=8200)) ) ( CONNECT_DATA= (SID=ORCL) ) ) 2 Next create subdirectories in the GoldenGate directory by using the Amazon EC2 command line shell and ggsci the GoldenGate command interpreter The subdirectories are created under the gg directory and include directories for parameter report and check point files: prompt$ cd /gg prompt$ /ggsci GGSCI> CREATE SUBDIRS 3 Create a GLOBALS parameter file using the Amazon EC2 command line shell Parameters that affect all GoldenGate processes are defined in the GLOBALS parameter file The following example creates the necessary file: prompt$ cd $GGHOME prompt$ vi GLOBALS CheckpointTable oggadm1oggchkpt This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 43 4 Configure the manager Add the following lines to the GLOBALS file and then start the manager by using ggsci : PORT 8199 PurgeOldExtracts /dirdat/* UseCheckpoints MINKEEPDAYS When you have completed this process the GoldenGate Hub is ready for use Next you set up the source and destination databases Setting up the source database for use with Oracle GoldenGate To replicate data to the destination database in AWS you need to se t up a source database for GoldenGate Use the following procedure to set up the source database This process is the same for both Amazon RDS and Oracle Database on Amazon EC2 1 Set the compatible parameter to the same as your destination database (for Amazon RDS as the destination) 2 Enable supplemental logging and force logging 3 Verify the database is in archivelog mode 4 Set ENABLE_GOLDENGATE_REPLICATION parameter to TRUE 5 Set the retention period for archived redo logs for the GoldenGate source database 6 Create a GoldenGate user account on the source database Setting up the destination database for use with Oracle GoldenGate The following steps must be performed on the target database for GoldenGate replication to work These steps are the same for both Amazon RDS and Oracle Database on Amazon EC2 1 Create a GoldenGate user account on the destination database 2 Grant the necessary privileges that are listed in the following example to the GoldenGate user: This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 44 CREATE SESSION ALTER SESSION CREATE CLUST ER CREATE INDEXTYPE CREATE OPERATOR CREATE PROCEDURE CREATE SEQUENCE CREATE TABLE CREATE TRIGGER CREATE TYPE SELECT ANY DICTIONARY CREATE ANY TABLE ALTER ANY TABLE LOCK ANY TABLE SELECT ANY TABLE INSERT ANY TABLE UPDATE ANY TABLE DELETE ANY TA BLE Working with the Extract and Replicat utilities of Oracle GoldenGate The Oracle GoldenGate Extract and Replicat utilities work together to keep the source and destination databases synchronized by means of incremental transaction replication using trail files All changes that occur on the source database are automatically detected by Extract and then formatted and transferred to trail files on the GoldenGate Hub on premises or on the Amazon EC2 
instance After the initial load is completed the Replicat process reads the data from these files and replicates the data to the destination database nearly continuously Running the Extract process of Oracle GoldenGate The Extract process of Oracle GoldenGate retrieves converts and outputs data from the source database to trail files Extract queues transaction details to memory or to temporary disk storage When the transaction is committed to the source database Extract flushes all of the transaction details to a trail file for routing to the GoldenGate Hub on premises or on the Amazon EC2 instance and then to the destination database This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 45 The following process enables and starts the Extract process 1 First configure the Extract parameter file on the GoldenGate Hub The following example shows an Extract parameter file: EXTRACT EABC SETENV (ORACLE_SID=ORCL) SETENV (NLSLANG=AL32UT F8) USERID oggadm1@TEST PASSWORD XXXXXX EXTTRAIL /path/to/goldengate/dirdat/ab IGNOREREPLICATES GETAPPLOPS TRANLOGOPTIONS EXCLUDEUSER OGGADM1 TABLE EXAMPLETABLE; 2 On the GoldenGate Hub launch the GoldenGate command line interface (ggsci ) Log in to the source database The following example shows the format for logging in: dblogin userid <user>@<db tnsname> 3 Next add a checkpoint table for the database: add checkpointtable Add transdata to turn on supplemental logging for the database table: add trandata <user><table> • Alternatively you can add transdata to turn on supplemental logging for all tables in the database: add trandata <user>* 4 Using the ggsci command line use the following commands to enable the Extract process: add extract <extract name> tranlog INTEGRATED tranlog begin now This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Orac le Databases to AWS 46 add exttrail <pathtotrailfromthe paramfile> extract <extractname fromparamfile> MEGABYTES Xm 5 Register the Extract process with the database so that the archive logs are not deleted This lets you recover old uncommitted transactions if necessary To register the Extract process with the database use the following command: register EXTRACT <extract process name> DATABASE 6 To start the Extract process use the following command: start <extract process name> Running the Replicat process of Oracle GoldenGate The Replicat process of Oracle GoldenGate is used to push transaction information in the trail files to the destination database The following process enables and starts the Replicat pro cess 1 First configure the Replicat parameter file on the GoldenGate Hub (on premises or on an Amazon EC2 instance) The following listing shows an example Replicat parameter file: REPLICAT RABC SETENV (ORACLE_SID=ORCL) SETENV (NLSLANG=AL32UTF8) USERID oggadm1@TARGET password XXXXXX ASSUMETARGETDEFS MAP EXAMPLETABLE TARGET EXAMPLETABLE; 2 Launch the Oracle GoldenGate command line interface ( ggsci ) Log in to the destination database The following example shows the format for logging in: dblogin userid <user>@<db tnsname> This version has been archived For the latest version of this document visit: 
https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 47 3 Using the ggsci command line add a checkpoint table Note that user indicates the Oracle GoldenGate user account not the owner of the destination table schema The following example creates a checkpoint table named gg_checkpoint : add checkpointtable <user>gg_checkpoint 4 To enable the Replicat process use the following command: add replicat <replicat name> EXTTRAIL <extract trail file> CHECKPOINTTABLE <user>gg_checkpoint 5 To start the Replicat process use the following command: start <replicat name> Transferring files to AWS Migrating databases to AWS require s the transfer of files to AWS There are multiple methods of transferring files to AWS This section describe s the methods you can adopt during the migrat ion process AWS DataSync AWS DataSync is an online data transfer service that can accelerate moving data between an onpremises storage system and AWS storage services such as S3 EFS or FSx for Windows File Server AWS DataSync agent connects to the on premises storage and copies data and metadata securely to AWS AWS DataSync is the recommended option when you have large volume of small files 100 MB or less AWS Storage Gateway AWS Storage Gateway is a service connecting an on premises software applianc e with cloud based storage to provide seamless and secure integration between an organization’s on premises IT environment and the AWS storage infrastructure The service allows you to securely store data in the AWS Cloud for scalable and cost effective st orage AWS Storage Gateway supports open standard storage protocols that work with your existing applications It provides low latency performance by maintaining This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 48 frequently accessed data on premises while securely storing all of your data encrypted in Amazon S3 or Amazon S3 Glacier AWS Storage Gateway works with moderate or large file sizes AWS Storage Gateway S3 File Gateway interface provides a Network File System/Server Messag e Block (NFS/SMB ) file share in your on premises environment They run a local VM in your on premises data center Files can be copied at the on premises location to this local file share These files are copied to the designated S3 bucket in AWS If your workload uses Windows OS you can use Amazon FSx File Gateway to copy files fr om on premises via SMB clients to the Amazon FSx for Windows File Server Amazon RDS integration with S3 You can use S3 integration to transfer files between an Amazon S3 bucket and an Amazon RDS instance The Amazon RDS instance accesses S3 bucket via a defined IAM role so you can have granular bucket or object level policies for the Amazon RDS instance S3 integration is useful when you have to use Oracle utilities like utl_file or datapump Amazon RDS Oracle rdsadmin package supports both upload and download from S3 buckets Tsunami UDP Tsunami UDP is an open source file transfer protocol that uses TCP control and UDP data for transfer over long dista nce networks at a very fast rate When you use UDP for transfer you gain more throughput than is possible with TCP over the same networks You can download Tsunami UDP from the Tsunami UDP Prot 
ocol page at SourceForgenet1 For moderate to large databases between 100 GB to 5 TB Tsunami UDP is an option as described in Using Tsunami to Upload Files to EC2 You can achieve the same results using commercial third party WAN acceleration tools For very large databases over 5 TB using AWS Snow Family devices might be a better option For smaller databases you can also use the Amazon S3 multipart upload capability to keep it simple and efficient AWS Snow Family AWS Snow Family offers a number of physical devices and capacity points transport up to exabytes of data into and out of AWS Snow Family devices are owned and managed by AWS and integrate with AWS security monitoring storage management and This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 49 computing capabilities For example AWS Snowball Edge has 80 TB of us able capacity and can be mounted as an NFS mount point in the onpremises location For smaller capacity AWS Snowcone offers 8 TB of storage and has the capability to run the AWS DataSync agent Conclusion This whitepaper described the preferred methods for migrating Oracle Database to AWS for both Amazon EC2 and Amazon RDS Depending on your business needs and your migration strategy you will probably use a combination of methods to migrate your database For best performance during migration it is critical to choose the appropriate level of resources on AWS especially for Amazon EC2 instances and Amazon EBS General Purpose (SSD) volume types Contributors Contributors to this document include : • Jayaraman Vellore Sampathkumar AWS Solution Architect – Database Amazon Web Services • Praveen Katari AWS Partner Solution Architect Amazon Web Services Further reading For additional information on data migration with AWS services consult the following resources: Oracle Database on AWS: • Advanced Architectures for Oracle Database on Amazon EC2 • Choosing the Operating System for Oracle Workloads on Amazon EC2 • Determining the IOPS Needs for Oracle Database on AWS • Best Practic es for Running Oracle Database on AWS • AWS Case Study: Amazoncom Oracle DB Backup to Amazon S3 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 50 Oracle on AWS • Oracle and Amazon Web Services • Amazon RDS for Oracle AWS Database Migration Service ( AWS DMS) • AWS Database Mig ration Service Oracle licensing on AWS • Licensing Oracle Software in the Cloud Computing Environment AWS service details • Cloud Products • AWS Documentation Index • AWS Whitepapers & Guides AWS pricing information • AWS Pricing • AWS Pricing Calculator VMware Cloud on AWS • VMware Cloud on AWS Document version s Date Description January 27 2022 Update to text on page 30 for clarity October 8 2021 General updates and inclusion of AWS Snowcone and AWS DataSync services for migration August 2018 General updates December 2014 First publication
|
General
|
consultant
|
Best Practices
|
Streaming_Data_Solutions_on_AWS_with_Amazon_Kinesis
|
Streaming Data Solutions on AWS First Published September 13 2017 Updated September 1 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Real time and near realtime application scenarios 1 Difference between batch and stream processing 2 Stream processing challenges 2 Streaming data solutions: examples 2 Scenario 1: Internet offering based on location 3 Processing streams of data with AWS Lambda 5 Summary 6 Scenar io 2: Near realtime data for security teams 6 Amazon Kinesis Data Firehose 7 Summary 12 Scenario 3: Preparing clic kstream data for data insights processes 13 AWS Glue and AWS Glue streaming 14 Amazon DynamoDB 15 Amazon SageMaker and Amazon SageMaker service endpoints 16 Inferring data insights in real time 16 Summary 17 Scenario 4: Device sensors realtime anomaly detection and notifications 17 Amazon Kinesis Data Analytics 19 Summary 21 Scenario 5: Real time tele metry data monitoring with Apache Kafka 22 Amazon Managed Streaming for Apache Kafka (Amazon MSK) 23 Amazon EMR with Spark Streaming 25 Summary 27 Conclusion 28 Contributors 28 Document versions 28 Abstract Data engineers data analysts and big data developers are looking to process and analyze their data in realtime so their companies can learn about what their customers applications and products are doing right now and react promptly This whitepaper describes how services such as Amazon Kinesis Data St reams Amazon Kinesis Data Firehose Amazon EMR Amazon Kinesis Data Analytics Amazon Managed Streaming for Apache Kafka (Amazon MSK) and other services can be used to implement real time applications and provides common design patterns using these services Amazon Web Services Streaming Data Solutions on AWS 1 Introduction Businesses today receive data at massive scale and speed due to the explosive growth of data sources that continuously generate streams of data Whether it is log data from application servers clickstream data from websites and mobile apps o r telemetry data from Internet of Things (IoT) devices it all contains information that can help you learn about what your customers applications and products are doing right now Having the ability to process and analyze this data in real time is esse ntial to do things such as continuously monitor your applications to ensure high service uptime and personalize promotional offers and product recommendations Real time and near real time processing can also make other common use cases such as website an alytics and machine learning more accurate and actionable by making data available to these applications in seconds or m inutes instead of hours or days Real time and nearrealtime application scenarios You can use streaming data services for real time and near realtime applications such as application monitoring fraud detection and live 
leaderboards Realtime use cases require millisecond end toend latencies – from ingestion to processing all the way to emitting the results to target data stores a nd other systems For example Netflix uses Amazon Kinesis Data Streams to monitor the communications between all its applications so it can detect and fix issues quickly ensuring high service u ptime and availability to its customers While the most commonly applicable use case is application performance monitoring there are an increasing number of real time applications in ad tech gaming and IoT that fall under this category Common nearrealtime use cases include analytics on data stores for data science and machine learning (ML) You can use streaming data solutions to continuously load real time data into your data lakes You can then update ML models more frequently as new data becomes av ailable ensuring accuracy and reliability of the outputs For example Zillow uses Kinesis Data Streams to collect public record data and multiple listing service ( MLS) listings and then provide home buyers and sellers with the most up to date home value estimates in near realtime ZipRecruiter uses Amazon MSK for their event logging pipelines which are critical infrastructu re components that collect store and continually process over six billion events per day from the ZipRecruiter employment marketplace Amazon Web Services Streaming Data Solutions on AWS 2 Difference between batch and stream processing You need a different set of tools to collect prepare and process real time streaming data than those tools that you have traditionally used for batch analytics With traditional analytics you gather the data load it periodically into a database and an alyze it hours days or weeks later Analyzing real time data requires a different approach Stream processing applications process data continuously in real time even before it is stored Streaming data can come in at a blistering pace and data volumes can vary up and down at any time Stream data processing platforms have to be able to handle the speed and variability of incoming data and process it as it arrives often millions to hundreds of millions of events per hour Stream processing challenges Processing real time data as it arrives can enable you to make decisions much faster than is possible with traditional data analytics technologies However building and operating your own custom streaming data pipelines is complicated and resource intensiv e: • You have to build a system that can cost effectively collect prepare and transmit data coming simultaneously from thousands of data sources • You need to fine tune the storage and compute resources so that data is batched and transmitted efficiently for maximum throughput and low latency • You have to deploy and manage a fleet of servers to scale the system so you can handle the varying speeds of data you are going to throw at it Version upgrade is a complex and costly process After you have built this platform you have to monitor the system and recover from any server or network failures by catching up on data processing from the appropriate point in the stream without creating duplicate data You also need a dedicated team for infrastructure man agement All of this takes valuable time and money and at the end of the day most companies just never get there and must settle for the status quo and operate their business with information that is hours or days old Streaming data solutions : examples To better understand how organizations are doing real time 
data processing using AWS services, this whitepaper uses four examples. Each example reviews a scenario and discusses in detail how AWS real-time data streaming services are used to solve the problem.

Scenario 1: Internet offering based on location

Company InternetProvider provides internet services with a variety of bandwidth options to users across the world. When a user signs up for internet, company InternetProvider provides the user with different bandwidth options based on the user's geographic location. Given these requirements, company InternetProvider implemented an Amazon Kinesis data stream to consume user details and location. The user details and location are enriched with different bandwidth options prior to publishing back to the application. AWS Lambda enables this real-time enrichment.

Processing streams of data with AWS Lambda

Amazon Kinesis Data Streams

Amazon Kinesis Data Streams enables you to build custom real-time applications using popular stream processing frameworks and load streaming data into many different data stores. A Kinesis stream can be configured to continuously receive events from hundreds of thousands of data producers, delivered from sources like website clickstreams, IoT sensors, social media feeds, and application logs. Within milliseconds, data is available to be read and processed by your application.

When implementing a solution with Kinesis Data Streams, you create custom data processing applications known as Kinesis Data Streams applications. A typical Kinesis Data Streams application reads data from a Kinesis stream as data records. Data put into Kinesis Data Streams is ensured to be highly available and elastic, and is available in milliseconds. You can continuously add various types of data, such as clickstreams, application logs, and social media, to a Kinesis stream from hundreds of thousands of sources. Within seconds, the data will be available for your Kinesis applications to read and process from the stream.

Amazon Kinesis Data Streams is a fully managed streaming data service. It manages the infrastructure, storage, networking, and configuration needed to stream your data at the level of your data throughput.

Sending data to Amazon Kinesis Data Streams

There are several ways to send data to Kinesis Data Streams, providing flexibility in the designs of your solutions:
• You can write code utilizing one of the AWS SDKs that are supported by multiple popular languages.
• You can use the Amazon Kinesis Agent, a tool for sending data to Kinesis Data Streams.

The Amazon Kinesis Producer Library (KPL) simplifies producer application development by enabling developers to achieve high write throughput to one or more Kinesis data streams. The KPL is an easy-to-use, highly configurable library that you install on your hosts. It acts as an intermediary between your producer application code and the Kinesis Streams API actions. For more information about the KPL and its ability to produce events synchronously and asynchronously, with code examples, see Writing to your Kinesis Data Stream Using the KPL.

There are two different operations in the Kinesis Streams API that add data to a stream: PutRecords and PutRecord. The PutRecords operation sends multiple records to your stream per HTTP request, while PutRecord submits one record per HTTP request. To achieve higher throughput, for most applications, use PutRecords.
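A hedged producer sketch follows, using the AWS SDK for Python (boto3) to batch a few signup events onto a stream with PutRecords. The stream name (user-signup-stream) and the event fields are assumptions made for this illustration, not values prescribed by the scenario.

import json
import boto3

STREAM_NAME = "user-signup-stream"  # assumed stream name; adjust to your environment

kinesis = boto3.client("kinesis")

def put_signup_events(events):
    """Send a batch of signup events with PutRecords.

    Each record needs a Data blob and a PartitionKey; records that share a
    partition key land on the same shard, which preserves their order.
    """
    records = [
        {
            "Data": json.dumps(event).encode("utf-8"),
            "PartitionKey": event["user_id"],
        }
        for event in events
    ]
    response = kinesis.put_records(StreamName=STREAM_NAME, Records=records)

    # PutRecords is not all-or-nothing: check FailedRecordCount and retry
    # only the records that were throttled or rejected.
    if response["FailedRecordCount"]:
        print(f"{response['FailedRecordCount']} record(s) failed; retry with backoff")
    return response

put_signup_events([
    {"user_id": "u-1001", "latitude": 47.61, "longitude": -122.33},
    {"user_id": "u-1002", "latitude": 40.71, "longitude": -74.01},
])

A production producer would add retries with backoff for the failed subset and would typically move to the KPL once per-record overhead becomes the bottleneck. For more information about these APIs, see Adding Data to a Stream.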
The details for each API operation can be found in the Amazon Kinesis Streams API Reference.

Processing data in Amazon Kinesis Data Streams

To read and process data from Kinesis streams, you need to create a consumer application. There are varied ways to create consumers for Kinesis Data Streams. Some of these approaches include using Amazon Kinesis Data Analytics to analyze streaming data, using the KCL, using AWS Lambda, using AWS Glue streaming ETL jobs, and using the Kinesis Data Streams API directly.

Consumer applications for Kinesis streams can be developed using the Kinesis Client Library (KCL), which helps you consume and process data from Kinesis streams. The KCL takes care of many of the complex tasks associated with distributed computing, such as load balancing across multiple instances, responding to instance failures, checkpointing processed records, and reacting to resharding. The KCL enables you to focus on writing record processing logic. For more information on how to build your own KCL application, see Using the Kinesis Client Library.

You can subscribe Lambda functions to automatically read batches of records off your Kinesis stream and process them if records are detected on the stream. AWS Lambda periodically polls the stream (once per second) for new records, and when it detects new records, it invokes the Lambda function, passing the new records as parameters. The Lambda function runs only when new records are detected. You can map a Lambda function to a shared throughput consumer (standard iterator), or you can build a consumer that uses a feature called enhanced fan-out when you require dedicated throughput that you do not want to contend with other consumers that are receiving data from the stream. This feature enables consumers to receive records from a stream with throughput of up to 2 MB of data per second per shard.

For most cases, Kinesis Data Analytics, the KCL, AWS Glue, or AWS Lambda should be used to process data from a stream. However, if you prefer, you can create a consumer application from scratch using the Kinesis Data Streams API. The Kinesis Data Streams API provides the GetShardIterator and GetRecords methods to retrieve data from a stream. In this pull model, your code extracts data directly from the shards of the stream. For more information about writing your own consumer application using the API, see Developing Custom Consumers with Shared Throughput Using the AWS SDK for Java. Details about the API can be found in the Amazon Kinesis Streams API Reference.

Processing streams of data with AWS Lambda

AWS Lambda enables you to run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or backend service with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

AWS Lambda integrates natively with Amazon Kinesis Data Streams. The polling, checkpointing, and error handling complexities are abstracted when you use this native integration, which allows the Lambda function code to focus on business logic processing.
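As a hedged illustration of that integration, the following handler sketch decodes a batch of Kinesis records and attaches bandwidth options, in the spirit of Scenario 1. The lookup table and field names are assumptions for this example; the whitepaper does not prescribe the function's internals.

import base64
import json

# Assumed enrichment data; in the scenario this comes from the function's library.
BANDWIDTH_OPTIONS = {
    "NA": ["100 Mbps", "300 Mbps", "1 Gbps"],
    "EU": ["50 Mbps", "250 Mbps"],
    "DEFAULT": ["25 Mbps"],
}

def handler(event, context):
    """Invoked by the Kinesis event source mapping with a batch of records."""
    enriched = []
    for record in event["Records"]:
        # Kinesis record payloads arrive base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        region = payload.get("region", "DEFAULT")
        payload["bandwidth_options"] = BANDWIDTH_OPTIONS.get(
            region, BANDWIDTH_OPTIONS["DEFAULT"]
        )
        enriched.append(payload)

    # In the scenario, the enriched records are published back to the
    # application (for example, through an API call or another stream).
    return {"records_processed": len(enriched)}

You can map a Lambda function to a shared throughput consumer (standard iterator) or to a dedicated throughput consumer with enhanced fan-out. With a standard iterator, Lambda polls each shard in your Kinesis stream for records using the HTTP protocol. To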
minimize latency and maximize read throughput you can create a data stream consumer with enhanced fan out Stream consumers in this architecture get a dedicated connection to each shard without competing with othe r applications reading from the same stream Amazon Kinesis Data Streams pushes records to Lambda over HTTP/2 By default AWS Lambda invoke s your function as soon as records are available in the stream To buffer the records for batch scenarios you can i mplement a batch window for up to five minutes at the event source If your function returns an error Lambda retries the batch until processing succeeds or the data expires Summary Company InternetProvider leveraged Amazon Kinesis Data Stream s to stream user details and location The stream of record was consumed by AWS Lambda to enrich the data with bandwidth options stored in the function’s library After the enrichment AWS Lambda published the bandwidth options back to the application Amaz on Kinesis Data Stream s and AWS Lambda handled provisioning and management of servers enabling Company InternetProvider to focus more on business application development Scenario 2: Near realtime data for security teams Company ABC2Badge provides sensor s and badges for corporate or large scale events such as AWS re:Invent Users sign up for the event and receive unique badges that the sensors pick up across the campus As users pass by a sensor their anony mized information is recorded into a relational database In an upcoming event due to the high volume of attendees ABC2Badge has been requested by the event security team to gather data for the most concentrated areas of the campus every 15 minutes Thi s will give the security team enough time to react and disperse security personal proportionally to concentrated areas Given this new requirement from the security team and the inexperience of building a streaming Amazon Web Services Streaming Data Solutions on AWS 7 solution to process date in near realtime ABC2Badge is looking for a simple yet scalable and reliable solution Their current data warehouse solution is Amazon Redshift While reviewing the features of the Amazon Kinesis services they recognize d that Amazon Kinesis Data Firehose can receive a stream of data records batch the records based on buffer size and/or time interval and insert them into Amazon Redshift They created a Kinesis Data Firehose delivery stream and configured it so it would copy data to their Amazon Redshift tables every five minutes As part of this new solution they used the Amazon Kinesis Agent on their servers Every five minutes Kinesis Data Firehose load s data into Amazon Redshift where the business intelligence ( BI) team is enabled to perform its analysis and send the data to the security team every 15 minutes New solution using Amazon Kinesis Data Firehose Amazon Kinesis Data Firehose Amazon Kinesis Data Firehose is the easiest way to load streaming data into AWS It can capture transform and load streaming data into Amazon Kinesis Data Analytics Amazon Simple Storage Service (Amazon S3) Amazon Redshift Amazon Elasticsearch Service (Amazon ES) and Splunk Additionally Kinesis Data Firehose can load streaming data into any custom HTTP endpoint or HTTP endpoints owned by supported thirdparty service providers Kinesis Data Firehose enables near realtime analytics with existing business intelligence tools and dashboards that you’re already using today It’s a fully managed serverless service that automatically scales to match the throughput of your data and requires 
no ongoing administration Kinesis Data Firehose can batch compress and Amazon Web Services Streaming Data Solutions on AWS 8 encrypt the data before loading minimizing the amount of storage used at the destination and increasing security It can also transform the source data using AWS Lambda and deliver the transformed data to destin ations You configure your data producers to send data to K inesis Data Firehose which automatically delivers the data to the destination that you specify Sending data to a Firehose delivery stream To send data to your delivery stream there are several o ptions AWS offers SDKs for many popular programming languages each of which provides APIs for Amazon Kinesis Data Firehose AWS has a utility to help send data to your delivery stream Kinesis Data Firehose has been integrated with other AWS services to send data directly from those services into your delivery stream Using Amazon Kinesis agent Amazon Kinesis agent is a standalone software application that continuously monitors a set of log files for new data to be sent to the delivery stream The agent automat ically handles file rotation checkpointing retries upon failures and emits Amazon CloudWatch metrics for monitoring and troubleshooting of the deliv ery stream Additional configurations such data pre processing monitoring multiple file directories and writing to multiple delivery streams can be applied to the agent The agent can be installed on Linux or Window sbased servers such as web servers log servers and database servers Once the agent is installed you simply specify the log files it will monitor and the delivery stream it will send to The agent will durably and reliably send new data to the delivery stream Using API with AWS SDK and AWS services as a source The Kinesis Data Firehose API offers two operations for sending data to your delivery stream PutRecord sends one data record within one call PutRecordBatch can send multiple data records within one call and can achieve higher t hroughput per producer In each method you must specify the name of the delivery stream and the data record or array of data records when using this method For more information and sample code for the Kinesis Data Firehose API operations see Writing to a Firehose Delivery Stream Using the AWS SDK Kinesis Data Firehose also runs with Kinesis Data Streams CloudW atch Logs CloudW atch Events Amazon Simple Notification Service (Amazon SNS) Amazon API Amazon Web Services Streaming Data Solutions on AWS 9 Gateway and AWS IoT You can scalably and reliably sen d your streams of data logs events and IoT data directly into a K inesis Data Firehose destinati on Process ing data before delivery to destination In some scenarios you might want to transform or enhance your streaming data before it is delivered to its destination For example data producers might send unstructured text in each data record and yo u need to transform it to JSON before delivering it to Amazon ES Or you might want to convert the JSON data into a columnar file format such as Apach e Parquet or Apache ORC before storing the data in Amazon S3 Kinesis Data Firehose has built in data format conversion capability With this you can easily convert your streams of JSON data into Apache Parquet or Apache ORC file formats Data transformation flow To enable streaming data transformations Kinesis Data Firehose uses a Lambda function that you create to transform your data Kinesis Data Firehose buffers incoming data to a specified buffer size for the function and then invokes 
the specified Lambda function asynchronously. The transformed data is sent from Lambda to Kinesis Data Firehose, and Kinesis Data Firehose delivers the data to the destination (a handler sketch appears at the end of this section).

Data format conversion

You can also enable Kinesis Data Firehose data format conversion, which will convert your stream of JSON data to Apache Parquet or Apache ORC. This feature can only convert JSON to Apache Parquet or Apache ORC. If you have data that is in CSV, you can transform that data via a Lambda function to JSON and then apply the data format conversion.

Data delivery

As a near real-time delivery stream, Kinesis Data Firehose buffers incoming data. After your delivery stream's buffering thresholds have been reached, your data is delivered to the destination you've configured. There are some differences in how Kinesis Data Firehose delivers data to each destination, which this paper reviews in the following sections.

Amazon S3

Amazon S3 is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web. It's designed to deliver 99.999999999% durability and scale past trillions of objects worldwide.

Data delivery to Amazon S3

For data delivery to S3, Kinesis Data Firehose concatenates multiple incoming records based on the buffering configuration of your delivery stream, and then delivers them to Amazon S3 as an S3 object. The frequency of data delivery to S3 is determined by the S3 buffer size (1 MB to 128 MB) or buffer interval (60 seconds to 900 seconds), whichever comes first.

Data delivery to your S3 bucket might fail for various reasons. For example, the bucket might not exist anymore, or the AWS Identity and Access Management (IAM) role that Kinesis Data Firehose assumes might not have access to the bucket. Under these conditions, Kinesis Data Firehose keeps retrying for up to 24 hours until the delivery succeeds. The maximum data storage time of Kinesis Data Firehose is 24 hours; if data delivery fails for more than 24 hours, your data is lost.

Amazon Redshift

Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing BI tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query running.

Data delivery to Amazon Redshift

For data delivery to Amazon Redshift, Kinesis Data Firehose first delivers incoming data to your S3 bucket in the format described earlier. Kinesis Data Firehose then issues an Amazon Redshift COPY command to load the data from your S3 bucket to your Amazon Redshift cluster. The frequency of data COPY operations from S3 to Amazon Redshift is determined by how fast your Amazon Redshift cluster can finish the COPY command. For an Amazon Redshift destination, you can specify a retry duration (0–7200 seconds) when creating a delivery stream to handle data delivery failures. Kinesis Data Firehose retries for the specified time duration and skips that particular batch of S3 objects if unsuccessful. The skipped objects' information is delivered to your S3 bucket as a manifest file in the errors/ folder, which you can use for manual backfill. Following is an architecture diagram of the Kinesis Data Firehose to Amazon Redshift data flow.
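The transformation Lambda referenced above has a simple contract: Firehose passes a batch of base64-encoded records, and each record must come back with the same recordId, a result of Ok, Dropped, or ProcessingFailed, and base64-encoded data. The sketch below assumes ABC2Badge-style sensor events with hypothetical field names; it is an illustration, not the whitepaper's implementation.

import base64
import json

def handler(event, context):
    """Kinesis Data Firehose transformation Lambda."""
    output = []
    for record in event["records"]:
        try:
            payload = json.loads(base64.b64decode(record["data"]))
            # Example transformation: keep only the fields the security
            # team needs (assumed names) before delivery.
            transformed = {
                "badge_id": payload.get("badge_id"),
                "sensor_id": payload.get("sensor_id"),
                "seen_at": payload.get("timestamp"),
            }
            data_out = base64.b64encode(
                (json.dumps(transformed) + "\n").encode("utf-8")
            ).decode("utf-8")
            output.append(
                {"recordId": record["recordId"], "result": "Ok", "data": data_out}
            )
        except (ValueError, KeyError):
            # Hand the record back unchanged and let Firehose route it to the
            # processing-failures location.
            output.append(
                {"recordId": record["recordId"], "result": "ProcessingFailed",
                 "data": record["data"]}
            )
    return {"records": output}

Appending a newline to each transformed record keeps the objects that Firehose writes to S3 line-delimited, which suits the Amazon Redshift COPY step described above. Although this data flow is unique to Amazon Redshift, Kinesis Data Firehose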
follows similar patterns for the other destination targets Data flow from Kinesis Data Firehose to Amazon Redshift Amazon E lasticsearch Service (Amazon ES) Amazon ES is a fully managed service that delivers the Elasticsearch easy touse APIs and real time capabilities along with the availability scalability and security required by production workloads Amazon ES makes it easy to deploy operate and scale Elasticsea rch for log analytics full text search and application monitoring Data delivery to Amazon E S For data delivery to Amazon E S Kinesis Data Firehose buffers incoming records based on the buffering configuration of your delivery stream and then generates an Elasticsearch bulk request to index multiple records to your Elasticsearch cluster The frequency of data delivery to Amazon E S is determined by the Elasticsearch buffer size (1 MB to 100 MB) and buffer interval (60 seconds to 900 seconds) values whic hever comes first For the Amazon E S destination you can specify a retry duration (0 –7200 seconds) when creating a delivery stream Kinesis Data Firehose retries for the specified time duration and then skips that particular index request The skipped d ocuments are delivered to your S3 bucket in the elasticsearch_failed/ folder which you can use for manual backfill Amazon Kinesis Data Firehose can rotate your Amazon ES index based on a time duration Depending on the rotation option you choose (NoRotation OneHour Amazon Web Services Streaming Da ta Solutions on AWS 12 OneDay OneWeek or OneMonth ) Kinesis Data Firehose appends a portion of the Coordinated Universal Time ( UTC) arrival timestamp to your specified index name Custom HTTP endpoint or supported thirdparty service provider Kinesis Data Firehose can send data either to Custom HTTP endpoints or supported thirdparty providers such as Datadog Dynatrace LogicMonitor MongoDB New Relic Splunk and Sumo Logic Data delivery to custom HTTP endpoints For K inesis Data Firehose to successfully deliver data to custom HTTP endpoints these endpoints must accept requests and send responses using certain K inesis Data Firehose request and response formats When delivering data to an HTTP endpoint owned by a supported third party ser vice provider you can use the integrated AWS Lambda service to create a function to transform the incoming record(s) to the format that matches the format the service provider's integration is expecting For data delivery frequency each service provider has a recommended buffer size Work with your service provider for more information on their recommended buffer size For data delivery failure handling Kinesis Data Firehose establishes a connection with the HTTP endpoint first by waiting for a response from the destination Kinesis Data Firehose continues to establish connection until the retry duration expires After that Kinesis Data Firehose considers it a data delivery failure and backs up the data to your S3 bucket Summary Kinesis Data Firehose can persist ently deliver your streaming data to a supported destination It’s a fully managed solution requiring little or no development For Company ABC2Badge using K inesis Data Firehose was a natural choice They were already using Amazon Redshift as their data warehouse solution Because their data sources continuously wr ote to transaction logs they were able to leverage the Amazon Kinesis Agent to stream that data without writing any additional code Now that company ABC2Badge has created a stream of sensor records and are receiving these records via K inesis Data 
Firehose they can use this as the basis for the security team use case Amazon Web Services Streaming Data Solutions on AWS 13 Scenario 3: Preparing clickstream data for data insights processes Fast Sneakers is a fashion boutique with a focus on trendy sneakers The price of any given pair of shoes can go up or down depending on inventory and trends such as what celebrity or sports star was spotted wearing brand name sneakers on TV last night It is importan t for Fast Sneakers to track and analyze those trends to maximize their revenue Fast Sneakers does not want to introduce additional overhead into the project with new infrastructure to maintain They want to be able to split the development to the appropr iate parties where the data engineers can focus on data transformation and their data scientists can work on their ML functionality independently To react quickly and automatically adjust prices according to demand Fast Sneakers streams significant eve nts (like click interest and purchasing data) transforming and augmenting the event data and feeding it to a ML model Their ML model is able to determine if a price adjustment is required This allows Fast Sneakers to automatically modify their pricing t o maximize profit on their products Fast Sneakers realtime price adjustments This architecture diagram shows the real time streaming solution Fast Sneakers created utilizing Kinesis Data Streams AWS Glue and DynamoDB Streams By taking advantage of these services they have a solution that is elastic and reliable without Amazon Web Services Streaming Data Solutions on AWS 14 needing to spend time on setting up and maintaining the supporting infrastructure They can spend their time on what brings value to their company by focusing on a streaming extract transform load (ETL) job and their machine learning model To better understand the architecture and technologies that are used in their workload the following are some details of the services used AWS Glue and AWS Glue streaming AWS Glue is a fully managed ETL service that you can use to catalog your data clean it enrich it and move it reliably between data stores With AWS Glue you can significantly reduce the cost complexity and t ime spent creating ETL jobs AWS Glue is serverless so there is no infrastructure to set up or manage You pay only for the resources consumed while your jobs are running Utilizing AWS Glue you can create a consumer application with a n AWS Glue streaming ETL job This enables you to utilize Apache Spark and other Spark based modules writing to consume and process your event data The next section of this document goes into more depth about this scenario AWS Glue Data Catalog The AWS Glue Data Catalog contains references to data that is used as sources and targets of your ETL jobs in AWS G lue The AWS Glue Data Catalog is an index to the location schema and runtime metrics of your data You can use information in the Data Catalog to create and monitor your ETL jobs Information in the Data Catalog is stored as metadata tables where each table specifies a single data store By setting up a crawler you can automatically assess numerous types of data stores including DynamoDB S3 and Java Database Connectivity ( JDBC ) connected stores extract metadata and schemas and then create table de finitions in the AWS Glue Data Catalog To work with Amazon Kinesis Data Streams in AWS Glue streaming ETL jobs it is best practice to define you r stream in a table in a n AWS Glue Data Catalog database You define a stream sourced table with 
the Kinesis stream in one of the many formats supported (CSV, JSON, ORC, Parquet, Avro, or a custom format with Grok). You can manually enter a schema, or you can leave this step to your AWS Glue job to determine during runtime of the job.

AWS Glue streaming ETL job

AWS Glue runs your ETL jobs in an Apache Spark serverless environment. AWS Glue runs these jobs on virtual resources that it provisions and manages in its own service account. In addition to being able to run Apache Spark-based jobs, AWS Glue provides an additional level of functionality on top of Spark with DynamicFrames. DynamicFrames are distributed tables that support nested data such as structures and arrays. Each record is self-describing, designed for schema flexibility with semi-structured data. A record in a DynamicFrame contains both data and the schema describing the data. Both Apache Spark DataFrames and DynamicFrames are supported in your ETL scripts, and you can convert them back and forth. DynamicFrames provide a set of advanced transformations for data cleaning and ETL.

By using Spark Streaming in your AWS Glue job, you can create streaming ETL jobs that run continuously and consume data from streaming sources like Amazon Kinesis Data Streams, Apache Kafka, and Amazon MSK. The jobs can clean, merge, and transform the data, then load the results into stores including Amazon S3, Amazon DynamoDB, or JDBC data stores (a job-script sketch appears at the end of this section). AWS Glue processes and writes out data in 100-second windows by default. This allows data to be processed efficiently and permits aggregations to be performed on data arriving later than expected. You can configure the window size, adjusting it to trade response speed against the accuracy of your aggregation. AWS Glue streaming jobs use checkpoints to track the data that has been read from the Kinesis data stream. For a walkthrough on creating a streaming ETL job in AWS Glue, you can refer to Adding Streaming ETL Jobs in AWS Glue.

Amazon DynamoDB

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-Region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than ten trillion requests per day and can support peaks of more than 20 million requests per second.

Change data capture for DynamoDB streams

A DynamoDB stream is an ordered flow of information about changes to items in a DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table. DynamoDB is integrated with AWS Lambda so that you can create triggers: pieces of code that automatically respond to events in DynamoDB streams. With triggers, you can build applications that react to data modifications in DynamoDB tables. When a stream is enabled on a table, you can associate the stream Amazon Resource Name (ARN) with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records.
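The following is the job-script sketch referenced in the AWS Glue streaming discussion above. It reads the stream-backed Data Catalog table, converts each micro-batch to a DynamicFrame, and writes Parquet to S3. The database, table, bucket, and checkpoint names are assumptions for this example, and the deployment details of a real Glue job are omitted.

import sys
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the stream through the Data Catalog table that fronts the Kinesis stream.
clicks = glue_context.create_data_frame.from_catalog(
    database="fast_sneakers",         # assumed catalog database
    table_name="clickstream_events",  # assumed stream-backed table
    additional_options={"startingPosition": "TRIM_HORIZON", "inferSchema": "true"},
)

def process_batch(data_frame, batch_id):
    # Each micro-batch arrives as a Spark DataFrame; skip empty windows.
    if data_frame.count() == 0:
        return
    dynamic_frame = DynamicFrame.fromDF(data_frame, glue_context, "enriched")
    glue_context.write_dynamic_frame.from_options(
        frame=dynamic_frame,
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/clickstream/"},
        format="parquet",
    )

# Process the stream in windows and checkpoint progress (paths assumed).
glue_context.forEachBatch(
    frame=clicks,
    batch_function=process_batch,
    options={
        "windowSize": "100 seconds",
        "checkpointLocation": "s3://example-bucket/checkpoints/clickstream/",
    },
)
job.commit()

Amazon SageMaker and Amazon SageMaker service endpoints

Amazon SageMaker is a fully managed platform that enables developers and data scientists to build, train, and deploy ML models quickly and at any scale. SageMaker includes modules that can be used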
together or independently to build train and deploy your ML models With Amazon SageMaker service e ndpoints you can create managed hosted endpoint for real time inference with a deployed model that you developed within or outside of Amazon SageMaker By utilizing the AWS SDK you can invoke a SageMaker endpoint passing content type information along with content and then receive real time predictions based on the data passed Th is enables you to keep the design and development of your ML models separated from your code that performs actions on the inferred results This enables your data scientists to focus on ML and the developers who are using the ML model to focus on how the y use it in their code For more information on how to invoke an endpoint in SageMaker see InvokeEnpoint in the Amazon SageMaker API Reference Infer ring data insights in real time The previous architecture diagram shows that Fast Sneakers’ existing web application added a Kinesis Data Stream containing click stream events which provides traffic and event data from the website The product catalog which contains information such as categorization product attributes and pricing and the order table which has data such as items ordered billing shipping and so on are s eparate DynamoDB tables The data stream source and the appropriate DynamoDB tables have their metadata and schemas defined in the AWS Glue Data Catalog to be used by the AWS Glue streaming ETL job Amazon Web Services Streaming Data Solutions on AWS 17 By utilizing Apache Spark Spark streaming and DynamicFr ames in their AWS Glue streaming ETL job Fast Sneakers is able to extract data from either data stream and transform it merging data from the product and order tables With the hydrated data from the transformation the datasets to get inference results from are submitted to a DynamoDB table The DynamoDB Stream for the table triggers a Lambda function for each new record written The Lambda function submits the previously transformed records to a SageMaker Endpoint with the AWS SDK to infer what if any price adjustments are necessary for a product If the ML model identifies an adjustment to the price is required the Lambda function write s the price change to the product in the catalog DynamoDB table Summary Amazon Kinesis Data Streams makes it easy to collect process and analyze real time streaming data so you can get timely insights and react quickly to new information Combined with the AWS Glue serverless data integration service you can create real time event streaming application s that prepare and combine data for ML Because both Kinesis Data Streams and AWS Glue services are fully managed AWS takes away the undifferentiated heavy lifting of managing infrastructure for your big data platform lettin g you focus on generating data insights based on your data Fast Sneakers can utilize real time event processing and ML to enable their website to make fully automated real time price adjustments to maximize their product stock This brings the most valu e to their business while avoiding the need to create and maintain a big data platform Scenario 4: Device sensors realtime anomaly detection and notifications Company ABC4Logistics transports highly flammable petroleum products such as gasoline liquid propane ( LPG) and naphtha from the port to various cities There are hundreds of vehicles which have multiple sensors installed on them for monitoring things such as location engine temperature temperature inside the container driving speed parking location road 
conditions, and so on. One of the requirements ABC4Logistics has is to monitor the temperatures of the engine and the container in real time and alert the driver and the fleet monitoring team in case of any anomaly. To detect such conditions and generate alerts in real time, ABC4Logistics implemented the following architecture on AWS.

ABC4Logistics's device sensors real-time anomaly detection and notifications architecture

Data from device sensors is ingested by AWS IoT Gateway, where the AWS IoT rules engine makes the streaming data available in Amazon Kinesis Data Streams. Using Amazon Kinesis Data Analytics, ABC4Logistics can perform real-time analytics on streaming data in Kinesis Data Streams. Using Kinesis Data Analytics, ABC4Logistics can detect if temperature readings from the sensors deviate from the normal readings over a period of ten seconds, and ingest the record onto another Kinesis data stream, identifying the anomalous records. Amazon Kinesis Data Streams then invokes AWS Lambda functions, which can send the alerts to the driver and the fleet monitoring team through Amazon SNS.

Data in Kinesis Data Streams is also pushed down to Amazon Kinesis Data Firehose. Amazon Kinesis Data Firehose persists this data in Amazon S3, allowing ABC4Logistics to perform batch or near real-time analytics on sensor data. ABC4Logistics uses Amazon Athena to query data in S3, and Amazon QuickSight for visualizations. For long-term data retention, the S3 Lifecycle policy is used to archive data to Amazon S3 Glacier. Important components of this architecture are detailed next.

Amazon Kinesis Data Analytics

Amazon Kinesis Data Analytics enables you to transform and analyze streaming data and respond to anomalies in real time. It is a serverless service on AWS, which means Kinesis Data Analytics takes care of provisioning and elastically scales the infrastructure to handle any data throughput. This takes away all the undifferentiated heavy lifting of setting up and managing the streaming infrastructure and enables you to spend more time on writing streaming applications.

With Amazon Kinesis Data Analytics, you can interactively query streaming data using multiple options, including standard SQL and Apache Flink applications in Java, Python, and Scala, and you can build Apache Beam applications using Java to analyze data streams. These options provide you with the flexibility of using a specific approach depending on the complexity level of the streaming application and source/target support. The following section discusses the Kinesis Data Analytics for Flink Applications option.

Amazon Kinesis Data Analytics for Apache Flink applications

Apache Flink is a popular open-source framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Apache Flink is designed to perform computations at in-memory speed and at scale, with support for exactly-once semantics. Apache Flink-based applications help achieve low latency with high throughput in a fault-tolerant manner. With Amazon Kinesis Data Analytics for Apache Flink, you can author and run code against streaming sources to perform time-series analytics, feed real-time dashboards, and create real-time metrics without managing the complex distributed Apache Flink environment. You can use the high-level Flink programming features in the same way that you use them when hosting the Flink infrastructure yourself.
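As a hedged sketch of the ten-second temperature check described above, the following PyFlink program declares the input and output streams as Kinesis-backed tables and flags vehicles whose average engine temperature over a ten-second tumbling window crosses a threshold. The stream names, field names, Region, and the 110-degree threshold are assumptions; on Kinesis Data Analytics the code would be packaged and deployed as an application (with the Kinesis connector dependency included) rather than run directly.

from pyflink.table import EnvironmentSettings, TableEnvironment

table_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Source: raw sensor readings from a Kinesis data stream (names assumed).
table_env.execute_sql("""
    CREATE TABLE sensor_readings (
        vehicle_id   STRING,
        engine_temp  DOUBLE,
        event_time   TIMESTAMP(3),
        WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kinesis',
        'stream' = 'sensor-readings',
        'aws.region' = 'us-east-1',
        'scan.stream.initpos' = 'LATEST',
        'format' = 'json'
    )
""")

# Sink: anomalous aggregates written to a second Kinesis data stream.
table_env.execute_sql("""
    CREATE TABLE temperature_anomalies (
        vehicle_id   STRING,
        avg_temp     DOUBLE,
        window_end   TIMESTAMP(3)
    ) WITH (
        'connector' = 'kinesis',
        'stream' = 'temperature-anomalies',
        'aws.region' = 'us-east-1',
        'format' = 'json'
    )
""")

# Flag vehicles whose ten-second average exceeds the assumed threshold.
table_env.execute_sql("""
    INSERT INTO temperature_anomalies
    SELECT vehicle_id,
           AVG(engine_temp) AS avg_temp,
           TUMBLE_END(event_time, INTERVAL '10' SECOND) AS window_end
    FROM sensor_readings
    GROUP BY vehicle_id, TUMBLE(event_time, INTERVAL '10' SECOND)
    HAVING AVG(engine_temp) > 110
""")

A production job would typically keep per-vehicle baselines in Flink state rather than a fixed threshold, but the window-and-filter shape stays the same.

Kinesis Data Analytics for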
Apache Flink enables you to create applications in Java Scala Python or SQL to process and analy ze streaming data A typical Flink application reads the data from the input stream or data location or source transform s/filter s or joins data using operator s or function s and store s the data on output stream or data location or sink The following architecture diagram shows some of the supported sources and sinks for the Kinesis Data Analytics Flink application In addition to the pre bundled connectors for source/sink you can also bring in custom connectors to a variety of other source/sinks for Flink Applications on Kinesis Data Analytics Amazon Web Services Streaming Data Solutions on AWS 20 Apache Flink application on Kinesis Data Analytics for real time stream processing Developers can use their preferred IDE to develop Flink applications and deploy them on Kinesis Data Analytics from AWS Management Console or DevOps tools Amazon Kinesis Data Analytics Studio As part of Kinesis Data An alytics service Kinesis Data Analytics Studio is available for customers to interactively query data streams in real time and easily build and run stream processing applications using SQL Python and Scala Studio notebooks are powered by Apache Zeppelin Using Studio notebook you have the ability to develop your Flink Application code in a notebook environment view results of your code in real time and visualize it within your notebook You can create a Studio Notebook powered by Apache Zeppelin and Apache Flink with a single click from Kinesis Data Streams and Amazon MSK console or launch it from Kinesis Data Analytics Console Once you develop the code iteratively as p art of the Kinesis Data Analytics Studio y ou can deploy a notebook as a Kinesis data analytics application to run in streaming mode continuously reading data from your sources writing to your destinations maintaining longrunning application state an d scaling automatically based on the throughput of your source streams Earlier customers used Kinesis Data Analytics for SQL Applications for such interactive analytics of real time streaming data on AWS Amazon Web Services Streaming Data Solutions on AWS 21 Kinesis Data Analytics for SQL applications is still available but for new projects AWS recommend s that you use the new Kinesis Data Analytics Studio Kinesis Data Analytics Studio combines ease of use with advanced analytical capabilities which makes it possible to build sophisticated stream processing applic ations in minutes For making the Kinesis Data Analytics Flink application faulttolerant you can make use of checkpointing and snapshots as described in the Implemen ting Fault Tolerance in Kinesis Data Analytics for Apache Flink Kinesis Data Analytics Flink application s are useful for writing complex streaming analytics applications such as applications with exactly one semantics of data processing checkpoint ing capabilities and processing data from data sources such as Kinesis Data Streams Kinesis Data Firehose Amazon MSK Rabbit MQ and Apache Cassandra including Custom Connectors After processing streaming data in the Flink application you can persist data to various sinks or destinations such as Amazon Kinesis Data Streams Amazon Kinesis Data Firehose Amazon DynamoDB Amazon Elasticsearch Service Amazon Timestream Amazon S3 and so on The Kinesis Data Analytics Flink application also provide s sub second performance guarantees Apache Beam applications for Kinesis Data Analytics Apache Beam is a programming model for processing 
streaming data. Apache Beam provides a portable API layer for building sophisticated data-parallel processing pipelines that may be run across a diversity of engines, or runners, such as Flink, Spark Streaming, Apache Samza, and so on. You can use the Apache Beam framework with your Kinesis data analytics application to process streaming data. Kinesis data analytics applications that use Apache Beam use the Apache Flink runner to run Beam pipelines.

Summary

By making use of the AWS streaming services Amazon Kinesis Data Streams, Amazon Kinesis Data Analytics, and Amazon Kinesis Data Firehose, ABC4Logistics can detect anomalous patterns in temperature readings and notify the driver and the fleet management team in real time, preventing major accidents such as complete vehicle breakdown or fire.

Scenario 5: Real-time telemetry data monitoring with Apache Kafka

ABC1Cabs is an online cab booking services company. All the cabs have IoT devices that gather telemetry data from the vehicles. Currently, ABC1Cabs is running Apache Kafka clusters that are designed for real-time event consumption, gathering system health metrics, activity tracking, and feeding the data into an Apache Spark Streaming platform built on a Hadoop cluster on premises.

ABC1Cabs use Kibana dashboards for business metrics, debugging, alerting, and creating other dashboards. They are interested in Amazon MSK, Amazon EMR with Spark Streaming, and Amazon ES with Kibana dashboards. Their requirement is to reduce the admin overhead of maintaining Apache Kafka and Hadoop clusters, while using familiar open-source software and APIs to orchestrate their data pipeline. The following architecture diagram shows their solution on AWS.

Real-time processing with Amazon MSK and stream processing using Apache Spark Streaming on EMR, with Amazon Elasticsearch Service and Kibana for dashboards

The cab IoT devices collect telemetry data and send it to a source hub. The source hub is configured to send data in real time to Amazon MSK. Using the Apache Kafka producer library APIs, Amazon MSK is configured to stream the data into an Amazon EMR cluster. The Amazon EMR cluster has a Kafka client and Spark Streaming installed to be able to consume and process the streams of data. Spark Streaming has sink connectors which can write data directly to defined indexes of Elasticsearch. Elasticsearch clusters with Kibana can be used for metrics and dashboards. Amazon MSK, Amazon EMR with Spark Streaming, and Amazon ES with Kibana dashboards are all managed services, where AWS manages the undifferentiated heavy lifting of infrastructure management of the different clusters, which enables you to build your application using familiar open-source software with a few clicks. The next section takes a closer look at these services.

Amazon Managed Streaming for Apache Kafka (Amazon MSK)

Apache Kafka is an open-source platform that enables customers to capture streaming data like clickstream events, transactions, IoT events, and application and machine logs. With this information, you can develop applications that perform real-time analytics, run continuous transformations, and distribute this data to data lakes and databases in real time. You can use Kafka as a streaming data store to decouple applications from producers and consumers and enable reliable data transfer between the two components.
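On the producer side of this scenario, a telemetry event could be published to an MSK topic with a standard Kafka client. The sketch below uses the kafka-python library; the bootstrap broker string, topic name, and event fields are assumptions for this illustration (the real broker string for a cluster can be retrieved with the GetBootstrapBrokers API), and TLS settings beyond security_protocol are omitted for brevity.

import json
import time
from kafka import KafkaProducer

# Assumed MSK bootstrap broker and topic; substitute the values for your cluster.
BOOTSTRAP_BROKERS = "b-1.example.kafka.us-east-1.amazonaws.com:9094"
TOPIC = "cab-telemetry"

producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP_BROKERS,
    security_protocol="SSL",  # MSK clusters commonly require TLS in transit
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    key_serializer=lambda k: k.encode("utf-8"),
)

def publish_reading(cab_id, reading):
    """Send one telemetry event, keyed by cab identifier."""
    producer.send(TOPIC, key=cab_id, value=reading)

publish_reading("cab-042", {
    "engine_temp": 96.5,
    "speed_kmh": 54,
    "ts": int(time.time()),
})
producer.flush()  # block until buffered records are delivered

Keying each record by cab identifier keeps a vehicle's events in order on a single partition. While Kafka is a popular enterprise data streaming and messaging platform, it can be difficult to set up, scale, and manage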
Amazon MSK, Amazon EMR with Spark Streaming, and Amazon ES with Kibana dashboards are all managed services: AWS handles the undifferentiated heavy lifting of managing the infrastructure of the different clusters, which enables you to build your application using familiar open-source software with a few clicks. The next sections take a closer look at these services.

Amazon Managed Streaming for Apache Kafka (Amazon MSK)
Apache Kafka is an open-source platform that enables customers to capture streaming data such as clickstream events, transactions, IoT events, and application and machine logs. With this information, you can develop applications that perform real-time analytics, run continuous transformations, and distribute the data to data lakes and databases in real time. You can use Kafka as a streaming data store to decouple applications from producers and consumers and to enable reliable data transfer between the two components. While Kafka is a popular enterprise data streaming and messaging platform, it can be difficult to set up, scale, and manage in production.

Amazon MSK takes care of these management tasks and makes it easy to set up, configure, and run Kafka, along with Apache ZooKeeper, in an environment that follows best practices for high availability and security. You can still use Kafka's control-plane and data-plane operations to manage producing and consuming data. Because Amazon MSK runs and manages open-source Apache Kafka, it is easy for customers to migrate existing Apache Kafka applications to AWS and run them without changes to their application code.

Scaling
Amazon MSK offers scaling operations so that users can scale a cluster while it is running. When creating an Amazon MSK cluster, you specify the instance type of the brokers at cluster launch. You can start with a few brokers within an Amazon MSK cluster and then, using the AWS Management Console or the AWS CLI, scale up to hundreds of brokers per cluster. Alternatively, you can scale your clusters by changing the size or family of your Apache Kafka brokers, which gives you the flexibility to adjust your MSK cluster's compute capacity as your workloads change. Use the Amazon MSK Sizing and Pricing spreadsheet (file download) to determine the correct number of brokers for your Amazon MSK cluster; it provides an estimate for sizing an Amazon MSK cluster and compares the associated costs of Amazon MSK with a similar self-managed, EC2-based Apache Kafka cluster.

After creating the MSK cluster, you can increase the amount of EBS storage per broker, but you cannot decrease it. Storage volumes remain available during this scale-up operation. Amazon MSK offers two types of storage scaling operations: automatic scaling and manual scaling. Amazon MSK supports automatic expansion of your cluster's storage in response to increased usage through Application Auto Scaling policies. Your automatic scaling policy sets the target disk utilization and the maximum scaling capacity, and the storage utilization threshold is what triggers an automatic scaling operation. To increase storage using manual scaling, wait for the cluster to be in the ACTIVE state. Storage scaling has a cooldown period of at least six hours between events. Even though the operation makes additional storage available right away, the service performs optimizations on your cluster that can take 24 hours or more; the duration of these optimizations is proportional to your storage size. Amazon MSK also offers multi-Availability-Zone replication within an AWS Region to provide high availability.

Configuration
Amazon MSK provides a default configuration for brokers, topics, and Apache ZooKeeper nodes. You can also create custom configurations and use them to create new MSK clusters or to update existing clusters. When you create an MSK cluster without specifying a custom MSK configuration, Amazon MSK creates and uses a default configuration; for a list of default values, see the Apache Kafka Configuration documentation.

For monitoring purposes, Amazon MSK gathers Apache Kafka metrics and sends them to Amazon CloudWatch, where you can view them. The metrics that you configure for your MSK cluster are automatically collected and pushed to CloudWatch. Monitoring consumer lag enables you to identify slow or stuck consumers that aren't keeping up with the latest data available in a topic; when necessary, you can then take remedial actions, such as scaling or rebooting those consumers.
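The same scaling operations are exposed through the AWS SDKs. The following is a minimal sketch using the AWS SDK for Python (boto3); the cluster ARN and target values are hypothetical placeholders, and the call parameters should be checked against the current SDK documentation rather than treated as a definitive recipe.

```python
import boto3

# Hypothetical cluster ARN, for illustration only.
CLUSTER_ARN = "arn:aws:kafka:us-east-1:111122223333:cluster/abc1cabs/example-uuid"

kafka = boto3.client("kafka", region_name="us-east-1")

def current_version(arn: str) -> str:
    """Each update call requires the cluster's current version string."""
    return kafka.describe_cluster(ClusterArn=arn)["ClusterInfo"]["CurrentVersion"]

# Manual storage scaling: increase (never decrease) the EBS storage per broker.
kafka.update_broker_storage(
    ClusterArn=CLUSTER_ARN,
    CurrentVersion=current_version(CLUSTER_ARN),
    TargetBrokerEBSVolumeInfo=[{"KafkaBrokerNodeId": "ALL", "VolumeSizeGB": 1100}],
)

# Broker scaling: expand the cluster to a larger broker count. Updates run
# asynchronously, so in practice you wait for the cluster to return to the
# ACTIVE state (and re-read CurrentVersion) before issuing the next update.
kafka.update_broker_count(
    ClusterArn=CLUSTER_ARN,
    CurrentVersion=current_version(CLUSTER_ARN),
    TargetNumberOfBrokerNodes=6,
)
```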
Migrating to Amazon MSK
Migrating from on premises to Amazon MSK can be achieved by one of the following methods:

• MirrorMaker 2.0: MirrorMaker 2.0 (MM2) is a multi-cluster data replication engine based on the Apache Kafka Connect framework. MM2 is a combination of an Apache Kafka source connector and a sink connector. You can use a single MM2 cluster to migrate data between multiple clusters. MM2 automatically detects new topics and partitions while also ensuring that topic configurations are synced between clusters. MM2 supports migrating ACLs and topic configurations and performs offset translation. For more details related to migration, see Migrating Clusters Using Apache Kafka's MirrorMaker. MM2 suits use cases where topics, configurations, and offset translation need to be replicated automatically.

• Apache Flink: MM2 supports at-least-once semantics, so records can be duplicated at the destination, and consumers are expected to be idempotent in order to handle duplicate records. In scenarios where exactly-once semantics are required, customers can use Apache Flink, which provides an alternative that achieves exactly-once semantics. Apache Flink can also be used when data requires mapping or transformation actions before submission to the destination cluster. Apache Flink provides connectors for Apache Kafka, with sources and sinks that can read data from one Apache Kafka cluster and write to another. Apache Flink can be run on AWS by launching an Amazon EMR cluster or by running Apache Flink as an application using Amazon Kinesis Data Analytics.

• AWS Lambda: With support for Apache Kafka as an event source for AWS Lambda, customers can consume messages from a topic via a Lambda function. The AWS Lambda service internally polls for new records or messages from the event source and then synchronously invokes the target Lambda function to consume them. Lambda reads the messages in batches and provides the batches to your function in the event payload for processing. Consumed messages can then be transformed and/or written directly to your destination Amazon MSK cluster.

Amazon EMR with Spark Streaming
Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. Amazon EMR provides the capabilities of Spark and can be used to start Spark Streaming to consume data from Kafka. Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams.

You can create an Amazon EMR cluster using the AWS Command Line Interface (AWS CLI) or the AWS Management Console, selecting Spark and Zeppelin in the advanced configurations while creating the cluster. As shown in the following architecture diagram, data can be ingested from many sources, such as Apache Kafka and Kinesis Data Streams, and can be processed using complex algorithms expressed with high-level functions such as map, reduce, join, and window. For more information, see Transformations on DStreams. Processed data can be pushed out to file systems, databases, and live dashboards.

Figure: Real-time streaming flow from Apache Kafka to the Hadoop ecosystem

By default, Apache Spark Streaming uses a micro-batch execution model. However, since Spark 2.3, Apache has introduced a new low-latency processing mode called Continuous Processing, which can achieve end-to-end latencies as low as one millisecond with at-least-once guarantees. Without changing the Dataset/DataFrame operations in your queries, you can choose the mode based on your application requirements.
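To ground the consumption step, the following is a minimal PySpark Structured Streaming sketch that reads the telemetry topic and parses its JSON payload. The bootstrap brokers, topic name, and schema are hypothetical placeholders, the spark-sql-kafka connector is assumed to be on the cluster's classpath, and the console sink is used only for illustration; in the architecture described here it would be replaced by the Elasticsearch sink discussed below.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("telemetry-streaming").getOrCreate()

# Hypothetical schema for the vehicle telemetry JSON messages.
schema = StructType([
    StructField("vehicle_id", StringType()),
    StructField("engine_temp", DoubleType()),
    StructField("ts", StringType()),
])

# Read the stream from the MSK cluster (placeholder brokers and topic).
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "b-1.example.kafka.us-east-1.amazonaws.com:9092")
    .option("subscribe", "vehicle-telemetry")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka values arrive as bytes; cast to string and parse into columns.
telemetry = raw.selectExpr("CAST(value AS STRING) AS json").select(
    from_json(col("json"), schema).alias("t")
).select("t.*")

# Console sink for demonstration; swapping in the elasticsearch-hadoop sink
# is how the processed stream would reach defined indexes in Amazon ES.
query = (
    telemetry.writeStream.outputMode("append")
    .format("console")
    .option("checkpointLocation", "/tmp/telemetry-checkpoint")
    .start()
)
query.awaitTermination()
```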
Some of the benefits of Spark Streaming are:

• It brings Apache Spark's language-integrated API to stream processing, letting you write streaming jobs the same way you write batch jobs.
• It supports Java, Scala, and Python.
• It can recover both lost work and operator state (such as sliding windows) out of the box, without any extra code on your part.
• Because it runs on Spark, Spark Streaming lets you reuse the same code for batch processing, join streams against historical data, or run ad hoc queries on the stream state, so you can build powerful interactive applications, not just analytics.
• After the data stream is processed with Spark Streaming, the Elasticsearch Sink Connector can be used to write data to the Amazon ES cluster, and in turn Amazon ES with Kibana dashboards can be used as the consumption layer.

Amazon Elasticsearch Service with Kibana
Amazon ES is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis. Kibana is an open-source data visualization and exploration tool used for log and time-series analytics, application monitoring, and operational intelligence use cases. It offers powerful and easy-to-use features such as histograms, line graphs, pie charts, heat maps, and built-in geospatial support. Kibana provides tight integration with Elasticsearch, which makes Kibana the default choice for visualizing data stored in Elasticsearch. Amazon ES provides an installation of Kibana with every Amazon ES domain; you can find a link to Kibana on your domain dashboard on the Amazon ES console.

Summary
With Apache Kafka offered as a managed service on AWS, you can focus on consumption rather than on managing the coordination between brokers, which usually requires a detailed understanding of Apache Kafka. Features such as high availability, broker scalability, and granular access control are managed by the Amazon MSK platform. ABC1Cabs utilized these services to build a production application without needing infrastructure management expertise. They could focus on the processing layer, consuming data from Amazon MSK and propagating it onward to the visualization layer. Spark Streaming on Amazon EMR supports real-time analytics of the streaming data and publishes the results to Kibana on Amazon Elasticsearch Service for the visualization layer.

Conclusion
This document reviewed several scenarios for streaming workflows. In these scenarios, streaming data processing gave the example companies the ability to add new features and functionality. By analyzing data as it is created, you gain insight into what your business is doing right now. AWS streaming services enable you to focus on your application and make time-sensitive business decisions, rather than deploying and managing infrastructure.

Contributors
The following individuals and organizations contributed to this document:
• Amalia Rabinovitch, Sr. Solutions Architect, AWS
• Priyanka Chaudhary, Data Lake, Data Architect, AWS
• Zohair Nasimi, Solutions Architect, AWS
• Rob Kuhr, Solutions Architect, AWS
• Ejaz Sayyed, Sr. Partner Solutions Architect, AWS
• Allan MacInnis, Solutions Architect, AWS
• Chander Matrubhutam, Product Marketing Manager, AWS

Document versions
September 01, 2021: Updated for technical accuracy
September 07, 2017: First publication
|
General
|
consultant
|
Best Practices
|
Tagging_Best_Practices_Implement_an_Effective_AWS_Resource_Tagging_Strategy
|
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlTagging Best Practices Implement an Effective AWS Resource Tagging Strategy December 2018 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtml © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assuranc es from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtml Content s Introduction: Tagging Use Cases 1 Tags for AWS Console Organization and Resource Groups 1 Tags for Cost Allocation 1 Tags for Automation 1 Tags for Operations Support 2 Tags for Access Control 2 Tags f or Security Risk Management 2 Best Practices for Identifying Tag Requirements 2 Employ a Cross Functional Team to Identify Tag Requirements 2 Use Tags Consistently 3 Assign Owners to Define Tag Value Propositions 3 Focus on Required and Conditionally Required Tags 3 Start Small; Less is More 4 Best Practices for Naming Tags and Resources 4 Adopt a Standardized Approach for Tag Names 4 Standardize Names for AWS Resources 5 EC2 Instances 6 Other AWS Resour ce Types 6 Best Practices for Cost Allocation Tags 7 Align Cost Allocation Tags with Financial Reporting Dimensions 7 Use Both Linked Accounts and Cost Allocation Tags 8 Avoid Multi Valued Cost Allocation Tags 9 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtml Tag Everything 9 Best Practices for Tag Governance and Data Management 9 Integrate with Authoritative Data Sources 9 Use Compound Tag Values Judiciously 10 Use Automation to Proactively Tag Resources 12 Constrain Tag Values with AWS Service Catalog 12 Propagate Tag Values Across Related Resources 13 Lock Down Tags Used for Access Control 13 Remediate Untagged Resources 14 Implement a Tag Governance Process 14 Conclusion 15 Contributors 15 References 15 Tagging Use Cases 15 Align Tags with Financial Reporting Dimensions 16 Use Both Linked Accounts and Cost Allocation Tags 16 Tag Everything 16 Integrate with Authoritative Data Sources 16 Use Compound Tag Values Judiciously 16 Use Automation to Proactively Tag Resources 17 Constrain Tag Values with AWS Service Catalog 17 Propagate Tag Values Across Related Resources 17 Lock Down Tags Used for Access Control 17 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest 
practiceshtml Remediate Untagged Resources 17 Document Revisions 18 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtml Abstract Amazon Web Services allows customers to assign metadata to their AWS resources in the form of tags Each tag is a simple label consisting of a customer defined key and an optional value that can make it easier to manage search for and filter resources Although there are no inherent types of tags they enable customers to categorize resources by purpose owner environment or other criteria Without the use of tags it can become diff icult to manage your resources effectively as your utilization of AWS services grows However it is not always evident how to determine what tags to use and for which types of resources The goal of this whitepaper is to help you develop a tagging strategy that enables you to manage your AWS resources more effectively This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 1 Introduction: Tagging Use Cases Amazon Web Services allows customers to assign metadata to their AWS resources in the form of tags Each tag is a simple label consisting of a customer defined key and an optional value that can make it easier to manage se arch for and filter resources by purpose owner environment or other criteria AWS tags can be used for many purposes Tags for AWS Console Organization and Resource Groups Tags are a great way to organize AWS resources in the AWS Management Console You can configure tags to be displayed with resources and can search and filter by tag By default the AWS Management Console is organized by AWS service However the Resource Groups tool allows customers to create a custom console that organizes and consolidates AWS resources based on one or more tags or portions of tags Using this tool customers can c onsolidate and view data for applications that consist of multipl e services and resources in one place Tags for Cost Allocation AWS Cost Explorer and Cost and Usage Report support the ability to break down AWS costs by tag Typically customers use bu siness tags such as cost center business unit or project to associate AWS costs with traditional financial reporting dimensions within their organization However a cost allocation report can include any tag This allows customers to easily associate costs with technical or security dimensions such as specific applications environments or compliance programs Table 1 shows a partial cost allocation report Table 1: Partial cost allocation report Tags for Automation Resource or service specific tags are often used to filter resources during infrastructure automation activities Tags can be used to opt in to or out of automated tasks or to identify This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 2 specific versions of resources to archive update or delete For examp le many customers run automated start/stop scripts that turn off development environments during non business hours to reduce costs In this scenario Amazon Elastic Compute Cloud (Amazon EC2) instance tags are a simple way to identify the specific develo pment inst ances to opt into or out 
of this process Tags for Operations Support Tags can be used to integrate support for AWS resources into day today operations including IT Service Management (ITSM) processes such as Incident Management For example Le vel 1 support teams could use tags to direct workflow and perform business service mapping as part of the triage process when a monitoring system triggers an alarm Many customers also use tags to support processes such as backup/restore and operating syst em patching Tags for Access Control AWS Identity and Access Management ( IAM) policies support tag based conditions enabling customers to constrain permissions based on specific tags and their values For example IAM user or role permissions can include conditions to limit access to specific environments ( for example development test or production) or Amazon Virtual Private Cloud (Amazon VPC) networks based on their tags Tags for Security Risk Management Tags can be assigned to identify resources that require heightened security risk management practices for example Amazon EC2 instance s hosting applications that process sensitive or confidential data This can enable automated compliance checks to ensure that proper access controls are in place patc h compliance is up to date and so on The sections that fol low identify recommended best practices for developing a comprehensive tagging strategy Best Practices for Identifying Tag Requirements Employ a Cross Functional Team t o Identify Tag Requirements As noted in the introduction tags can be u sed for a variet y of purposes In order to develop a comprehensive strategy it’s best to assemble a cross functional team to identify tagging This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 3 requirements Tag stakeholders in an organization typically include IT Finance Information Security application owners cloud a utomation teams middleware and database administration teams and process owners for functions such as patching backup/restore monitoring job scheduling and disaster re covery Rather than meeting with each of these functional areas separately to ident ify their tagging needs conduct tagging requirements workshops with representation from all stakeholder groups so that each can hear the perspectives of the others and integrate their requirements more effectively into the overall strategy Use Tags Cons istently It’s important to employ a consistent approach in tagging your AWS resources If you intend to use tags for specific use cases as illustrated by the examples in the introduction you will need to rely on the consistent use of tags and tag values For example if a significant portion of your AWS resources are missing tags used for cost allocation your cost analysis and reporting process will be more complicated and time consuming and probably less accurate Likewise if resources are missing a t ag that identifies the presence of sensitive data you may have to assume that all such resources contain sensitive data as a precautionary measure A consistent approach is warranted even for tags identified as optional For example if you employ an opt in approach for automatically stopping development environments during non working hours identify a single tag for this purpose rather than allowing different teams or departments to use their own ; resulting in many diffe rent tags all serving the same purpose Assign 
Owners to Define Tag Value Propositions Consider tags from a cost/benefit perspective when deciding on a list of required tags While AWS does not charge a fee for the use of tags there may be indirect costs (for example the labor needed to assign and maintain correct tag values for each relevant AWS resource ) To ensure tags are useful i dentify an owner for each one The tag owner has the responsibility to clearly articulate its value proposition Having tag owners may help avoid unnecessary costs related to maintaining tags that are not used Focus on Required and Conditionally Required Tags Tags can be required conditionally required or optional Conditionally required tags are only mandatory under certai n circumstances (for example if an application processes sensitive data This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 4 you may require a tag to identify the corresponding data classification such as Personally Identifiable Information or Protected Health Information ) When identifying tagging requirements focus on required an d conditionally required tags Allow for optional tags as long as they conform to your tag naming and governance policies t o empower your organization to define new tags for unforeseen or bespoke application requ irements Start Small ; Less is More Tagging decisions are reversible giving you the flexibility to edit or change as needed in the future However there is one exception —cost allocation tags —which are included in AWS monthly cost allocation reports The data for these reports is based on AWS services utilization and captured monthly As a result when you introduce a new cost allocation tag it take s effect starting from that point in time The new tag will not apply to past cost allocation reports Tags help you identify sets of resources Tags can be removed when no longer needed A new tag can be applied to a set of resources in bulk however you need to identify the resources requiring the new tag and the value to assign those resources Start with a smaller set of tags that are known to be need ed and create new tags as the need arise s This approach is recommended over specifying an overabundance of tags that are anticipated to be needed in the future Best Practices for Naming Tags and Resources Adopt a Standardized Approach for Tag Names Keep in mind that names for AWS tags are case sensitive so ensure that they are used consistently For example the tags CostCenter and costcenter are different so one might be configured as a cos t allocation tag for financial analysis and reporting and the other one might not be Similarly the Name tag appears in the AWS Console for many resources but the name tag does not A number of tags are predefined by AWS or created automatically by various AWS services Many AWS defined tags are named using all lowercase with hyphen s separating words in the name and prefixes to identify the source service for the tag For example: This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 5 • aws:ec2spot:fleet request id identifies the Amazon EC2 Spot Instance Request that launched the instance • aws:cloudformation:stack name identifies the AWS CloudFormation stack that created the resource • lambda 
console:blueprint identifies blueprint used as a te mplate for an AWS Lambda function • elasticbeanstalk:environment name identifies the applic ation that created the resource Consider naming your tags using all lowercase with hyphens separating words and a prefix identifying the organization name or abbreviated name For example for a fictitious company named AnyCompany you might define tags such as : • anycompany :cost center to identify the internal Cost Center code • anycompany :environment type to identif y whether the environment is developmen t test or production • anycompany :application id to identify the application the resource was created for The prefix ensure s that tags are clearly identified as having been defined by your organization and not by AWS or a third party tool that you may be u sing Using all lowercase with hyphens for separators avoids confusion about how to capitalize a tag name For example anycompany :project id is simpler to remember than ANYCOMPANY :ProjectID anycompany :projectID or Anycompany :ProjectId Standardize Names for AWS Resources Assigning names to AWS resources is another important dimension of tagging that should be considered This is the value that is assigned to the predefined AWS Name tag (or in some cases by other means) and is mainly used in the AWS Management Console To understand the idea here it’s probably not helpful to have dozens of EC2 instances all named MyWebServer Developing a naming standard for AWS resources will help you keep your resources organized and can be used in AWS Cost and Usage Reports for grouping related resources together (see also Propagate Tag Values Across Related Resources below) This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 6 EC2 Instances Naming for EC2 instances is a good place to start Most organizations have already recognized the need to standardize on server hostnames and have existing practices in effect For example an organization might create hostnames based on several components such as physical location environment type (development test production ) role/ purpose application ID and a unique identifier: First note that the various components of a hostname construction process like this are great candidates for individual AWS tags – if they were important in the past they’ll likely be important in the future Even if the se elements are captured as separate individual tags i t’s still reasonable to continue to use this style of server naming to maintain consistency and substituting a different physical location code to represent AWS or an AWS region However if you’re moving away from treating your virtual instances like pets and more like cattle (which is recommended ) you’ll want to automate the assignment of server names to avoid having to assign them manually As an alternative you could simply use the AWS instance id (which is globally unique) for your server name s In either case if you ’re also creating DNS names for servers it’s a good idea to associate the value used for the Name tag with the Ful ly Qualified Domain Name ( FQDN) for the EC2 instance So if your instance name is phlpwcspweb3 the FQDN for the server could be phlpwcspweb3a nycompany com If you’d rather use the instance id for the Name tag then y ou should use that in your FQDN (for example i06599a3 8675anycompany com) Other AWS Resource Types For other types of 
AWS resources one approach is to adopt a dot notation consisting of the following name components : 1 account name prefix: for example production development shared services audit etc Philadelphia data center productionweb tier Customer Service Portalunique identifier phlpwcspweb3 = phl p w csp web3 hostname:This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 7 2 resource name: freeform field for the logical name of the resource 3 type suffix: for example subnet sg role policy kmskey etc See Table 2 for examples of tag names for other AWS resource types Table 2: Sample tag names for other AWS resource types Resource Type Example AWS Resource Name account name resource name type Subnet prod public az1subnet Production public az1 subnet Subnet services az2subnet Shared Services az2 subnet Security Group prod webserversg Production webserver sg Security Group devwebserversg Development webserver sg Security Group servicesdmzsg Shared Services dmz sg IAM Role prodec2 s3accessrole Production ec2s3 access role IAM Role drec2 s3accessrole Disaster Recovery ec2s3 access role KMS Key proda nycompany kmskey Production AnyCompan y kmskey Some resource types limit the character set that can be used for the name In such cases the dot character s can be replaced with hyphen s Best Practices for Cost Allocation Tags Align Cost Allocation Tags with Financial Reporting Dimensions AWS provides detailed cost reports and data extracts to help you monitor and manage your AWS spend When you designate specific tags as cost allocation tags in the AWS Billing and Cost Management Console billing data for AWS resources will include the m Remember b illing information is point intime data so cost allocation tags appear in your billing data only after This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 8 you have (1) specified them in the Billing and Cost Management Console and (2) tagged resources with them A natural place to identify the cost allocation tags you need is by looking at your current IT financial reporting practices Typically financ ial reporting covers a variety of dimensions such as business unit cost center product geographic area or department Aligning cost allocation tags with these financial reporting dimensions simplif ies and streamline s your AWS cost management Use Both Linked Accounts and Cost Allocation Tags AWS resources are c reated within accounts and billing reports and extracts contain the AWS account number for all billable resources regardless of whether or not the resources have tags You can have multiple accounts so creating different accounts for different financial entities within your organization is a way to clearly segregate costs AWS provides options for consolidated billing by associating payer accounts and linked accounts You can also use AWS Organizations to c reate master accounts with associated member accounts to take advantage of the additional centralized management and governance capabilities Organizations may design their account structure based on a number of factors including fiscal isolation administrative isolation access isolation blast radius isolation engineering and cost considerations ( refer to the References section for links to 
relevant articles on AWS Answers) Examples include: • Creating separate accounts for production and non product ion to segregate communications and access for these environments • Creating a separate account for shared services components and utilities • Creating a separate audit account to captur e log files for security forensics and monitoring • Creating separate accounts for disaster recovery Understand your organization ’s account structure when developing your tagging strategy since alignment of some of the financial reporting dimensions may already be captured by your account structure This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 9 Avoid Multi Valued Cost Allocation Tags For shared resources you may need to allocate costs to several applications projects or departments One appro ach to allocating costs is to create multi valued tags that contain a series of allocation codes possibly with corresponding allocation ratios for example: anycompany :cost center = 1600|02 5|1625|020|1731|050|1744|005 If designated as a cost allocation tag such tag values appear in your billing data However there are two challenges with this approach: (1) the data will have to be post processed to parse the multi valued tag value s and produce more detailed records a nd (2) you will need to establish a process to accurately set and maintain the tag values If possible consider identify ing existing cost sharing or chargeback mechanisms within your organization —or create new ones —and associate shared AWS resources to individual cost allocation codes defined by that mechanism Tag Everything When developing a tagging strategy be wary of focus ing only on the set of tags need ed for your EC2 instances Remember that AWS allows you to tag most types of resources that generat e costs on your billing reports Apply your cost allocation tags across all resource types that support tagging to get the most accurate data for your financial analysis and reporting Best Practices for Tag Governance and Data Management Integrate with Authoritative Data Sources You may decide to include tags on your AWS resources for which data is already available within your organization For example if you are using a Configuration Management Database (CMDB) you may already have a pr ocess in place to store and maintain metadata about your applications databases and environments Configuration Items (CIs) in your CMDB may have attributes including application or server owner technical issue resolver groups cost center or charge cod e data classification etc Rather than redundantly capturing and maintain ing such existing meta data in AWS tags consider integrating your CMDB with AWS The integration can be bi directional meaning that This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 10 data sourced from the CMDB can be copie d to tag s on AWS resources and data that can be sourced from AWS (for example IP addresses instance IDs and instance types) can be stored as attribu tes in your Configuration Items If you integrate your CMDB with AWS in this way extend your AWS tag naming convention to include an additional prefix to identify tags that have externally sourced values for example: • anycompany 
:cmdb:application id – the CMDB Configuration Item ID for th e application that owns the resource • anycompany :cmdb:cost center – the Cost Center code associated with the owning application sourced from the CMDB • anycompany :cmdb:application owner – the indiv idual or group that owns the application associated with this resource sourced from the CMDB This makes it clear that the tags are provided for convenience and that the authoritative source of the data is the CMDB Referencing authoritative data sources rather than redundantly maintaining the same data in mul tiple systems is a general data management best practice Use Compound Tag Values Judiciously Initially AWS limited the number of tags for a given resource to 10 result ing in some organizations combin ing several data elements into a single tag using de limiters to segregate the different attributes as in: EnvironmentType = Developm ent;Webserver;Tomcat 62;Tier 2 In 2016 the number of tags per resource was increased to 50 (with a few exceptions such as S3 objects ) Because of this it’ s generally recommended to follow good da ta management practice by including only one data attribute per tag However there are some situations where it may make sense to combine several related attributes together Some examples include: 1 For contact infor mation as shown in Table 3 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 11 Table 3: Examples of compound and single tag values Compound Tag Values anycompany :business contact = John Smith;johnsmith@a nycompany com ;+12015551212 anycompany :technical contact = Susan Jones ;suejones@a nycompany com ;+12015551213 Single Tag Values anycompany :busi ness contact name = John Smith anycompany :business conta ctemail = johnsmith@a nycompany com anycompany :busines scontact phone = +12015551212 anycompany :techni calcontact name = Susan Jones anycompany :technical cont actemail = suejones@a nycompany com anycompany :technica lcontact phone = +12015551213 2 For multi valued tags where a single attribute can have several homogenous values For example a resource support ing multiple applications might use a pipe delimited list: anycompany :cmdb: application ids = APP012|APP 045|APP320|APP450 However before introducing multi valued tags consider the source of the information and how the information will be used if captured in an AWS tag If there is an authoritative source for the data in question then any processes requiring the information may be better served by re ferencing the authoritative source directly rather than a tag Also as recommended in this paper avoid multivalued cost allocation tags if possible 3 For tags used for automation purposes Such tags typically capture opt in and automation status inform ation For example if you implement an AWS Lambda function to automatically back up EBS volumes by taking snapshots you might use a tag that contains a short JSON document: anycompany :auto snapshot = { “frequency”: “daily” “ lastbackup”: “2018 0419T21:18:00000+0000” } This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 12 There are many automation solutions available at AWS Labs ( https://githubcom/awslabs ) and the AWS Marketplace ( https://awsamazo 
ncom/marketplace ) that make use of compound tag value s in their implementation s Use Automation to Proactively Tag Resources AWS offers a variety of tools to help you implement proactive tag governance practices ; by ensuring that tags are consistently app lied when resources are created AWS CloudFormation provides a common language for provision ing all the infrastructure resources in your cloud environment CloudFormation templates are simple text file s that create AWS resources in an automated and secure manner When you create AWS resources using AWS CloudFormation templates you can use the CloudFormation Resource Tags property to apply t ags to certain resource types upon creation AWS Service Catalog allows organizations to create and manage catalogs o f IT services that are approved for use on AWS These IT services can include everything from virtual machine images servers software and databases to complete multi tier application environments AWS Service Catalog enables a self service capability for users allowing them to provision the services they need while also helping you to maintain consistent governance – including the application of required tags and tag values AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely Using IAM you can create and manage AWS users and groups and use permissions to allow or deny their access to AWS resources When you create IAM policies you can specify resource level p ermissions which include specific permissions for creating and deleting tags In addition you can include condition keys such as aws:RequestTag and aws:TagKeys which will prevent resources from being created if specific tags or tag values are not prese nt Constrain Tag Values with AWS Service Catalog Tags are not useful if they contain missing or invalid data values If tag values are set by automation the automation code can be reviewed tested and enhanced to ensure that valid tag values are used When tags are entered manually there is the opportunity for human error One way to reduce human error is by using AWS Service Catalog One of the key features of AWS Service Catalog is TagOption libraries With TagOption libraries you can specify requir ed tags as well as their range of allowable values AWS Service Catalog organizes your approved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 13 AWS service offerings or products into multiple portfolios You can use TagOption libraries at the portfolio level or even at the individual product level t o specify the range of allowable values for each tag Propagate Tag Values Across Related Resources Many AWS resources are related For example an EC2 instance may have several Elastic Block Storage (EBS) volumes and one or more Elastic Network Interfaces (ENIs) For each EBS volume many EBS snapshots may be created over time For consistency best practice is to propagate tags and tag values across related resources If resources are created by AWS CloudFormation templates they are created together in g roups called stacks from a common automation script which can be configured to set tag values across all resources in the stack For resources not created via AWS CloudFormation you can still implement automation to automatically propagate tags from rela ted resources For example when EBS snapshots are created you can copy any tags 
present on the EBS volume to the snapshot Similarly you can use CloudWatch Events to trigger a Lambda function to copy tags from an S3 bucket to objects within the bucket a ny time S3 objects are created Lock Down Tags Used for Access Control If you decide to use tags to supplement your access control policies you will need to ensure that you restrict access to creating deleting and modifying those tags For example you can create IAM policies that use conditional logic to grant access to (1) EC2 instances for an IAM group created for developers and (2) for EC2 instances tagged as development This could be further restricted to developers for a particular application based on a condition in the IAM policy that identifies the relevant application ID While the use of tags for this purpose is convenient it can be easily circumvented if users hav e the ability to modify tag values in order to gain access that they should not have Take preventative measures against this by ensur ing that your IAM policies include deny rules for actions such as ec2:C reateTags and ec2:DeleteTags Even with this preven tative measure IAM policies that grant access to resources based on tag values should be used with caution and approved by your Information Security team You may decide to use this approach for convenience in certain situations For example use strict I AM policies (without conditions based on tags) for restricting access to production environments ; but for development environments grant access to application specific resources via tags to help developers avoid inadvertently affecting each other’s work This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 14 Remediate Untagged Resources Automation and proactive tag management are important but are not always effective Many customers also employ reactive tag governance approaches to identify resourc es that are not properly tagged and correct them Reactive tag governance approaches include (1) programmatically using tools such as the Resource Tagging API AWS Config r ules and custom scripts ; or (2) manually using Tag Editor and detailed billing reports Tag Editor is a feature of the AWS Management Console that allows you to search for resources using a variety of search criteria and add modify or delete tags in bulk Search criteria can include resources with or without the presence of a particular tag or value The AWS Resource Tagging API allows you to perform these same functions programmatically AWS Config enables you to assess audit and evaluate the configurations of your AWS resources AWS Config continuously monitors and records your AWS resource configurations and allows you to automate the eva luation of recorded configurations against optimal configurations With AWS Config you can create rules to check resources for required tags and it will continuously monitor your resources against those rules Any non compliant resources are identified on the AWS Config Dashboard and via notifications In the case where resources are initially tagged properly but their tags are subsequently changed or deleted AWS Config will find them for you You can use AWS Config with CloudWatch Events to trigger autom ated responses to missing or incorrect tags An extreme example would be to automatically stop or quarantine non compliant EC2 instances The most suitable governance approach for a n organization 
primarily depends on its AWS maturity model but even experi enced organizations use a combination of proactive and reactive governance techniques Implement a Tag Governance Process Keep in mind that once you’ve settled on a tagging strategy for your organization you will need to adapt it as you progress through your cloud journey In particular it’s likely that requests for new tags will surface and need to be addressed A basic tag governance process should include : • impact analysis approval and implementation for requests to add change or deprecate tags ; This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 15 • application of existing tagging requirements as new AWS services are adopted by your organization; • monitoring and remediation of missing or incorrect tags; and • periodic reporting on tagging metrics and key process indicators Conclusion AWS resource tags can be used for a wide variety of purposes from implementing a cost allocation process to supporting automation or authorizing access to AWS resources Implementing a tagging strategy can be challenging for some organizations due to th e number of stakeholder groups involved and considerations such as data sourcing and tag governance This white paper recommends a way forward based on a set of best practices to get you started quickly with a tagging strategy that you can adapt as your organization’s needs evolve over time Contributors The following individuals and organizations contributed to this document: Brian Yost Senior Consultant AWS Professional Services References Tagging Use Cases • AWS Tagging Strategies • Tagging Your Amazon EC2 Resources • Centralized multi account and multi Region patching with AWS Systems Manager Automation This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 16 Align Tags with Financial Repor ting Dimensions • Monthly Cost Allocation Report • User Defined Cost Allocation Tags • Cost Allocation for EBS Snapshots • AWS Generated Cost Allocation Tags Use Both Linked Acc ounts and Cost Allocation Tags • Consolidated Billing for Organizations • AWS Multiple Account Billing Strategy • AWS Multiple Account Security Strategy • What Is AWS Organi zations? Tag Everything User Defined Cost Allocation Tags Integrate with Authoritative Data Sources ITIL Asset and Configuration Management in the Cloud Use C ompound Tag Values Judiciously Now Organize Your AWS Resources by Using up to 50 Tags per Resource This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 17 Use Automation to Proactively Tag Resources • How c an I use IAM policy tags to restrict how an EC2 instance or EBS volume can be created? 
• How to Automatically Tag Amazon EC2 Resources in Response to API Events • Supported Resource Level Permissions for Amazon EC2 API Actions: Resource Level Permissions for Tagging • Example Policies for Working with the AWS CLI or an AWS SDK: Tagging Resources • Resource Tag Constrain Tag Values with AWS Service Catalog • AWS Service Catalog Announces AutoTags for Automatic Tagging of Provisioned Resources • AWS Service Catalog TagOption Library Propagate Tag V alues Across Related Resources CloudWatch Events for EBS Snapshots Lock Dow n Tags Used for Access Control • AWS Services That Work with IAM • How do I create an IAM policy to control access to Amazon EC2 resources using tags? • Controlling Access to Amazon VPC Resources Remediate Untagged Resources • Resource Groups and Tagging for AWS This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 18 • AWS Resource Tagging API Document Revisions Date Description December 2018 First Publication
|
General
|
consultant
|
Best Practices
|
The_Total_Cost_of_Non_Ownership_of_a_NoSQL_Database_Cloud_Service
|
The Total Cost of (Non) Ownership of a NoSQL Database Cloud Service
Jinesh Varia and Jose Papo
March 2012

This paper has been archived. To find the latest technical content about the AWS Cloud, go to the AWS Whitepapers & Guides page on the AWS website: https://aws.amazon.com/whitepapers/
|
General
|
consultant
|
Best Practices
|
U.S._Securities_and_Exchange_Commissions_SEC_Office_of_Compliance_Inspections_and_Examinations_OCIE_Cybersecurity_Initiative_Audit_Guide
|
ArchivedUS Securities and Exchange Commissi on’s (SEC) Office of Compliance Insp ections and Examinations (OCIE) Cybersecurity Initi ative Audit Guide October 2015 This paper has been archived For the latest technical guidance on Security and Compliance refer to https://awsamazoncom/architecture/security identitycompliance/ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 2 of 21 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 3 of 21 Contents Executive Summary 4 Approaches for using AWS Audit Guides 4 Examiners 4 AWS Provided Evidence 4 OCIE Cybersecurity Audit Checklist for AWS 6 1 Governance 6 2 Network Configuration and Management 8 3 Asset Configuration and Management 9 4 Logical Access Control 10 5 Data Encryption 12 6 Security Logging and Monitoring 13 7 Security Incident Response 14 8 Disaster Recovery 15 9 Inherited Controls 16 Appendix A: References and Further Reading 18 Appendix B: Glossary of Terms 19 Appendix C: API Calls 20 ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 4 of 21 Executive Summary This AWS US Securities and Exchange Commission’s (SEC) Office of Compliance Inspections and Examinations (OCIE) Cybersecurity Initiative audit guide has been designed by AWS to guide financial institutions which are subject to SEC audits on the use and security architecture of AWS services This document is intended for use by AWS financial institution customers their examiners and audit advisors to understand the scope of the AWS services provide guidance for implementation and discuss examination when using AWS services as part of the financial institutions environment for customer data Approaches for using AWS Audit Guides Examiners When assessing organizations that use AWS services it is critical to understand the “ Shared Responsibility” model between AWS and the customer The audi t guide organizes the requirements into common security program controls and control areas Each control references the applicable audit requirements In general AWS services should be treated similar to onpremise infrastructure services that have been traditionally used by customers for their operating services and applications Policies and processes that apply to devices and servers should also apply when those functions are supplied by AWS services Controls pertaining solely to policy or pr ocedure generally are entirely the responsibility of the customer Similarly management of access to AWS services either via the AWS Console or Command Line API should be treated like other privileged administrator access See the appendix and referenced points for more 
information AWS Provided Evidence AWS services are regularly assessed against industry standards and requirements In an attempt to support a variety of industries including federal agencies retailers international organizations health care providers and financial institutions AWS elects to have a variety of assessments performed ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 5 of 21 against the services and infrastructure For a complete list and information on assessment performed by third parties please refer to AWS Compliance web site Archived Amazon Web Services – OCIE Cybersecurity Audit Guide September 2015 Page 6 of 21 OCIE Cybersecurity Audit Checklist for AWS The AWS compliance program ensures that AWS services are regularly audited against applicable standards Some control statements may be satisfied by the customer’s use of AWS (for instance Physical access to sensitive data) However most controls have either shared responsibilities between AWS and the customer or are entirely the customer’s responsibility This audit checklist describes the customer responsibilities specific to the OCIE Cybersecurity Initiative when utilizing AWS services 1 Governance Definition: Governance includes the elements required to provide senior management assurance that its direction and intent are reflected in the security posture of the customer This is achieved by utilizing a structured approach to implementing an information security program For the purposes of this audit plan it means understanding which AWS services the customer has purchased what kinds of systems and information the customer plan s to use with the AWS service and what policies procedures and plans apply to these services Major audit focus: Un derstand what AWS services and resources are being used by the customer and ensure that the customer ’s security or risk management program has taken into account the ir use of the public cloud environment Audit approach: As part of this audit determine who within the customer’s organization is an AWS account owner and resource owner and what kinds of AWS services and resources they are using Verify that the customer’s policies plans and procedures include cloud concepts and that cloud is included in t he scope of the customers audit program Governance Checklist Checklist Item Documentation and Inventory Verify that the customer ’s AWS network is fully documented and all AWS critical systems are included in their inventory docume ntation with limited access to this documentation Review AWS Config for AWS resource inventory and configuration history of resources (Example API Call 1) Ensure that resources are appropriately tagged with a customer’s application and/or customer data ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 7 of 21 Checklist Item Review application architecture to identify data flows planned connectivity between application components and resources that contain customer data Review all connectivity between the custome r’s network and AWS Platform by reviewing the following: VPN connections where the customers on premise Public IPs are mapped to customer gateways in any VPCs owned by the Customer (Example API Call 2 & 3) Dire ct Connect Private Connections which may be mapped to 1 or more VPCs owned by the customer (Example API Call 4 ) Risk Assessment Ensure the customer’s risk assessment for AWS services includes potential cybersecurity threats vulnerabilities and business consequences Verify that AWS services 
were included in the customer’s risk assessment and privacy impact assessment Verify that system characterization was documented for AWS services as part of the risk assessment to identify and rank information assets IT Security Program and Policy Verify that the customer includes AWS services in its security policies and procedures including AWS account level best practices as highlighted within the AWS service Trusted Advisor which provides best practice and guidance across 4 topics – Security Cost Performance and Fault Tolerance Review the customer’s information securit y policies and ensure that it includes AWS servic es and reflects the Identify Theft Red Flag Rules (17 CFR § 248 — Subpart C —Regulation S ID) Confirm that the customer has assigned an employee (s) as an authority for the use and security of AWS services and there are defined roles for those noted key roles including a Chief Information Security Officer Note any published cybersecurity risk management process standards the customer has used to model their information security architecture and processes Ensure the customer maintains documentation to supp ort the audits conducted for their AWS services including its review of AWS third party certifications Verify that the customer’s internal training records includes AWS security such as Amazon IAM usage Amazon EC2 Security Groups and remote access to Amazon EC2 instances Confirm that the customer maintains a cybersecurity response policy and training for AWS services Note any insurance specifically related to the customers use of AWS services and any claims related to losses and expenses attributed to cybersecurity events as a result ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 8 of 21 Checklist Item Service Provider Oversight Verify that the customer’s contract with AWS includes a requirement to implement and maintain privacy and security safeguards for cybersecurity requirements 2 Network Configuration and Management Definition: Network management in AWS is very similar to network management onpremises except that network components such as firewalls and routers are virtual Customers must ensure that their network architecture follows the security requirements of their organization including the use of DMZs to separate public and private (untrusted and trusted) resources the segregation of resources using subnets and routing tables the secure configuration of DNS whether additional transmission protection is needed in the form of a VPN and whether to limit inbound and outbound traffic Customers who must perform monitoring of their network can do so using host based intrusion detection and monitoring systems Major audit focus: Missing or inappropriately configured security controls related to external access/network security that could result in a security exposure Audit approach: Understand the network architecture of the customer’s AWS resources and how the resources are configured to allow external access from the public Internet and the customer ’s private networks Note: AWS Trusted Advisor can be leveraged to validate and verify AWS configurations settings Network Configuration and Management Checklist Checklist Item Network Controls Identify how network seg mentation is applied within the customers AWS environment Review AWS Security Group implementation AWS Direct Connect and Amazon VPN configuration for proper implementation of network segmentation and ACL and firewall setting s on AWS services (Example API Call 5 8) Verify that 
the customer has a procedure for granting remote internet or VPN access to employees for AWS Console access and remote access to Amazon EC2 networks and sy stems ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 9 of 21 Checklist Item Review the following to ensure the customer maintains an environment for testing and development of software and applications that is separate from its business environment: VPC isolation is in place between business environment and environments us ed for test and development VPC peering connectivity is between VPCs This ensure s network isolation is in place between VPCs Subnet isolation is in place between business environment and environments used for test and development NACLs are associated with Subnets in which Business and Test/Development environments are located to ensure network isolation is in place subnets Amazon EC2 instance isolation is in place between the business environment and environments used for test and development Security Groups associated to 1 or more Instances within the Business Test or Development environments ensure network isolation between Amazon EC2 instances Review the customer’ s DDoS layered defense solution running that operates directly on AWS which are leveraged as part of a DDoS solution such as: Amazon CloudF ront configuration Amazon S3 configuration Amazon Route 53 ELB configuration The above serv ices do not use Customer owned Public IP addresses and offer DoS AWS inherited DoS mitigation features Usage of Amazon EC2 for Proxy or WAF Further guidance can be found within the “ AWS Best Practices for DDoS Resiliency Whitepaper ” Malicious Code Controls Assess the implementation and management of anti malware for Amazon EC2 instances in a similar manner as with physical systems 3 Asset Configuration and Management Definition: AWS customers are responsible for maintaining the security of anything they install on or connect to their AWS resources Secure management of the customers ’ AWS resources means knowing what resources the customer is using (asset inventory) securely configuring the guest OS and applications on the customers resources (secure configuration settings patching and antimalware) and controlling changes to the customers resources (change management) ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 10 of 21 Major audit focus: Customers must manage their operating system and application security vulnerabilities to protect the security stability and integrity of the asset Audit approach: Validate the customers OS and applications are designed configured patched and hardened in accordance to the customer’s policies procedures and standards All OS and application management practices can be common between onpremise and AWS systems and services Asset Configuration and Management Checklist Checklist Item Assess configuration management Verify the use of the customer’s configuration management practices for all AWS system components and validate that these standards meet the customer baseline configurations Review the customer’s procedu re for conducting a specialized wipe procedure prior to deleting the volume for compliance with their established requirements Review the customers Identity Access Management system which may be used to allow authenticated access to the customer’s applica tions hosted on top of AWS services Confirm the customer completed penetration testing including the scope for the tests Change Management Controls Ensure the customer’s 
use of AWS services follows the same change c ontrol processes as internal series Verify that AWS services are included within the customer’s internal patch management process Review documented process es for c onfiguration and patching of Amazon EC2 instances: Amazon Machine Images (AMIs) (Example API Call 9 10) Operating systems Applications Review the customer’s API Calls for in scope services for delete calls to ensure the customer has properly disposed of IT assets 4 Logical Access Control Definition: Logical access controls determine not only who or what can have access to a specific system resource but the type of actions that can be performed on the resource (read write etc) As part of controlling access to AWS ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 11 of 21 resources users and processes must present credentials to confirm that they are authorized to perform specific functions or have access to specific resources The credentials required by AWS vary depending on the type of service and the access method and include passwords cryptographic keys and certificates Access to AWS resources can be enabled through the AWS account individual AWS Identify and Access Management (IAM) user accounts created under the AWS account or identity federation with the customer’s corporate directory (single sign on) AWS IAM enables a customer ’s users to securely control access to AWS servi ces and resources Using IAM a customer can create and manage AWS users and groups and use permissions to allow and deny their permissions to AWS resources Major audit focus: This portion of the audit focuses on identifying how users and permissions are set up in AWS for the services being used by the customer It is also important to ensure that the credentials associated with all of the customer’s AWS accounts are being managed securely by the customer Audit approach: Validate that permissions for AWS assets are being managed in accordance with organizational policies procedures and processes Note: AWS Trusted Advisor can be leveraged to validate and verify IAM Users Groups and Role configurations Logical Access Control Checklist Checklist Item Access Management Authentication and Authorization Ensure there are internal policies and procedures for managing access to AWS services and Amazon EC2 instances Ensur e the customer documents their use and configuration of AWS access controls examples and options outlined below : Description of how Amazon IAM is used for access management List of controls that Amazon IAM is used to manage – Resource management Securi ty Groups VPN object permissions etc Use of native AWS access controls or if access is managed through federated authentication which leverages the open standard Security Assertion Markup Language (SAML) 20 List of AWS Accounts Roles Groups and Us ers Policies and policy attachments to users groups and roles (Example API Call 11) A description of Am azon IAM accounts and roles and monitoring methods A description and configuration of systems within EC2 ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 12 of 21 Checklist Item Remote Access Ensure there is an approval process logging process or controls to prevent unauthorized remote access Note: All access to AWS and Amazon EC2 instances is “remote access” by definition unless Direct Connect has been co nfigured Review the customer’s process for preventing unauthorized access which may include: AWS CloudT rail for logging of Service level API 
calls AWS CloudW atch logs to meet logging objectives IAM Policies S3 Bucket Policies Security Groups for con trols to prevent unauthorized access Review the customer’s connectivity between the customer’s network and AWS: VPN Connection between VPC and Firms network Direct Connect (cross connect and private interfaces) between customer and AWS Defined Secu rity Groups Network Access Control Lists and Routing tables in order to control access between AWS and the customer’s network Personnel Control Ensure that the customer restricts users to those AWS services strictly required for thei r business function (Example API Call 12) Review the type of access control the customer has in place as it relates to AWS services AWS access control at an AWS level – using IAM with Tagging to control management of Amazon EC2 instances (start/stop/terminate) within networks Customer Access Control – using the customer IAM (LDAP solution) to manage access to resources which exist in networks at the Operating System / Application layers Network Access control – using AWS Security Groups(SGs) Network Access Control Lists (NACLs) Routing Tables VPN Connections VPC Peering to control network access to resources within customer owned VPCs 5 Data Encryption Definition: Data stored in AWS is secure by default; only AWS owners have access to the AWS resources they create However some customers who have sensitive data may require additional protection by encrypting the data when it is stored on AWS Only Amazon S3 service currently provides an automated server side encryption function in addition to allowing customers to encrypt on the customer side before the data is stored For other AWS data storage options the customer must perform encryption of the data ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 13 of 21 Major audit focus: Data at rest should be encrypted in the same way as the customer protects onpremise data Also many security policies consider the Internet an insecure communications medium and would require the encryption of data in transit Improper protection of customers ’ data could create a security exposure for the customer Audit approach: Understand where the data resides and validate the methods used to protect the data at rest and in transit (also referred to as “data in flight”) Note: AWS Trusted Advisor can be leveraged to validate and verify permissions and access to data assets Data Encryption Checklist Checklist Item Encryption Controls Ensure there are appropriate controls in place to protect confidential customer information in transport while using AWS services Review methods for connection to AWS Console management A PI S3 RDS and Amazon EC2 VPN for enforcement of encryption Review internal policies and procedures for key management including AWS services and Amazon EC2 instances Review encryption methods used if any to protect customer PINs at Rest – AWS offer s a number of key management services such as KMS AWS CloudHSM and Server Side Encryption for S3 which could be used to assist with data at rest encryption (Example API Call 13 15) 6 Security Logging and Monitoring Definition: Audit logs record a variety of events occurring within a customer ’s information systems and networks Audit logs are used to identify activity that may impact the security of those systems whether in realtime or after the fact so the pro per configuration and protection of the logs is important Major audit focus: Systems must be logged and monitored just as they are for 
onpremise systems If AWS systems are not included in the overall company security plan critical systems may be omitted from scope for monitoring efforts Audit approach: Validate that audit logging is being performed on the guest OS and critical applications installed on the customers Amazon EC2 instances and that implementation is in alignment with the customer’s policies and procedures especially as it relates to the storage protection and analysis of the logs ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 14 of 21 Security Logging and Monitoring Checklist: Checklist Item Logging Assessment Trails and Monitoring Review logging and monitoring policies and procedures for adequacy retention defined thresholds and secure maintenance specifically for detecting unauthorized activity within AWS services Review the customer’s logging and monitoring policies and procedures and ensure their inc lusion of AWS services including Amazon EC2 instances for security related events Verify that logging mechanisms are configured to send logs to a centralized server and ensure that for Amazon EC2 instances the proper type and format of logs are retain ed in a similar manner as with physical systems For customers usi ng AWS CloudWatch review the customer’s process and record of their use of network monitoring Ensure the customer utilizes analytics of events to improve their de fensive measures and pol icies Review AWS IAM Credential report for unau thorized users AWS Config and resource tagging for unauthorized devices (Example API Call 16) Confirm the customer aggregates and correlates event data from multipl e sources The customer may use AWS services such as: a) VPC Flow logs to identify accepted/rejected network packets entering VPC b) AWS CloudT rail to identify authenticated and unauthenticated API calls to AWS services c) ELB Logging – Load balancer logging d) AWS CloudF ront Logging – Logging of CDN distributions Intrusion Detection and Response Review host based IDS on Amazon EC2 instances in a similar manner as with physical systems Review AWS provided evidence on where information on intru sion detection processes can be reviewed 7 Security Incident Response Definition: Under a Shared Responsibility Model security events may be monitored by the interaction of both AWS and AWS customers AWS detects and responds to events impacting the hypervisor and the underlying infrastructure Customers manage events from the guest operating system up through the application The customer should understand incident response responsibilities and adapt existing security monitoring/alerting/audit tools and processes for their AWS resources ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 15 of 21 Major audit focus: Security events should be monitored regardless of where the assets reside The auditor can assess consistency of deploying incident management controls across all environments and validate full coverage through testing Audit approach: Assess existence and operational effectiveness of the incident management controls for systems in the AWS environment Security Incident Response Checklist: Checklist Item Incident Reporti ng Ensure that the customer’s incident response plan and policy for cybersecurity incidents includes AWS services and addresses controls that mitigate cybersecurity incidents and recovery Ensure the customer is leveraging existing incident monitoring to ols as well as AWS available tools to monitor the use of AWS services Verify 
that the Incident Response Plan undergoes a periodic review and that changes related to AWS are made as needed Note if the Incident Response Plan has customer notification pro cedures and how the customer addresses responsibility for losses associated with attacks or instructions impacting customers 8 Disaster Recovery Definition: AWS provides a highly available infrastructure that allows customers to architect resilient applications and quickly respond to major incidents or disaster scenarios However customers must ensure that they configure systems that require high availability or quick recovery times to take advantage of the multiple Regions and Availability Zones that AWS offers Major audit focus: An unidentified single point of failure and/or inadequate planning to address disaster recovery scenarios could result in a significant impact to the customer While AWS provides service level agreements (SLAs) at the individual instance/service level these should not be confused with a customer’s business continuity (BC) and disaster recovery (DR) objectives such as Recovery Time Objective (RTO) Recovery Point Objective (RPO) The BC/DR parameters are associated with solution design A more resilient design would often utilize multiple components in different AWS availability zones and involve data replication ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 16 of 21 Audit approach: Understand the DR strategy for the customer’s environment and determine the faulttolerant architecture employed for the customer ’s critical assets Note: AWS Trusted Advisor can be leveraged to validate and verify some aspects of the customer’s resiliency capabilities Disaster Recovery Checklist : Checklist Item Business Continuity Plan (BCP) Ensure there is a comprehensive BCP for A WS services utilized that addresses mitigation of the effects of a cybersecurity incident and/or recover y from such an incident Within the Plan ensure that AWS is included in the customer’s emergency preparedness and crisis management elements senior m anager oversight responsibilities and the testing plan Backup and Storage Controls Review the customer’s periodic test of their backup system for AWS services (Example API Call 17 18) Review i nventory of data backed up to AWS services as off site backup 9 Inherited Controls Definition: Amazon has many years of experience in designing constructing and operating largescale datacenters This experience has been applied to the AWS platform and infrastructure AWS datacenters are housed in nondescript facilities Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance intrusion detection systems and other electronic means Authorized staff must pass twofactor authentication a minimum of two times to access datacenter floors All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff AWS only provides datacenter access and information to employees and contractors who have a legitimate business need for such privileges When an employee no longer has a business need for these privileges his or her access is immediately revoked even if they continue to be an employee of Amazon or Amazon Web Services All physical access to datacenters by AWS employees is logged and audited routinely Major audit focus: The purpose of this audit section is to demonstrate that the customer conducted the appropriate due 
diligence in selecting service providers ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 17 of 21 Audit approach: Understand how the customer can request and evaluate thirdparty attestations and certifications in order to gain reasonable assurance of the design and operating effectiveness of control objectives and controls Inherited Controls Checklist Checklist Item Physical Security & Environmental Controls Review the AWS provided evidence for details on where information on intrusion detection processes can b e reviewed that are managed by AWS for physical security controls ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 18 of 21 Appendix A: References and Further Reading 1 Amazon Web Services: Introduction to AWS Security https://d0awsstaticcom/whitepapers/Security/Intro_to_AWS_Security pdf 2 Amazon Web Services Risk and Compliance Whitepaper – https://d0awsstaticcom/whitepapers/compliance/AWS_Risk_and_Com pliance_Whitepaperpdf 3 Using Amazon Web Services for Disaster Recovery http://d36cz9buwru1ttcloudfrontnet/AWS_Disaster_Recoverypdf 4 Identity federation sample application for an Active Directory use case http://awsamazoncom/code/1288653099190193 5 Single Signon with Windows ADFS to Amazon EC2 NET Applications http://awsamazoncom/articles/3698?_encoding=UTF8&queryArg=sear chQuery&x=20&y=25&fromSearch=1&searchPath=all&searchQuery=iden tity%20federation 6 Authenticating Users of AWS Mobile Applications with a Token Vending Machine http://awsamazoncom/articles/4611615499399490?_encoding=UTF8& queryArg=searchQuery&fromSearch=1&searchQuery=Token%20Vending %20machine 7 ClientSide Data Encryption with the AWS SDK for Java and Amazon S3 http://awsamazoncom/articles/2850096021478074 8 AWS Command Line Interface – http://docsawsamazoncom/cli/latest/userguide/clichapwelcomehtml 9 Amazon Web Services Acceptable Use Policy http://awsamazoncom/aup/ ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 19 of 21 Appendix B: Glossary of Terms API: Application Programming Interface (API) in the context of AWS These customer access points are called API endpoints and they allow secure HTTP access (HTTPS) which allows you to establish a secure communication session with your storage or compute instances within AWS AWS provides SDKs and CLI reference which allows customers to programmatically manage AWS services via API Authentication: Authentication is the process of determining whether someone or something is in fact who or what it is declared to be Availability Zone: Amazon EC2 locations are composed of regions and Availability Zones Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive low latency network connectivity to other Availability Zones in the same region EC2: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud It is designed to make webscale cloud computing easier for developers Hypervisor: A hypervisor also called Virtual Machine Monitor (VMM) is software/hardware platform virtualization software that allows multiple operating systems to run on a host computer concurrently IAM: AWS Identity and Access Management (IAM) enables a customer to create multiple Users and manage the permissions for each of these Users within their AWS Account Object: The fundamental entities stored in Amazon S3 Objects consist of object data and metadata The data portion is 
opaque to Amazon S3. The metadata is a set of name-value pairs that describe the object. These include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. The developer can also specify custom metadata at the time the object is stored.

Service: Software or computing ability provided across a network (e.g., EC2, S3, VPC, etc.)

Appendix C: API Calls

The AWS Command Line Interface is a unified tool to manage your AWS services. Read more: http://docs.aws.amazon.com/cli/latest/reference/index.html#cli-aws and http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html

1 List all resources with tags:
aws ec2 describe-tags
http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-tags.html

2 List all Customer Gateways on the customer's AWS account:
aws ec2 describe-customer-gateways --output table

3 List all VPN connections on the customer's AWS account:
aws ec2 describe-vpn-connections

4 List all customer Direct Connect connections:
aws directconnect describe-connections
aws directconnect describe-interconnects
aws directconnect describe-connections-on-interconnect
aws directconnect describe-virtual-interfaces

5 List all Customer Gateways on the customer's AWS account:
aws ec2 describe-customer-gateways --output table

6 List all VPN connections on the customer's AWS account:
aws ec2 describe-vpn-connections

7 List all customer Direct Connect connections:
aws directconnect describe-connections
aws directconnect describe-interconnects
aws directconnect describe-connections-on-interconnect
aws directconnect describe-virtual-interfaces

8 Alternatively, use the Security Group focused CLI:
aws ec2 describe-security-groups

9 List AMIs currently owned/registered by the customer:
aws ec2 describe-images --owners self

10 List all instances launched with a specific AMI:
aws ec2 describe-instances --filters "Name=image-id,Values=XXXXX"
(where XXXXX = image-id value, e.g., ami-12345a12)

11 List IAM Roles/Groups/Users:
aws iam list-roles
aws iam list-groups
aws iam list-users

12 List policies assigned to Groups/Roles/Users:
aws iam list-attached-role-policies --role-name XXXX
aws iam list-attached-group-policies --group-name XXXX
aws iam list-attached-user-policies --user-name XXXX
(where XXXX is a resource name within the customer's AWS account)

13 List KMS keys:
aws kms list-aliases

14 List key rotation policy:
aws kms get-key-rotation-status --key-id XXX
(where XXX = key ID in the AWS account targeted, e.g., us-east-1)

15 List EBS volumes encrypted with KMS keys:
aws ec2 describe-volumes --filters "Name=encrypted,Values=true"

16 Credential report:
aws iam generate-credential-report
aws iam get-credential-report

17 Create snapshot/backup of an EBS volume:
aws ec2 create-snapshot --volume-id XXXXXXX
(where XXXXXXX = ID of a volume within the AWS account)

18 Confirm snapshot/backup completed:
aws ec2 describe-snapshots --filters "Name=volume-id,Values=XXXXXX"
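The CLI calls above can also be scripted so that audit evidence is collected the same way on every engagement. The following Python sketch is illustrative only and is not part of the OCIE guide; it assumes the boto3 SDK, credentials with read-only EC2 and IAM permissions, a default region configured locally, and it retrieves only the first page of each result set.

# Collect a subset of the Appendix C evidence with Python and boto3 (illustrative sketch).
import boto3

ec2 = boto3.client("ec2")
iam = boto3.client("iam")

# Example API Call 1: list resources with tags (first page only; add pagination for full inventories).
for tag in ec2.describe_tags()["Tags"]:
    print(tag["ResourceId"], tag["ResourceType"], tag["Key"], tag["Value"])

# Example API Call 8: summarize security groups and their inbound rule counts.
for group in ec2.describe_security_groups()["SecurityGroups"]:
    print(group["GroupId"], group["GroupName"], len(group["IpPermissions"]), "inbound rules")

# Example API Call 11: list IAM users.
for user in iam.list_users()["Users"]:
    print(user["UserName"], user["CreateDate"])

# Example API Call 16: request and download the IAM credential report.
iam.generate_credential_report()       # report generation is asynchronous
report = iam.get_credential_report()   # in practice, retry until the report is ready
print(report["Content"].decode("utf-8"))

Saving this output together with the date of collection gives the auditor a repeatable evidence trail for the checklist items that reference these example API calls.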
|
General
|
consultant
|
Best Practices
|
Understanding_T2_Standard_Instance_CPU_Credits
|
Understanding T2 Standard Instance CPU Credits March 8 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents AWS’s current product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS’s products or services are provid ed “as is” without warranties representations or conditions of any kind whether express or implied AWS’s responsibilities and liabilities to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agree ment between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Earned CPU Credits 1 Launch CPU Credits 1 CPU Utilization and CPU Credits 2 CPU Credit Earn Rates and CPU Utilization Rates 3 CPU Credit Earn Rates and Instance Sizes 4 Baseline Rates and Instance Sizes 5 CPU Credit Accrual Limits and the Discarding of Credits 6 The Five Phases in the CPU Credit System 7 Example: Tracking CPU Credit Usage 8 Period A — Balance at Maximum 9 Period B — Balance Stable 10 Period C — Balance Decreasing 11 Period D — Balance Decreasing 12 Period E — Balance Stable 13 Period F — Balance Decreases to Almost Zero 14 Period G — Balance at Minimum 15 Period H — Balance Increasing 16 Period I — Balance Increasing 17 Period J — Balance at Maximum 18 T2 Standard Instance Launch Credits 19 Launch Credit Allocati on Limits 20 The Effects of Launch Credits on the CPU Credit Balance 21 Example: Tracking CPU Credit Accrual and Usage with Launch Credits 22 Period A — Launch Credits + 24 Hours of Earned C redits 23 Period B — Maximum Earned and Launch Credits 24 Period C — Spending Earned Credits 25 Period D — Balance Stable 24 Hours of Earned Credits 26 Period E — Spending Earned Credits 27 Period F — Accruing Earned Credits 28 Period G — Balance Stable 24 Hours of Earned Credits 30 Comparing T2 Instance Sizes With Identical Workloads 31 Scenario 1: Consuming CPU Credits at Different Rates 32 Scenario 2: Consuming 72 Credits Every 24 Hours 33 Scenario 3: Consuming 76 Credits Every 24 Hours 34 Scenario 4: Steady and Gradual Depletion of Credit Balance 35 Scenario 5: Variable CPU Utilization Rate 36 Scenario 6: Variable CPU Utilization Duration 36 Scenario 7: Consuming CPU Credits Immediately After Launch 37 Instances with Multiple vCPUs 38 Conclusion 39 Contributors 39 Further Reading 39 Document Revisions 39 Abstract Choosing the best Amazon EC2 instance type for your workload can be a challenge especially if you are considering using a burstable instance type such as a T2 Standard instance This document describes how a T2 Standard instance earns CPU credits how launch credits are allocated and how those launch and earned CPU credits are spent Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 1 Introduction Most Amazon Elastic Compute Cloud (Amazon EC2) instance types provide a fixed level of CPU performance However the burstable performance instance types T2 and T3 provide a baseline level of CPU performance with the ability to burst to a higher level (a bove that baseline) as required The ability to use vCPUs at a rate higher than the baseline CPU utilization rate is governed by a CPU credit system Unlike the T2 Unlimited and T3 instance types in addition to earned credits T2 Standard instances can a lso be 
allocated launch credits. These two types of credits are treated differently, and because the credit balance is presented as a single numeric value, it can be difficult to understand how the credits work.

Earned CPU Credits

As a burstable instance type is running, it earns CPU credits. The rate at which an instance earns credits is based on the instance size; larger instance sizes earn CPU credits at a faster rate. CPU credits are earned in fractions of credits and are allocated at 5-minute intervals. Up to 24 hours of earned credits can be accrued in the credit balance to be used later to burst above the baseline CPU utilization rate.

Launch CPU Credits

A T2 Standard instance is allocated launch CPU credits during the instance launch, provided that the AWS account has not exceeded its launch credit limit. (See the Launch Credit Allocation Limits section for details.) These launch credits enable the instance to burst above the baseline CPU utilization rate immediately after launch, before any earned CPU credits have been accrued by the instance. Launch credits are spent before earned CPU credits. Any unspent launch credits in the balance do not affect the accumulation of earned CPU credits.

Note: When a T2 instance is stopped (shut down), all CPU credits remaining within the CPU credit balance are forfeited.

CPU Utilization and CPU Credits

During periods of CPU utilization (above 0%), CPU credits are redeemed for CPU time used. The utilization and corresponding CPU credit costs are calculated at millisecond granularity.

The following three vCPU utilization scenarios all result in the usage of 1 CPU credit:
• 1 vCPU @ 100% utilization for 60 seconds
• 1 vCPU @ 50% utilization for 120 seconds
• 2 vCPUs @ 25% utilization for 120 seconds

The following three vCPU utilization scenarios all result in the usage of 0.5 CPU credits:
• 1 vCPU @ 100% utilization for 30 seconds
• 1 vCPU @ 50% utilization for 60 seconds
• 2 vCPUs @ 25% utilization for 60 seconds

Table 1: CPU Utilization Percentage vs. Credit Utilization Rate
100% vCPU utilization: 1 credit per minute (60 credits per hour)
75% vCPU utilization: 0.75 credits per minute (45 credits per hour)
50% vCPU utilization: 0.5 credits per minute (30 credits per hour)
30% vCPU utilization: 0.3 credits per minute (18 credits per hour)
25% vCPU utilization: 0.25 credits per minute (15 credits per hour)
20% vCPU utilization: 0.2 credits per minute (12 credits per hour)
15% vCPU utilization: 0.15 credits per minute (9 credits per hour)
10% vCPU utilization: 0.1 credits per minute (6 credits per hour)
5% vCPU utilization: 0.05 credits per minute (3 credits per hour)
0% vCPU utilization: 0 credits per minute (0 credits per hour)

CPU Credit Earn Rates and CPU Utilization Rates

The CPU credit earn rate for an instance depends on the instance size and is directly related to the CPU utilization baseline. For example, a t2.small instance has a baseline CPU utilization rate of 20% and earns 12 CPU credits per hour. The next three examples show the effect of three different CPU utilization rates for a t2.small instance: below the baseline (10%), at the baseline (20%), and above the baseline (30%):

10% CPU utilization (6 credits spent per hour): CPU credits are being spent at a slower rate than they are being earned.
20% CPU utilization (12 credits spent per hour): CPU credits are being spent at the same rate as they are being earned.
30% CPU utilization (18 credits spent per hour): CPU credits are being spent at a faster rate than they are being earned.

CPU Credit Earn Rates and Instance Sizes

T2 Standard instances are available in multiple sizes to match different workloads. The number of vCPUs, the CPU credit earn rate, and the amount of memory vary by instance size, as shown in the following table and graph. Instance Size CPU Credits Earned per 24 Hours CPU Credits Earned per Hour Maximum CPU Credit
Balance *1 Baseline CPU Utilization *2 Launch Credits Granted Number of vCPUs Amount of Memory (GiB) t2nano 72 3 102 5% 30 1 05 t2micro 144 6 174 10% 30 1 1 t2small 288 12 318 20% 30 1 2 t2medium 576 24 636 40% 60 2 4 t2large 864 36 924 60% 60 2 8 t2xlarge 1296 54 1416 90% 120 4 16 t22xlarge 1944 81 2184 135% 240 8 32 *1 – The maximum CPU credit balance includes launch credits Launch credits are allocated at launch and are not replenished after they are spent *2 – Baseline CPU utilization is based on the equivalent utilization rate for a single vCPU See the multiple vC PU section for details Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 5 Base line Rates and Instance Sizes The baseline CPU utilization rate for an instance is determined by the instance size — larger instance sizes have a higher baseline rate The per vCPU utilization rate and the associated CPU credits spent do not vary with T2 Standard instance sizes One minute of 100% vCPU utilization on a t2nano t2micro or t2small equates to 1 CPU credit (See “Instances with Multiple vCPUs” for inform ation on CPU credit usage for t2medium and larger instances) In the following example the CPU utilization rate for both instances is 15% (9 CPU credits per hour) This utilization rate is above the baseline rate for a t2micro instance but below the ba seline rate for a t2small instance: Instance Details CPU base rate = 10% Credit earn rate = 6 / hour Instance Details CPU base rate = 20% Credit earn rate = 12 / hour Current utilization details CPU utilization rate – 15% Credit utilization rate – 9 / hour Current utilization details CPU utilization rate – 15% Credit utilization rate – 9 / hour Result Credits are being spent at a faster rate than they are being earned Result Credits are being spent at a slower rate than they are being earned Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 6 CPU Credit Accrual Limits and the Discarding of Credits The maximum number of earned CPU credits that can be accrued by a T2 Standard instance varies by instance size As the following diagram shows a larger instance size has a larger bucket for accruing CPU credits During time periods where the CPU credit spend rate is lower than the CPU credit earn rate after the maximum number of earned CPU credits have been acc rued any additional earned credits are discarded Note: To avoid the complexity associated with lau nch credits the next examples describing CPU credits exclude launch credits See “T2 Standard Launch Credits” for a complete discussion t2micro t2small Amazon Web Services Underst anding T2 Standard Instance CPU Credits Page 7 The Five Phases in the CPU Credit System Phase Details Credit Balance Chart State Before State After Balance Increasing During periods where the CPU credit spend rate is less than the earn rate you accumulate credits Balance Decreasing During periods where the CPU credit spend rate is greater than the earn rate your credit balance declines Balance Stable During periods where the CPU credit spend rate is the same as the earn rate the number of accumulated credits remains unchanged Balance at Maximum During periods where the CPU spend rate is less than the earn rate and you have the maximum number of CPU credits accrued additional earned credits are discarded Balance at Minimum During periods where the credit balance is nearly depleted the maximum utilization rate is restricted to the base rate (The credit balance does not reach zero) Amazon Web Services Understanding T2 Standard 
Instance CPU Credits Page 8 Example: Tracking CPU Credit Usage In this section we illustrate CPU credit usage over time and its effect on the CPU credit balance for a t2small instance over 3 days The 3 days are divided into 10 separate periods identified by the letters A through J and each period is described ind ividually in the following sections At the start of this example assume the following: • The credit balance contains the maximum number of earned CPU credits (288) that can be accrued by a t2small instance • The credit balance consists only of earned CPU credits There are no launch credits in the balance (A later example includes launch credits) Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 9 Period A — Balance at Maximum During this first period the credit utilization rate is zero and the number of earned credits is at the maximum limit of 24 hours of earned credits (288) Any newly earned credits are discarded Period A Credit Spend Rate 0 credits per hour (0% of credit earn rate) 0% CPU utilization Credit Earn Rate 12 credits per hour Credit Discard Rate 12 credits per hour (100% of credit earn rate) Credit Balance Balance is stable at 288 credits (0 launch credits and 288 earned CPU credits) Start of Period A End of Period A Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 10 Period B — Balance Stable During this period the credit utilization rate is equal to the credit earn rate therefore credits are being replaced as they are spent This results in the balance remaining unchanged at 288 credits Period B Credit Spend Rate 12 credits per hour (100% of credit earn rate) 20% CPU utilization Credit Earn Rate 12 credits per hour Credit Discard Rate 0 credits per hour (0% of credit earn rate) Credit Balance Balance is stable at 288 credits (0 launch credits and 288 earned CPU credits) Start of Period B End of Period B Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 11 Period C — Balance Decreasing During this period the credit utilization rate is two times the credit earn rate therefore credits are being consumed from the credit balance faster than they can be replenished by earned credits Period C Credit Spend Rate 24 credits per hour (200% of credit earn rate) 40% CPU utilization Credit Earn Rate 12 credits per hour Credit Discard Rate 0 credits per hour (0% of credit earn rate) Credit Balance Balance decreases at a rate of 12 credits per hour Change rate = earn rate (12 ) spend rate (24) Start of Period C End of Period C Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 12 Period D — Balance Decreasing During this period the credit utilization rate is three times higher than the credit earn rate therefore credits are being consumed from the credit balance at a faster rate than during period C Period D Credit Spend Rate 36 credits per hour (300% of credit earn rate) 60% CPU utilization Credit Earn Rate 12 credits per hour Credit Discard Rate 0 credits per hour (0% of credit earn rate) Credit Balance Balance decreases at a rate of 24 credits per hour Change rate = earn rate (12 ) spend rate (36) Start of Period D End of Period D Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 13 Period E — Balance Stable During this period as in period B the credit utilization rate is equal to the credit earn rate Therefore credits are being replaced as they are spent resulting in the balance remaining stable Period E Credit Spend Rate 12 credits per hour (100% of credit earn rate) 
20% CPU utilization Credit Earn Rate 12 credits per hour Credit Discard Rate 0 credits per hour (0% of credit earn rate) Credit Balance Balance is stable at 72 credits (0 launch credits and 72 earned CPU credits) Start of Period E End of Period E Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 14 Period F — Balance Decreases to Almost Zero During this period the instance was consuming CPU credits two times faster than they are being earned Because there were enough CPU credits in the credit balance the workload was able to run unrestricted most of this period However near the end of the period when the credit balance was nearly depleted the CPU credit system restricted the maximum attainable CPU utilization to the base rate for a t2small instance 20% Period F Credit Spend Rate 24 credits per hour (200% of credit earn rate) 40% CPU utilization Credit Earn Rate 12 credits per hour Credit Discard Rate 0 credits per hour (0% of credit earn rate) Credit Balance Balance decreases at a rate of 12 credits per hour Change rate = earn rate (12) spend rate (24) At the end of period F the credit balance is nearly depleted and the CPU utilization is limited to the base rate Start of Period F End of Period F Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 15 Period G — Balance at Minimum During this period the credit balance remains stable near zero as the number of CPU credits are being spent as fast as they are earned When the credit balance is near zero the maximum attainable CPU utilization is restricted to the baseline for the instance size which is 20% in the case of a t2small Even if the workload required a similar vCPU utilization rate to what it had in periods C D and F the T2 Standard CPU credit system limits it to the base rate Period G Credit Spend Rate 12 credits per hour (100% of credit earn rate) 20% CPU utilization Credit Earn Rate 12 credits per hour Credit Discard Rate 0 credits per hour (0% of credit earn rate) Credit Balance Balance is stable at almost zero credits (0 launch credits and almost zero earned CPU credits) Start of Period G End of Period G Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 16 Period H — Balance Increas ing During this period the credit utilization rate is half of the credit earn rate and CPU credits are being added to the credit balance at a rate if 6 per hour Period H Credit Spend Rate 6 credits per hour (50% of credit earn rate) 10% CPU utilization Credit Earn Rate 12 credits per hour Credit Discard Rate 0 credits per hour (0% of credit earn rate) Credit Balance Balance increases at a rate of 6 credits per hour Change rate = earn rate (12) spend ra te (6) End of Period H Start of Period H Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 17 Period I — Balance Increas ing During this period the credit utilization rate is zero and all earned CPU credits are being added to the credit balance at a rate of 12 per hour which is double that of period H By the end of the period the credit balance contains the maximum number of earned credits allowed Period I Credit Spend Rate 0 credits per hour (0% of credit earn rate) 0% CPU utilization Credit Earn Rate 12 credits per hour Credit Discard Rate 0 credits per hour (0% of credit earn rate) Credit Balance Balance increases at a rate of 12 credits per hour Change rate = earn rate (12) spend rate (0) End of Period I Start of Period I Amazon Web Services Understanding T2 St andard Instance CPU Credits Page 18 Period J — Balance 
at Maximum During this period as in period A the credit utilization rate is zero and the credit balance contains the maximum number of earned credits allowed (288) Any newly earned and unspent credits are discarded Period J Credit Spend Rate 0 credits per hour (0% of credit earn rate) 0% CPU utilization Credit Earn Rate 12 credits per hour Credit Discard Rate 12 credits per hour (100% of credit earn rate) Credit Balance Balance is stable at 288 credits Change rate = earn rate (12) – spend rate (0) discard rate (12) End of Period J Start of Period J Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 19 T2 Standard Instance Launch Credits Launch credits enable a T2 Standard instance to burst above the baseline level of CPU utilization immediately after launch —prior to it having earned CPU credits and accruing them in the credit balance Launch credits only apply to T2 Standard instances Launch credit features: • Launch credits are added to the overall CPU credit balance • Launch credits are spent before earned CPU credits • Launch credits do not affect t he accumulation of earned CPU credits • Launch credits do not get replenished while the instance is running • Launch credits are not allocated when the allocation limit is exceeded If you don’t take these features into account under certain circumstances the CPU credit balance can seem to behave in ways that you might not expect For example: • The CPU credit balance can plateau at different values • The CPU credit balance can experience different behavior over time even if the workload CPU utilization rate is unchanged To better understand the effect of launch credits on the overall CPU credit balance picture the credit balance as being comprised of two buckets of credits instead of one: • A bucket for the accrued earned CPU credits which is filled during times when the spend rate is lower than the earn rate • A second bucket for the launch credits that is filled at launch time but does not get replenished while the instance is running Overall credit balance = 174 ( 144 earned ) + ( 30 launch ) Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 20 Launch Credit Allocation Limits Launch credits are only allocated to T2 Standard instances during their launch if the particular instance launch is within the account’s Launch Credit Allocation Limit The default limit is 100 la unches or starts per account per region per rolling 24 hour period The limit can be reached through any combination of launches (or stops and starts) within the same account and same region during the same rolling 24 hour period For example: • 100 new T2 Standard instance launches or • 100 existing T2 Standard instance stops and starts or • 50 existing T2 Standard instance stops and starts and 50 new T2 Standard instance launches Note : If you are regularly exceeding the launch credit allocation limit you might want to switch to a T2 Unlimited or T3 instance instead Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 21 The Effects of Launch Credits on the CPU Credit Balance If a T2 Standard instance is launched but does not consume all of the launch credits within the first 24 hours then the credi t balance will consist of the remaining launch credits plus 24 hours of earned credits For example a t2nano instance could potentially accrue a total of 102 credits (72 earned credits plus 30 launch credits) The instance could then spend all 102 credit s in a single continuous burst as illustrated in period 1B in the 
following graph Note: Launch credits in the credit balance are illustrated by the blue line in the graph Remember that Amazon CloudWatch only reports total credits —you cannot see the bre akdown of launch credits and earned CPU credits Attaining a credit balance that is higher than the 24 hour earned CPU credit value can only be achieved one time per instance launch because after the launch credits are spent they are not replenished Any subsequent CPU credit accruals are limited to the value of 24 hours of earned credits as illustrated at the start of period 2B in the graph Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 22 Example: Tracking CPU Credit Accrual and Usage with Launch Credits In this section we illustra te CPU credit accrual and usage for a t2micro instance over a 4 day period considering the effect that launch credits have on the credit balance This example is specifically tailored to highlight some of the complexity that can be associated with launch credits Because of launch credits the credit balance for periods A B and C in this example is above the t2micro 24hour earned credit value of 144 Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 23 Period A — Launch Credits + 24 Hours of Earned Credits Immediately upon the launch of the t2micro instance 30 launch credits are added to the overall credit balance and the instance starts to earn credits Because no CPU credits are spent or discarded during this period the credit balance increases at a rate of 6 credits per hour In addition to the 30 launch credits after 24 hours the instance has accrued 144 earned CPU credits The credit balance is able to increase above 144 credits because the unspent launch credits do not affect the accumulation of earned CPU credits Period A (duration 24 hours) 12 AM Monday – 12 AM Tuesday Credit Spend Rate 0 credits per hour (0% CPU utilization) Credit Earn Rate 6 credits per hour Credit Discard Rate 0 credits per hour Credit Balance Balance increases from 30 at launch to 174 credits (30 launch credits and 144 earned CPU credits) Start of Period A End of Period A Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 24 Period B — Maximum Earned and Launch Credits At the start of period B the credit balance is 174 credits The overall balance consists of 30 launch credits and 144 earned credits Because the credit balance contains the maximum number of earned CPU credits for a t2micro instance (144 credits) any newly earned credits above this limit are discarded This results in the credit balance plateauing at a value equal to 24 hours of earned credits (144) plus the unspent launch credits (30) Period B (duration 6 hours) 12 AM T uesday – 6 AM Tuesday Credit Spend Rate 0 credits per hour (0% CPU utilization) Credit Earn Rate 6 credits per hour Credit Discard Rate 6 credits per hour Credit Balance Balance remains stable at 174 credits (30 launch credits and 144 earned CPU credits) Start of Period B End of Period B Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 25 Period C — Spending Earned Credits In period C the instance consumes CPU credits at a rate of 3 credits per hour (50% of the credit earn rate) Despite the spend rate being less than the earn rate the overall credit balance is decreasing at a rate equal to the credit spend rate (3 credits per hour) This occurs because the non replenishable launch credits are being spent first and all freshly earned CPU credits are being discarded because the credit 
balance already has the maximum number of earned CPU credits (144) Period C (duration 10 hours) 6 AM Tuesday – 4 PM Tuesday Credit Spend Rate 3 credits per hour (5% CPU utilization) Credit Earn Rate 6 credits per hour Credit Discard Rate 6 credits per hour Credit Balance Balance decreases from 174 to 144 credits (0 launch credits and 144 earned CPU credits) Start of Period C End of Period C Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 26 Period D — Balance Stable 24 Hours of Earned Credits In period D the instance continues to consume CPU credits at a rate of 3 credits per hour (50% of the credit earn rate) as it did in period C The credit balance contains the maximum number of earned credits (144) Half of the newly earned CPU credits ar e being spent while the other half are being discarded Therefore the balance now plateaus at 144 credits instead of at the 174 credit level seen in period B because there are no longer any launch credits in the credit balance Period D (duration 8 hours) 4 PM Tuesday – 12 AM Wednesday Credit Spend Rate 3 credits per hour (5% CPU utilization) Credit Earn Rate 6 credits per hour Credit Discard Rate 3 credits per hour Credit Balance Balance is stable at 144 credits (0 launch credits and 144 earned CPU credits) Start of Period D End of Period D Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 27 Period E — Spending Earned Credits In period E the instance is consuming CPU credits at a rate of 12 credits per hour (200% of the credit earn rate) The credit balance decreases at a rate of 6 credits per hour from 144 to 72 credits Period E (duration 12 hours) 12 AM Wednesday – 12 PM Wednesday Credit Spend Rate 12 credits per hour (20% CPU utilization) Credit Earn Rate 6 credits per hour Credit Discard Rate 0 credits per hour Credit Balance Balance decreases from 144 to 72 credits (0 launch credits and 72 earned CPU credits) Start of Period E End of Period E Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 28 Period F — Accruing Earned Credits In period F as in periods C and D the instance is consuming CPU credits at a rate of 3 per hour (50% of the credit earn rate) The credit balance decreased during period C was stable during period D however it increases in period F Why is that? 
• In per iod C the credit balance contained launch credits in addition to the maximum number of earned credits The launch credits were being spent and all of the newly earned and unspent earned CPU credits were being discarded • In period D the credit balance co ntained the maximum number of earned credits H alf of the newly earned CPU credits were being spent while the other half were being discarded • In period F the number of earned CPU credits is under the 24 hour maximum (144) No credits are being discarded half of the newly earned CPU credits are being spent while the other half are being accrued in the credit balance This results in the overall credit balance increasing at half of the earn rate Period F (duration 24 hours) 12 PM Wednesday – 12 PM Thurs day Credit Spend Rate 3 credits per hour (5% CPU utilization) Credit Earn Rate 6 credits per hour Credit Discard Rate 0 credits per hour Credit Balance Balance increases from 72 to 144 credits (0 launch credits and 144 earned CPU credits) Amazon Web Services Understanding T2 Standard Insta nce CPU Credits Page 29 Period F (duration 24 hours) 12 PM Wednesday – 12 PM Thurs day End of Period F End of Period F Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 30 Period G — Balance Stable 24 Hours of Earned Credits In period G the instance continues to consume CPU credits at a rate of 3 per hour (50% of the credit earn rate) which is the same as periods C D and F However because the credit balance contains the maximum number of earned credits any freshly earned but unspent CPU credits are discarded Period G (duration 12 hours) 12 PM Thursday – 12 AM Friday Credit Spend Rate 3 credits p er hour (5% CPU utilization) Credit Earn Rate 6 credits per hour Credit Discard Rate 3 credits per hour Credit Balance Balance is stable at 144 credits (0 launch credits and 144 earned CPU credits) Start of Period G End of Period G Amazon Web Services Understanding T2 Standard Instance CPU Credits Page 31 Comparing T2 Instance Sizes with Identical Workloads In this section we will be repeating the same workload (green line) on different sizes of T2 Standard instances to illustrate the effect that different CPU credit earn rates have on the CPU credit balance All three of the instances t2nano t2micro and t2small have a single vCPU and are allocated 30 launch credits The instances have different CPU credit earn rates with the maximum earned CPU credit accrual limits being 72 144 and 288 credits respectively Larger instance sizes have larger maximum credit balances If the same workload is repeated across the three instances the credit balance changes will differ due to the different earn rates that are offsetting the same spen ding rate Amazon Web Services Understandi ng T2 Standard Instance CPU Credits Page 32 Scenario 1: Consuming CPU Credits at Different Rates In this scenario the first utilization period had a vCPU utilization rate of 40% (24 credits per hour) which consumed a total of 100 CPU credits over the 250 minute duration of the period The change in credit balance depends on the instance size: t2nano — credit balance decreased by approximately 91 credits t2micro — credit balance decreased by approximately 82 credits t2small — credit balance decreased by approximately 65 credits In the second utilization period the difference in the credit balance depletion rate is more apparent The vCPU utilization rate of 20% (12 credits per hour) is equal to the CPU credit earn rate of a t2small instance so its credit balance does not decrea 
However, the credit balances for the smaller instances do decrease.

Scenario 2: Consuming 72 Credits Every 24 Hours
t2.nano — The daily credit utilization of 72 credits drains the entire credit balance of the instance during the 24-hour period.
t2.micro — The daily credit utilization of 72 credits partially depletes the credit balance during the 24-hour period.
t2.small — The peak credit usage rate is lower than the credit earn rate of a t2.small instance; therefore, the credit balance (red line) remains stable.
This graph shows all three instance sizes for comparison.

Scenario 3: Consuming 76 Credits Every 24 Hours
t2.nano — The daily credit usage rate exceeds the daily CPU credit earn rate. During periods of low CPU credit utilization the balance is partially replenished, but the credit balance will eventually be depleted over time.
t2.micro — The daily credit usage rate is lower than the daily CPU credit earn rate. During periods of low credit utilization the credit balance is fully replenished.
t2.small — The peak credit usage rate is lower than the CPU credit earn rate for a t2.small instance; therefore, the credit balance (red line) remains stable.
This graph shows all three instance sizes for comparison.

Scenario 4: Steady and Gradual Depletion of Credit Balance
t2.nano — The 7% CPU utilization workload starts 14 hours after the instance is launched and consumes CPU credits at a rate of approximately 4 per hour. The spend rate is higher than the earn rate of 3 credits per hour for a t2.nano; therefore, the credit balance gradually decreases. The credit balance is depleted approximately 72 hours after launch, at which point the maximum attainable CPU utilization is restricted to the base rate of 5% (3 credits per hour).
t2.micro — The 7% CPU utilization workload (approximately 4 credits per hour) is lower than the base earn rate of a t2.micro instance (10%, or 6 credits per hour). Therefore, the credit balance does not decrease and the workload can continue at this utilization rate.

Scenario 5: Variable CPU Utilization Rate
In this scenario, the duration of the daily workload varies. On Thursday the workload increased to the point where it almost depleted the credit balance.

Scenario 6: Variable CPU Utilization Duration
In this scenario, the duration of the daily workload is slowly and gradually increasing. If the workload continues to increase in this manner, it might eventually deplete the credit balance.

Scenario 7: Consuming CPU Credits Immediately After Launch
A total of 99 CPU credits are required to complete this workload, ideally at a rate of 9 credits per hour.
t2.nano — The workload, running at a rate of 9 CPU credits per hour, consumes the 30 launch credits in approximately 3.3 hours and then begins to consume accrued earned CPU credits. Approximately 5 hours after launch the credit balance is depleted, and the maximum attainable CPU utilization is restricted to the base rate (5%). At this reduced utilization rate the workload is restricted and requires approximately 23 hours to complete.
t2.small — The workload, running at a rate of 9 CPU credits per hour, has a lower spend rate than the base rate of 12 credits per hour for a t2.small instance. The workload can run unrestrained and requires approximately 11 hours to complete.
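As a rough check of the Scenario 7 figures, the following PowerShell snippet reproduces the t2.nano timeline from the rates given above (30 launch credits, a 3-credit-per-hour earn rate, and a workload that wants 9 credits per hour). The variable names are ours, and the arithmetic assumes the simplified model used throughout this section.

# Back-of-the-envelope check of the Scenario 7 numbers for a t2.nano.
$workCredits   = 99     # total CPU credits the workload needs
$spendPerHour  = 9      # desired workload rate
$earnPerHour   = 3      # t2.nano baseline (5% of one vCPU)
$launchCredits = 30

# Launch credits are spent first, at the full workload rate.
$launchHours = $launchCredits / $spendPerHour                     # ~3.3 hours

# Earned credits accrued during that time are then drained at the net rate.
$earnedSoFar = $earnPerHour * $launchHours                        # ~10 credits
$drainHours  = $earnedSoFar / ($spendPerHour - $earnPerHour)      # ~1.7 hours
$depletedAt  = $launchHours + $drainHours                         # ~5 hours after launch

# After depletion the workload is throttled to the earn rate (3 credits/hour).
$remaining   = $workCredits - ($spendPerHour * $depletedAt)       # ~54 credits
$totalHours  = $depletedAt + ($remaining / $earnPerHour)          # ~23 hours

"Balance depleted after {0:N1} h; workload completes after {1:N1} h" -f $depletedAt, $totalHours

The same arithmetic with a 12-credit-per-hour baseline shows why the t2.small never throttles: its earn rate exceeds the 9-credit-per-hour spend rate, so the 99 credits of work finish in roughly 11 hours.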
Instances with Multiple vCPUs
T2 instance sizes larger than t2.small have more than one vCPU. The individual vCPUs consume credits from the single credit balance based on their individual CPU utilization rates; the CPU credit utilization rate for an instance is the aggregate of the credit utilization rates across all of its vCPUs. In the following example, one vCPU is consuming 45 credits per hour while the other vCPU is consuming 15 credits per hour. Therefore, the total credit utilization for this instance is 60 credits per hour.
Note: A t2.medium or t2.large instance with 2 vCPUs can consume up to 2 CPU credits in 1 minute. A t2.xlarge instance with 4 vCPUs can consume up to 4 CPU credits in 1 minute. A t2.2xlarge instance with 8 vCPUs can consume up to 8 CPU credits in 1 minute.
An instance's specified baseline percentage is based on a single vCPU. For example, a t2.medium baseline rate is specified as 40%, which can equate to 1 vCPU at 40% utilization or 2 vCPUs at 20% utilization each.

Conclusion
Having an in-depth understanding of how the T2 Standard instance CPU credit system works will help you decide whether this particular Amazon EC2 instance type is the best match for your workload. If it is, this knowledge will help you optimize your workload and obtain the best cost and performance.

Contributors
Contributors to this document include:
• Seamus Murray, Amazon Web Services

Further Reading
For additional information, see:
• AWS Documentation: Burstable Performance Instances1

Document Revisions
Date — Description
March 8, 2021 — Reviewed for technical accuracy
March 1, 2019 — Second publication
February 4, 2019 — First publication

Notes
1 https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html
|
General
|
consultant
|
Best Practices
|
Understanding_the_ASDs_Cloud_Computing_Security_for_Tenants_in_the_Context_of_AWS
|
Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS June 2017 © 201 7 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own ind ependent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations co ntractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agree ment between AWS and its customers Contents Introduction 1 AWS Shared Responsibility approach to Managing Cloud Security 2 What does the shared responsibility model mean for the security of customer content? 3 Understanding ASD Cloud Computing Security for Tenants in the Context of AWS 4 General Risk Mitigations 4 IaaS Risk Mitigations 27 PaaS Risk Mitigations 38 SaaS Risk Mitigations 40 Further Reading 41 Document Revisions 42 Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 1 Introduction The Australian Signals Directorate (ASD) publishes the Cloud Computing Security for Tenants paper to provide guidance on how an organisations’ cyber security team cloud architects and business representatives can work together to perform a risk assessment and use cloud services securely The paper highlights the shared responsibility that organisation s (referred to as Tenants) share with the cloud service providers (CSP) to design a solution that uses security best practices This document addresses each risk identified in the Cloud Computing Security for Tenants paper and describes the AWS services and features that you can use to mitigate those risks Important: You should understand and acknowledge that that the risks discu ssed in this document cover only part of your responsibilities for securing your cloud solution For more information about the AWS Shared Responsibility Model see AWS Shared Responsibility Approach to Managing Cloud Security below AWS provides you with a wide range of security functionality to protect your data in accordance with ASD’s Information Security Manual ( ISM ) controls agency guidelines and policies We are continually iterating on the security tools we provide our customers and regularly release enhancements to existing security functionality AWS has assessed ASD’s ISM controls against the following services: • Amazon Elastic Compute Cloud (Amazon EC2) – Amazon EC2 provides resizable compute capacity in the cloud It is designed to make webscale computing easier for developers For more information go here • Amazon Simple Storage Service (S3) – Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data at any time from anywhere on the web For more information go here Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 2 • Amazon Virtual Private Cloud (VPC) – Amazon VPC provides the ability for you to provision a logically isolated section of AWS where you can launch AWS resources in a virtual network 
that you define For more information go here • Amazon Elastic Block Store (EBS) – Amazon EBS provides highly available highly reliable predictable storage volumes that can be attached to a running Amazon EC2 instance and exposed as a device within the instance For more information g o here Important: AWS provides many services in addition to those listed above If you would like to use a service not listed above you should evaluate your workloads for suitability Contact AWS Sales and Business Development for a detailed discussion of security controls and risk acceptance considerations Our global whitepapers have recommendations for securing your data that are just as applicable to Australian government workloads on AWS For a complete list of our security and compliance whitepapers see the AWS Whitepapers website Our AWS Compliance website contains more specific discussions of security AWS Risk and Compliance practices certifications and reports If you need answers to questions that are not covered in the above resou rces you can contact your account manager directly AWS Shared Responsibility approach to Managing Cloud Security When you move your IT infrastructure to AWS you will adopt a model of shared responsibility between you and AWS (as shown in Figure 1) This shared model helps relieve your operational burden because AWS operates manages and controls the IT components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 3 As par t of the shared model you are responsible for managing the guest operating system (including updates and security patches to the guest operating system) and associated application software as well as the configuration of the AWS provided security group f irewall and other security related features You will also generally connect to the AWS environment through services that you acquire from third parties (for example internet service providers) As AWS does not provide these connections they are part of your area of responsibility You should consider the security of these connections and the security responsibilities of such third parties in relation to your systems Figure 1: The AWS Shared Responsibility Model What does the shared responsibility model mean for the security of customer content? 
When evaluating the security of a cloud solution it is important for you to understand and distinguish between: • Security measures that AWS implements and operates – “security of the cloud” • Security measures that you implement and operate related to the security of your content and applications that make use of AWS services – “security in the cloud” Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 4 While AWS manages the security of the cloud security in the cloud is your customer responsibility as you retain control of what security you choose to implement to protect your own content platform applications systems and networks – no differently than you would for applications in an on site data centre Understanding ASD Cloud Computi ng Security for Tenants in the C ontext of AWS The following sections describe the AWS compliance and AWS offerings that can help you as the Tenant mitigate the risks identified in the Cloud Computing Security for Tenants paper General Risk Mitigations 1 – General Requirement Use a cloud service that has been assessed certified and accredited against the ISM at the appropriate classification level addressing mitigations in the document Cloud Computing Security for Cloud Service Providers AWS Response An independent IRAP assessor examined the controls of in scope AWS services’ people process and technology to ensure they address the needs of the ISM AWS has been certified for Unclassified DLM (UD) workloads by the Australian Signals Directo rate (ASD) as the Certification authority and is an inaugural member of the ASD Certified Cloud Services List (CCSL) 2 – General Requirement Implement security governance involving senior management directing and coordinating security related activities including robust change management as well as having technically skilled staff in defined security roles Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 5 AWS Response AWS customers are required to maintain adequate governance over t he entire IT control environment regardless of how IT is deployed This is true for both on premise and cloud deployments Leading practices include : • Develop an understanding of required compliance objectives and requirements (from relevant sources) • Establ ish a control environment that meets those objectives and requirements • Understand the validation required based on the organization’s risk tolerance • Verify the operating effectiveness of their control environment AWS provides options to apply various ty pes of controls and verification methods Strong customer compliance and governance might include the following basic approach: 1 Review information available from AWS together with other information to understand as much of the entire IT environment as po ssible and then document all compliance requirements 2 Design and implement control objectives to meet the enterprise compliance requirements 3 Identify and document controls owned by outside parties 4 Verify that all control objectives are met and all key controls are designed and operating effectively Approaching compliance governance in this manner will help you gain a better understanding of your control environment and will help you clearly delineate the verification activities that you need to per form You can run nearly anything on AWS that you would run on premise including websites applications databases mobile apps email campaigns distributed data analysis 
media storage and private networks AWS provides services that are designed to w ork together so that you can build complete solutions An often overlooked benefit of migrating workloads to AWS is the ability to achieve a Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 6 higher level of security at scale by utilizing the many governance enabling features offered For the same reason s that delivering infrastructure in the cloud has benefits over on premise delivery cloud based governance offers a lower cost of entry easier operations and improved agility by providing more oversight security control and central automation The Governance at Scale whitepaper describes how you can achieve a high level of governance of your IT resources using AWS 3 – General Requirement Implement and annually test an incident response plan covering data spills electronic discovery and how to obtain and analyse evidence eg time synchronised logs hard disk images memory snapshots and metadata AWS Response AWS recognizes the importance of customers implementing and testing an incident response plan Using AWS you can requisition compute power storage and other services in minutes and have the flexibility to choose the development plan or programming model that makes the most sense for the problems you’re trying to solve You pay only for what you use with no up front expenses or long term commitments making AWS a cost effective way to deliver applications plus conduct incident response tests and simulations in realistic environments This presentation from A WS re:Invent 2015 conference provides further details on incident response simulation on AWS The AWS platform includes a range of monitoring services that can be leveraged as part of your i ncident detection and response capability some Inscope services include the following: • CloudWatch • CloudWatch Logs • CloudWatch Events • Cloudtrail • Trusted Advisor • Elastic Load Balancer Logs • S3 logs • Cloudfront logs • VPC Flow Logs Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 7 • Simple Notification Service • Lambd a 4 – General Requirement Use ASD approved cryptographic controls to protect data in transit between the Tenant and the CSP eg application layer TLS or IPsec VPN with approved algorithms key length and key management AWS Response AWS allows customers to use their own encryption mechanisms for nearly all the services including S3 EBS and EC2 IPSec tunnels to VPC are also encrypted Customers may also use third party encryption technologies In addition customers can leverage AWS Key Management Sys tems (KMS) to create and control encryption keys (refer to https://awsamazoncom/kms/) All of the AWS APIs are available via TLS protected endpoints which provide server authentication AWS cryptographic proces ses are reviewed by independent third party auditors for our continued compliance with SOC PCI DSS ISO 27001 and FedRAMP For Tenants leveraging the Amazon Elastic Load Balancer in their solutions it has s ecurity features relevant to this mitigation Elastic Load Balancing has all the advantages of an on ‐premises load balancer plus several security benefits: • Takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer • Offers clients a single point of contact and can also serve as the first line of defense against attacks on your network • When used in an Amazon VPC supports 
creation and management of security groups associated with your Elastic Load Balancing to provide additional netwo rking and security options • Supports end ‐to‐end traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connections When TLS is used the TLS server certificate used to terminate client connections can be managed centrally on the load balancer rather than on every individual instance HTTPS/TLS uses a long‐ term secret key to generate a short ‐term session key to be used between the server and the browser to create th e ciphered (encrypted) Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 8 message Amazon Elastic Load Balancing con figures your load balancer with a pre ‐defined cipher set that is used for TLS negotiation when a connection is established between a client and your load balancer The pre ‐defined cipher set provides compatibility with a broad range of client s and uses strong cryptographic algorithms However some customers may have requirements for allowing only specific ciphers and protocols (such as PCI SOX etc) from clients to ensure that standards are met In these cases Amazon Elastic Load Balancing provides options for selecting different configurations for TLS protocols and ciphers You can choose to enable or disable the ciphers depending on your specific requirements To help ensure the use of newer and stronger cipher suites when establishing a secur e connection you can configure the load balancer to have the final say in the cipher suite selection during the client ‐serv er negotiation When the Server Order Preference option is selected the load balancer will sel ect a cipher suite based on the server’s prioritization of cipher suites rather than the client’s This gives you more control over the level of security that clients use to connect to your load balancer For even greater communication privacy Amazon Elastic Load Balancer allows the use of Perfect Forward Secrecy which uses session keys that are ephemeral and not stored anywhere This prevents the decodin g of captured data even if the secret long ‐term key itself is compromised 5 – General Requirement Use ASD approved cryptographic con trols to protect data at rest on storage media in transit via post/courier between the tenant and the CSP when transferring data as part of on boarding or off boarding AWS Response Snowball is a petabyt escale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud Using Snowball addresses common challenges with large scale data transfers including high network costs long transfer times and security concerns Transferring data with Snowball is simple fast secure and can be as little as one fifth the cost of high speed Internet Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 9 Snowball encrypts all data with AES 256bit encryption You manage your encryption keys by using the AWS Key Man agement Service (AWS KMS) Your keys are never sent to or stored on the appliance Further details on the AWS KMS are available in this paper In addition to using a tamper resistant enclosure Snowball uses an industry standard Trusted Platform Module (TPM) with a dedicated processor designed to detect any unauthorized modifications to the hardware firmware or software AWS inspec ts every appliance for any signs of tampering and to verify that no changes were detected by the 
TPM When the data transfer job has been processed and verified AWS performs a software erasure of the Snowball appliance that follows the National Institute of Standards and Technology (NIST) guidelines for media sanitization Snowball uses an innovative E Ink shipping label designed to ensure the appliance is automatically sent to the correct AWS facility and which also helps in tracking When you have completed your data transfer job you can track it by using Amazon SNS text messages and the console 6 – General Requirement Use a corporately approved and secured computer multi factor authentication a strong passphrase least access privileges and encrypted network traffic to administer (and if appropriate access) the cloud service AWS Response All of the AWS APIs are available via TLS protected endpoints that provide server authentication For more information on our region end points go here AWS requires that all API requests be signed —using a cryptographic hash function If you use any of the AWS SDKs to generate requests the digital signature calculation is done for you ; otherwise you can have your application calculate it and include it in your REST or Query requests by following the directions in our documentation Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 10 Not only does the signing process help protect message integrity by preventing tampering with the request while it is in transit it also helps protect against potential replay attacks A request must reach AWS within 15 minutes of the time stamp in the request Otherwise AWS denies the request The most recent version of the digital signature calculation process is Signature Version 4 which calculates the signature using the HMAC SHA256 protocol AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users Using IAM you can create and manag e AWS users and groups and use permissions to allow and deny their access to AWS resources To get started using IAM go to the AWS Management Console and get started with these IAM Best Practices You can set a password policy on your AWS account to spec ify complexity requirements and mandatory rotation periods for your IAM users' passwords You can use a password policy to do these things: • Set a minimum password length • Require specific character types including uppercase letters lowercase letters nu mbers and non alphanumeric characters Be sure to remind your users that passwords are case sensitive • Allow all IAM users to change their own passwords Note: When you allow your IAM users to change their own passwords IAM automatically allows them to v iew the password policy IAM users need permission to view the account's password policy in order to create a password that complies with the policy • Require IAM users to change their password after a specified period of time (enable password expiration) • Prevent IAM users from reusing previous passwords • Force IAM users to contact an account administrator when the user has allowed his or her password to expire Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 11 AWS Multi Factor Authentication (MFA) is a simple best practice that adds an extra layer of prot ection on top of your user name and password With MFA enabled when a user signs in to an AWS website they will be prompted for their user name and password (the first factor —what they know) as well as for an 
authentication code from their AWS MFA device (the second factor —what they have) Taken together these multiple factors provide increased security for your AWS account settings and resources You can enable MFA for your AWS account and for individual IAM users you have created under your account M FA can be also be used to control access to AWS service APIs After you've obtained a supported hardware or virtual MFA device AWS does not charge any additional fees for using MFA If you already manage user identities and MFA outside of AWS you can use IAM identity providers instead of creating IAM users in your AWS account With an identity provider (IdP) you can manage your user identities outside of AWS and give these external user identities permissions to use AWS resources in your account This is useful if your organization already has its own identity system such as a corporate user directory It is also useful if you are creating a mobile app or web application that requires access to AWS resources To use an IdP you create an IAM identity pro vider entity to establish a trust relationship between your AWS account and the IdP IAM supports IdPs that are compatible with OpenID Connect (OIDC) or SAML 2 0 (Security Assertion Markup Language 20) The following services are relevant in the enforcing use of corporate controlled computers: • A security group acts as a virtual firewall for your instance to control inbound and outbound traffic When you launch a n instance in a VPC you can assign the instance to up to five security groups Security groups act at the instance level not the subnet level Therefore each instance in a subnet in your VPC could be assigned to a different set of security groups For e ach security group you add rules that control the inbound traffic to instances and a separate set of rules that control the outbound Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 12 traffic For example you could restrict access to SSH and RDP ports to only your approved corporate IP ranges • A network access control list (ACL) is a recommended layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets You might set up network ACLs with rules similar to your security groups in order to add an additi onal layer of security to your VPC For more information about the differences between security groups and network ACLs see Comparison of Security Groups and Network ACLs • Permissions let you specify access to AWS resources Permissions are granted to IAM entities (users groups and roles) and by default these entities start with no permissions In other words IAM entities can do nothing i n AWS until you grant them your desired permissions To give entities permissions you can attach a policy that specifies the type of access the actions that can be performed and the resources on which the actions can be performed In addition you can s pecify any conditions that must be set for access to be allowed or denied To assign permissions to a user group role or resource you create a policy that lets you specify: o Actions – Which AWS actions you allow For example you might allow a user to call the Amazon S3 ListBucket action Any actions that you don't exp ressly allow are denied o Resources – Which AWS resources you allow the action on For example what Amazon S3 buckets will you allow the user to perform the ListBucket action on? 
Users cannot access any resources that you do not explicitly grant permissions to o Effect – Whether to allow or deny access Because access is denied by default you typically write policies where the effect is to allow o Conditions – Which conditions must be present for the policy to take effect For example you might allow access only to the specific S3 buckets i f the user is connecting from a specific IP range or has used multi factor authentication at login For an example o f this policy go here Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 13 7 – General Requirement Protect authentication credentials eg avoid exposing Application Programming Interface (API) authentication keys placed on insecure computers or in the source code of software that is accessible to unauth orised third parties AWS Response When you access AWS programmatically you use an access key to verify your identity and the identity of your applications An access key consists of an access key ID (something like AKIAIOSFODNN7EXAMPLE) and a secret acce ss key (something like wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY) Anyone who has your access key has the same level of access to your AWS resources that you do Consequently AWS goes to significant lengths to protect your access keys and in keeping with our shared responsibility model you should as well The following steps can help you protect access keys For general background see AWS Security Credentials Note: Your organization may have different security requirements and policies than those described in this topic The suggestions provided here are intended to be general guidelines • Remove (or Don't Generate) a Root Account Access Key One of the best ways to protect your account is to not have an access key for your root account Unless you must have a root access key (which is very rare) it is best not to generate one Instead the recommended best practice is to create one or more AWS Identity and Access Management (IAM) users give them the necessary permiss ions and use IAM users for everyday interaction with AWS • Use Temporary Security Credentials (IAM Roles) Instead of Long Term Access Keys In ma ny scenarios you don't need a long term access key that never expires (as you have with an IAM user) Instead you can create IAM Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 14 roles and generate temporary security credentials Temporary security credentials consist of an access key ID and a secret access key but they also include a security token that indicates when the credentials expire • Manage IAM User Access Keys Properly If you do need to create access keys for programmatic access to AWS create an IAM user and grant that user only the permissions he or she needs Then generate an access key for that user For details see Managing Access Keys for IAM Users in IAM User Guide Observe these precautions when using access keys: o Don't embed access keys directly into code o Use different access keys for different applications o Rotate access keys periodically o Remove unused access keys o Configure multifactor authentication for your most sensitive operations • More Resources You can also leverage AWS Trusted Advisor checks as part of your Security monitoring AWS Trusted Advisor provides best practices in four categories: • Cost Optimization • Security • Fault Tolerance • Performance Improvement The complete list of over 50 Trusted Advisor checks 
available with business and enterprise support plans can be used to monitor and improve the deployment of Amazon EC2 Elastic Load Balancing Amazon EBS Amazon S3 Auto Scaling AWS Identity and Access Management Amazon RDS Amazon Redshift Amaz on Route 53 CloudFront and CloudTrail You can view the overall status of your AWS resources and savings estimations on the Trusted Advisor dashboard Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 15 One of the Trusted Advisor checks is for exposed Access Keys This checks popular code repositories for access keys that have been exposed to the public and for irregular Amazon Elastic Compute Cloud (Amazon EC2) usage that could be the result of a compromised access key An access key consists of an access key ID and the corresponding secret access key Ex posed access keys pose a security risk to your account and other users could lead to excessive charges from unauthorized activity or abuse and violate the AWS Customer Agreement If your access key is exposed take immediate action to secure your account To additionally protect your account from excessive charges AWS temporarily limits your ability to create some AWS resources This does not make your account secure; it only partially limits the unauthorized usage for which you could be charged Note: This check does not guarantee the identification of exposed access keys or compromised EC2 instances You are ultimately responsible for the safety and security of your access keys and AWS resources 8 – General Requirement Obtain and promptly analyse detai led time synchronised logs and real time alerts for the T enant’s cloud service accounts used to access and especially to administer the cloud service AWS Response AWS CloudTrail is a web service that rec ords AWS API calls for your account and delivers log files to you The recorded information includes the identity of the API caller the time of the API call the source IP address of the API caller the request parameters and the response elements return ed by the AWS service With CloudTrail you can get a history of AWS API calls for your account including API calls made via the AWS Management Console AWS SDKs command line tools and higher level AWS services (such as AWS CloudFormation) The AWS API call history produced by CloudTrail enables security analysis resource change tracking and compliance auditing Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 16 To maintain the integrity of your log data it is important to carefully manage access around the generation and storage of your log files The ability to view or modify your log data should be restricted to authorized users A common log related challenge for on premise environments is the ability to demonstrate to regulators that access to log data is restricted to authorized users This cont rol can be time consuming and complicated to demonstrate effectively because most on premise environments do not have a single logging solution or consistent logging se curity across all systems With AWS CloudTrail access to Amazon S3 log files is centrally controlled in AWS which allows you to easily control access to your log files and help demonstrate the integrity and confidentiality of your log data This paper provides an overview of common compliance requirements related to logging and details how AWS CloudTrail features can help satisfy these requirements 9 – General Requirement Obtain and 
promptly analyse detailed time synchronised logs and real time alerts generated by the cloud service used by the tenant eg operating system web server and application logs AWS Response You can execute Continuous Monitoring of logical controls on your own systems You assume the responsibility and management of the guest operating system (including updates and security patches) other associated application software as well as the configuration of the AWS provided security group firewall In addition to the monitoring services th at AWS provides you can leverage most OS level and application monitoring tools that you have used in traditional on premise deployments Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS You can use Amazon CloudWatch to collect and track metrics collect and monitor log files set alarms and automatically react to changes in your AWS resources Amazon CloudWatch can monitor AWS resources as well as cu stom metrics generated by your applications and services and any log files your applications generate You can use Amazon CloudWatch Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 17 to gain system wide visibility into resource utilization application performance and operational health You can use the se insights to react and keep your application running smoothly CloudWatch Logs lets you monitor and troubleshoot your systems and applications using your existing system appl ication and custom log files With CloudWatch Logs you can monitor your logs in near real time for specific phrases values or patterns (metrics) For example you could set an alarm on the number of errors that occur in your system logs or view graphs of web request latencies from your application logs You can view the original log data to see the source of the problem if needed Log data can be stored and accessed for as long as you need using highly durable low cost storage so you don’t have to wor ry about filling up hard drives You can use Amazon CloudWatch Logs to monitor store and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances AWS CloudTrail or other sources You can then retrieve the associated log data from CloudWatch Logs using the Amazon CloudWatch console the CloudWatch Logs commands in the AWS CLI the CloudWatch Logs API or the CloudWatch Logs SDK You can use CloudWatch Logs to: • Monitor Logs from Amazon EC2 Instances in Real time • Monitor AWS CloudTr ail Logged Events • Archive Log Data 10 – General Requirement Avoid providing the CSP with account credentials (or the ability to authorise access) to sensitive systems outside of the CSP’s cloud such as systems on the tenant’s corporate network AWS Response AWS does not request that you disclose your customer passwords in order to provide the services or support AWS provides infrastructure and you manage everything else including the operating system the network configuration and Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 18 the insta lled applications You control your own guest operating systems software and applications When you launch an instance you should specify the name of the key pair you plan to use to connect to the instance If you don't specify the name of an existing key pair when you launch an instance you won't be able to connect to the instance When you connect to the instance you must specify the private key that 
corresponds to the key pair you specified when you launched the instance Amazon EC2 doesn't keep a co py of your private key; therefore if you lose a private key there is no way to recover it 11 – General Requirement Use multi tenancy mechanisms provided by the CSP eg to separate the tenant’s web application and network traffic from other tenants use the CSP’s hypervisor virtualisation instead of web server software virtual hosting AWS Response Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment Amazon EC2 reduces the time required to obtain and boot new server instances to minutes allowing you to quickly scale capacity both up and down as your computing require ments change The AWS environment is a virtualized multi tenant environment Customer can also select dedicated Amazon EC2 instances which are single tenant AWS has implemented security management processes PCI controls and other security c ontrols designed to isolate each customer from other customers AWS systems are designed to prevent you from accessing physical hosts or instances not assigned to you by filtering through the virtualization software This architecture has been validated by an independent PCI Qualified Security Assessor (QSA) and was found to be in compliance with all requirements of PCI DSS version 31 published in April 2015 Note : AWS also has single tenancy options Dedicated Instances are Amazon EC2 instances launched w ithin your Amazon Virtual Private Cloud (Amazon VPC) that run hardware dedicated to a single customer Dedicated Instances Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 19 let you take full advantage of the benefits of Amazon VPC and the AWS cloud while isolating your Amazon EC2 compute instances at the hardware level 12 – General Requirement Perform up todate encrypted backups in a format avoiding CSP lock in stored offline at the tenant’s premises or at a second CSP requiring multi factor authentication to modify/delete data Annually test the recovery process AWS Response You retain control and ownership of your content and it is your responsibility to manage your data backup plans You can export your EC2 instance image (an EC2 instance image in AWS is referred to as an Amazon Machine Image A MI) and use it on premise or at another provider (subject to software licensing restrictions) For more information see Introduction to AWS Security Processes AWS supports several methods for loading and retrieving data including: the public Internet; a direct network connection with AWS Direct Connect; the AWS Import/Export service where AWS will import data into S3; and for backups of application data the AWS St orage Gateway helps you backup your data to AWS AWS allows you to move data as needed on and off AWS storage AWS Import/Export service for S3 accelerates moving large amounts of data into and out of AWS using portable storage devices for transport AWS allows you to perform your own backups to tapes using your own tape backup service provider However a tape backup is not a service provided by AWS Amazon S3 service is designed to drive the likelihood of data loss to near zero percent and the durab ility equivalent of multi site copies of data objects is achieved through data storage redundancy Amazon S3 provides a highly durable storage infrastructure Objects are 
redundantly stored on multiple devices across multiple facilities in an Amazon S3 Re gion Once stored Amazon S3 maintains the durability of objects by quickly detecting and repairing any lost redundancy Amazon S3 also regularly Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 20 verifies the integrity of data stored using checksums If corruption is detected it is repaired using redunda nt data Data stored in S3 is designed to provide 99999999999% durability and 9999% availability of objects over a given year AWS allows you to us your own encryption mechanisms t o encrypt backups for nearly all the servi ces including S3 EBS and EC2 IPSec tunnels to VPC are also encrypted Amazon S3 also offers you Server Side Encryption as an option You can also use third party encryption technologies The AWS CloudHSM service allows you to protect your encryption keys within HSMs designed and validated to government standards for secure key management You can securely generate store and manage the cryptographic keys used for data encryption such that they are accessible only by you AWS CloudHSM helps you comply with strict key manageme nt requirements without sacrificing application performance AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data and uses Hardware Security Modules (HSMs) to protect the security of your keys AWS Key Management Service is integrated with several other AWS services to help you protect your data you store with these services AWS Key Management Service is also integrated with AWS CloudTrail to provide you with l ogs of all key usage to help meet your regulatory and compliance needs 13 – General Requirement Contractually retain legal ownership of tenant data Perform a due diligence review of the CSP’s contract and financial viability as part of assessing privacy and legal risks AWS Response You retain control and ownership of your data AWS only uses your content to maintain or provide the AWS services that you have selected or to comply with the law or a binding legal government request AWS treats all custome r content the same and has no insight as to what type of content that you choose to store in AWS AWS simply makes available the Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 21 compute storage database and networking services that you select See https ://awsamazoncom/agreement/ for further information AWS errs on the side of protecting your privacy and is vigilant in determining which law enforcement requests we must comply with AWS does not hesitate to challenge orders from law enforcement if we think the orders lack a solid basis Further legal information is available at this site https://awsamazoncom/legal/ 14 – General Requirement Implement adequately high bandwidth low latency reliable networ k connectivity between the tenant (including the tenant’s remote users) and the cloud service to meet the tenant’s availability requirements AWS Response You can choose your network path to AWS facilities including multiple VPN endpoints in each AWS Regi on In addition AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS Using AWS Direct Connect you can establish private connectivity between AWS and your data center office or colocation environment w hich in many cases can reduce your network costs increase 
bandwidth throughput and provide a more consistent network experience than Internet based connections Refer to AWS Overview of Security Processes Whitepaper for additional details available at http://awsamazoncom/security AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations Using industry standard 8021q VLANs thi s dedicated connection can be partitioned into multiple virtual interfaces This allows you to use the same connection to access public resources such as objects stored in Amazon S3 using public IP address space and private resources such as Amazon EC2 in stances running within an Amazon Virtual Private Cloud (VPC) using private IP space while maintaining network separation between the public and private environments Virtual interfaces can be reconfigured at any time to meet your changing needs Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 22 Network latency over the Internet can vary given that the Internet is constantly changing how data gets from point A to B With AWS Direct Connect you choose the data that utilizes the dedicated connection and how that data is routed which can provide a more consistent network experience over Internet based connections AWS Direct Connect makes it easy to scale your connection to meet your needs AWS Direct Connect provides 1 Gbps and 10 Gbps connections and you can easil y provision multiple connections if you need more capacity You can also use AWS Direct Connect instead of establishing a VPN connection over the Internet to your Amazon VPC avoiding the need to utilize VPN hardware that frequently can’t support data tran sfer rates above 4 Gbps 15 – General Requirement Use a cloud service that meets the tenant’s availability requirements Assess the Service Level Agreement penalties and the number severity recency and transparency of the CSP’s scheduled and unschedule d outages AWS Response AWS commit s to high levels of availability in its service level agreements (SLAs) For example Amazon EC2 commits to annual uptime percentage of at least 9995% during the service year Amazon S3 commits to monthly upt ime percentage of at least 999 % Service credits are provided in the case these availability metrics are not met See https://awsamazoncom/legal/service level agreements/ For many servi ces AWS can perform regular maintenance and system patching without rendering the service unavailable or requiring reboots AWS’ own maintenance and system patching generally do not impact you You control maintenance of the instances themselves AWS publ ishes our most up totheminute information on service availability on the Service Health Dashboard Amazon Web Services keeps a running log of all service interruptions that we publish for the past year Refer to http://statusawsamazoncom Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 23 You should architect your AWS usage to take advantage of multiple Regions and Availability Zones Distributing applications across multiple Availability Zones provides the ability to remain resilient in the face of most failure modes including natural disasters or system failures AWS utilizes automated monitoring systems to provide a high level of service performance and availability Proactive monitoring is available through a variety of online tools both for internal a nd external use Systems within AWS are extensively instrumented 
to monitor key operational metrics Alarms are configured to notify operations and management personnel when early warning thresholds are crossed on key operational metrics An on call schedu le is used such that personnel are always available to respond to operational issues This includes a pager system so alarms are quickly and reliably communicated to operations personnel AWS Network Management is regularly reviewed by independent third party auditors as a part of AWS ongoing compliance with SOC PCI DSS ISO 27001 and FedRAMP 16 – General Requirement Develop and annually test a disaster recovery and business continuity plan to meet the tenant’s availability requirements eg where feasibl e for simple architectures temporarily use cloud services from an alternative CSP AWS Response You retain control and ownership of your data AWS provides you with the flexibility to place instances and store data within multiple geographic regions as we ll as across multiple Availability Zones within each region Each Availability Zone is designed as an independent failure zone In case of failure automated processes move your data traffic away from the affected area AWS SOC reports provides further det ails ISO 27001 standard Annex A domain 15 provides additional details AWS has been validated and certified by an independent auditor to confirm alignment with ISO 27001 certification Using AWS you can enable faster disaster recovery of your critical IT systems without incurring the infrastructure expense of a second physical site The AWS cloud supports many popular disaster recovery (DR) architectures from “pilot light” environments that are ready to scale up at a moment’s notice to “hot standby” environments that enable rapid failover For more information about Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 24 Disaster Recovery on AWS see the Disaster Recovery website and Disaster Recovery whitepaper AWS provides you with the capability to implement a robust continuity plan including the utilization of frequent server instance back ups data redundancy replication and multi region/ava ilability zone deployment architectures You can place instances and store data within multiple geographic regions as well as across multiple Availability Zones within each region Each Availability Zone is designed as an independent failure zone In case of failure automated processes move customer data traff ic away from the affected area AWS data centers incorporate physical protection to mitigate against environmental risks AWS’ physical protection against environmental risks has been validated by an independent auditor and has been certified as being in alignment with ISO 27002 best practices Refer to ISO 27001 standard Annex A domain 9 and the AWS SOC 1 Type II report for additional information You retain control and ownership of your content and it is your responsibility to manage your data backup plans You move data as needed on and off AWS storage AWS Import/Export service for S3 accelerates moving large amounts of data into and out of AWS using portable storage devices for transport VM Impor t/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on premises environment This offering allows you to leverage your existing investments in the virtual machines that you have built to meet your IT security configuration management and compliance requirements by bringing those 
virtual machines into Amazon EC2 as ready touse instances You can also export imported instances back to your on premises virtualizatio n infrastructure allowing you to deploy workloads across your IT infrastructure VM Import/Export is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon S3 See https://awsamazoncom/ec2/vm import/ for further information Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 25 17 – General Requirement Manage the cost of a genuine spike in demand or denial of service via contractual spending limits denial of service mitigation services and judicious use of the CSP’s infrastructure capacity eg limits on automated scaling AWS Response To help guarantee availability of AWS resources as well as minimize billing risk for new customers AWS maintains service limits for each account Some service limits are raised automatically as you build a history with AWS though most AWS services require that you request limit increases manually For a list of the default limits for each service as well as how to request a service limit increase see AWS Service Limits Note : Most limits are specific to a particular AWS region so if your use case requires higher limits in multiple regions file separate limit increase requests for each region you plan to use To avoid exceeding service limits while building or scaling your application you can use the AWS Trusted Advisor Service Limits check to monitor some limits For a list of limits that are included in the Trusted Advisor check see Service Limit s Check Questions EC2 has a service specific limits dashboard that can help you manage your instance EBS and Elastic IP limits For more information about EC2's Limits dashboard see Amazon EC2 Service Limits For more information about service limits go here You can also monitor your AWS costs by using CloudWatch With CloudWatch you can create billing alerts that notify you when your usage of your services exceeds thresholds that you define You specify these threshold amounts when you create the billing alerts When your usage exceeds these amounts A WS sends you an email notification You can also sign up to receive notifications when AWS prices change For more information go here Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 26 Cost Explorer is a fr ee tool that you can use to view graphs of your costs (also known as spend data) for up to the last 13 months and forecast how much you are likely to spend for the next three months You can use Cost Explorer to see patterns in how much you spend on AWS r esources over time identify areas that need further inquiry and see trends that you can use to understand your costs You can also specify time ranges for the data you want to see and you can view time data by day or by month For example you can use Cost Explorer to see which service you use the most which Availability Zone (AZ) most of your traffic is in which linked account uses AWS the most and more For more information go here Within the Cost Explorer tool a budget is a way to plan your costs (also known as spend data) and to track how close your costs are to exceeding your budgeted amount Budgets use data from Cost Explorer to provide y ou with a quick way to see your estimated charges from AWS and to see how much your predicted usage will accrue in charges by the end of the month Budgets also compare the estimated charges to the amount 
that you want to spend and lets you see how much of your budget has been spent Budgets are updated every 24 hours Budgets track your unblended costs and subscriptions but do not track refunds AWS does not use your forecasts to create a budget for you You can create budgets for different types of cos t For example you can create a budget to see how much you are spending on a particular service or how often you call a particular API operation Budgets use the same data filters as Cost Explorer For more information go here Auto Scaling helps you maintain application availability and allows you to scale your Amazon EC2 capacity up or down aut omatically according to conditions you define You can use Auto Scaling to help ensure that you are running your desired number of Amazon EC2 instances Auto Scaling can also automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and decrease capacity during lulls to reduce costs Auto Scaling is well suited both to applications that have stable demand patterns or that experience hourly daily or weekly variability in usage You can specify the maximum number of instances in each Auto Scaling group and Auto Scaling ensures that your group never goes above this size Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 27 Distributed denial of service (DDoS) attacks are sometimes used by malicious actors in an attempt to flood a network system or application with more traffic connections or requests than it can handle Not surprisingly customers often ask us how we can help them protect their applications against these types of attacks To help you optimize for availability AWS provides best practices that allow you to use the scale of AWS to build a DDoS resilient architecture IaaS Risk Mitigations 1 – IaaS Requirement Securely configure harden and maintain VMs with host based security controls eg firewall intrusion prevention system logging ant ivirus software and prompt patching of all software that the tenant is responsible for AWS Response You retain control of their own guest operating systems software and applications and are responsible for performing vulnerability scans and patching of your own systems Regularly patch update and secure the operating system and applications on your instance For more information about updating Amazon Linux see Managing Software on Your Linux Instance For more information about updating your Windows instance see Updating Your Windows Instance in the Amazon EC2 User Guide for Microsof t Windows Instances Amazon EC2 provides a complete firewall solution; this mandatory inbound firewall is configured in a default deny ‐all mode and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic The traffic may be rest ricted by protocol by service port as well as by source IP address (individual IP or Classless Inter ‐Domain Routing (CIDR) block) AWS further encourages you to apply additional per ‐instance filters with host ‐based firewalls such as IPtables or the Windo ws Firewall and VPNs This can restrict both inbound and outbound traffic AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application ava ilability compromise security or consume excessive resources AWS WAF gives you Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 28 control over which traffic to allow or 
block to your web applications by defining customizable web security rules You can use AWS WAF to create custom rules that block commo n attack patterns such as SQL injection or cross site scripting and rules that are designed for your specific application New rules can be deployed within minutes letting you respond quickly to changing traffic patterns Also AWS WAF includes a full featured API that you can use to automate the creation deployment and maintenance of web security rules With AWS WAF you pay only for what you use AWS WAF pricing is based on how many rules you deploy and how many web requests your web application receives There are no upfront commitments This paper provides AWS best practices for DDoS resiliency https://d0awsstaticcom/whitepapers/DDoS_White_Paper_June2015pdf AWS Elastic Beanstalk is an easy touse service for deploying and scaling web applications and services developed with Java NET PHP Nodejs Python Ruby Go and Docker on familiar servers such as Apache Nginx Passenger and IIS Elastic Beanstalk regularly releases updates for the Linux and Windows Server based platforms that run applications on an Elastic Beanstalk environment A platform consists of a software component (an AMI running a specific version of an OS tools and Elastic Beanstalk specific scripts) and a configuration component (the default settings applied to environments created with the platform) New platform versions provide updates to existing software components and support for new fea tures and configuration options With managed platform updates you can configure your environment to automatically upgrade to the latest version of a platform during a scheduled maintenance window Your application remains in service during the update process with no reduction in capacity You can configure your environment to automatically apply patch version updates or both patch and minor version updates Managed platform updates don't support major version updates which may introduce changes that are backwards incompatible Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 29 2 – IaaS Requirement Use a corporately approved and secured computer to administer VMs requiring access from the tenant’s IP address encrypted traffic and a SSH/RDP PKI key pair protected with a strong passphrase AWS Response Amazon VPC offers a wide range of tools that give you more control over your AWS infrastructure Within a VPC you can define your own network topology by defining subnets and routing tables and you c an restrict access at the subnet level with network ACLs and at the resource level with VPC security groups You can isolate your resources from the Internet and con nect them to your own data center through a VPN You can assign elastic IP addresses to some instances and connect them to the public Internet through an Internet gateway while keeping the rest of your infrastructure in private subnets VPC makes it easier to protect your AWS resources while you keep the benefits of AWS with regards to flexibility scalab ility elasticity performance availability and the pay asyou use pricing model You can add or remove rules for a secu rity group (also referred to as authorizing or revoking inbound or outbound access) A rule applies either to inbound traffic (ingress) or outbound traffic (egress) You can grant access to a specific CIDR range or to another security group in your VPC or in a peer VPC (requires a VPC p eering connection) For example by leveraging part of your 
organisation’s public IP address range you could limit inbound SSH and RDP access to be allowed only from your network (via the VPC Internet Gateway) Similarly if a VPN or Direct Connect connecti on to the VPC is in place you could limit SSH and RDP access to only a section of your organisation’s private IP range You can connect your VPC to remote networks by using a VPN connection The following are some of the connectivity options available to you • AWS Hardware VPN (VPC VPG) • AWS Direct Connect • Software VPN Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 30 Amazon EC2 uses public –key cryptography to encrypt and decrypt login information Public –key cryptography uses a public key to encrypt a piece of data such as a password then the recipient u ses the private key to decrypt the data The public and private keys are known as a key pair To log in to your instance you must create a key pair specify the name of the key pair when you launch the instance and provide the private key when you connect to the instance Linux instances have no password and you use a key pair to log in using SSH With Windows instances you use a key pair to obtain the administrator password and then log in using RDP You can use Amazon EC2 to create yo ur key pair this will create a 2048 bit SSH 2 RSA keys For more information see Creating Your Ke y Pair Using Amazon EC2 Alternatively you could use a third party tool and then import the public key to Amazon EC2 For more information see Importing Your Own Key Pair to Amazon EC2 Amazon EC2 stores the public key only and you store the private key Anyone who possesses your private key can decrypt your login infor mation so it's important that you store your private keys in a secure place Amazon EC2 accepts the following formats: • OpenSSH public key format (the format in ~/ssh/authorized_keys) • Base64 encoded DER format • SSH public key file format as specified in RFC4716 Amazon EC2 does not accept DSA keys Make sure your key generato r is set up to create RSA keys Supported lengths: 1024 2048 and 4096 3 – IaaS Requirement Only use VM template images p rovided by trusted sources to help avoid the accidental or deliberate presence of malware and backdoor user accounts Protect the tenant’s VM template images from unauthorised changes Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 31 AWS Response An Amazon Machine Image (AMI) provides the information required to launch an instance which is a virtual server in the cloud You specify an AMI when you launch an instance and you can launch as many instances from the AMI as you need You can also launch instances from as many different AMIs as you need You can customize the instance that you launch from a public AMI and then save that configuration as a custom AMI for your own use Instances that you launch from your AMI use all the customizations that you've made You can also use custom AMI instances with A WS CloudFormation AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources provisioning and updating t hem in an orderly and predictable fashion After you create an AMI you can keep it private so that only you can use it or you can share it with a specified list of AWS accounts You can also make your custom AMI public so that the community can use it Building a safe secure usable AMI for publ ic consumption is a fairly 
straightforward process if you follow a few simple guidelines For information about how to create and use shared AMIs see Sh ared AMIs You also control the updating and patching of your guest OS including security updates Amazon ‐provided Windows and Linux ‐based AMIs are updated regularly with the latest patches so if you do not need to preserve data or customizations on your running Amazon AMI instances you can simply relaunch new instances with the latest updated AMI In addition updates are provided for the Amazon Linux AMI via the Amazon Linux yum repositories VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on premises environment This offering allows you to leverage your existing investments in the virtual machines that you have built to meet your IT security configuration management and compliance requirements by bringing those virtual machines into Amazon EC2 as ready touse instances You can also export imported instances back to your on premises virtualization infrastructure allowing you to deploy workl oads across your IT infrastructure VM Import/Export is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon S3 Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 32 The Center for Internet Security Inc (CIS) is a 501c3 nonprofit organization focused on enhancing the cy ber security readiness and response of public and private sector entities with a commitment to excellence through collaboration CIS provides resources that help partners achieve security goals through expert guidance and cost effective solutions CIS pro vide preconfigured AMI’s on the AWS Marketplace here: https://awsamazoncom/marketplace/seller profile/ref=dtl_pcp_sold_b y?ie=UTF8&id=6b3b0dc2 c6f4 487b 8f29 9edba5f39eed Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS Amazon Inspector automatically assessed applications for vulnerabilities or deviations from best practices After performing an assessment Amazon Inspector produces a detailed list of security findings prioritized by level of security 4 – IaaS Requirement Implement netw ork segmentation and segregation eg n tier architecture using host based firewalls and CSP’s network access controls to limit inbound and outbound VM network connectivity to only required ports/protocols AWS Response Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define You have complete control over your virtual networking environment including selec tion of your own IP address range creation of subnets and configuration of route tables and network gateways You can easily customize the network configuration for your Amazon Virtual Private Cloud For example you can create a public facing subnet for your webservers that has access to the Internet and place your backend systems such as databases or application servers in a private facing subnet with no Internet access You can leverage multiple layers of security including security groups and networ k access control lists to help control access to Amazon EC2 instances in each subnet Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 33 Additionally you 
can create a Hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extensio n of your corporate datacenter A security group acts as a virtual firewall for your instance to control inbound and outbound traffic When you launch an instance in a VPC you can assign the instance to up to five security groups Security groups act at the instance level not the subnet level Therefore each instance in a subnet in your VPC could be assigned to a different set of security groups If you don't specify a particular group at launch time the instance is automatically assigned to the default security group for the VPC For each security group you add rules that control the inbound traffic to instances and a separate set of rules that control the outbound traffic This section describes the basics things you need to know about security grou ps for your VPC and their rules The default state is to deny all incoming traffic and you should plan carefully what you will open when building and securing your applications Well ‐ informed traffic management and security design are still required on a per instance basis AWS further encourages you to apply additional per ‐instance filters with host ‐based firewalls such as IPtables or the Windows Firewall and VPNs This can restrict both inbound and outbound traffic A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC For more information about the differences between security groups and network ACLs see Comparison of Security Groups and Network ACLs 5 – IaaS Requirement Utilise secure programming practices for software developed by the tenant AWS Response It is your responsibility to u se secure programming practices Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 34 AWS’s development process for AWS infrastructure and services follows secure software development best practices whic h include formal design reviews by the AWS Security Team threat modeling and completion of a risk assessment Static code analysis tools are run as a part of the standard build process and all deployed software undergoes recurring p enetration testing performed by carefully selected industry experts Our security risk assessment reviews begin during the design phase and the engagement lasts through launch to ongoin g operations This whitepaper describes how Amazon Web Services (AWS) adds value in the various phases of the software development cycle with specific focus on development and test For the development phase it shows how to use AWS for managing version control; it describes project management tools the build process and environments hosted on AWS; and it illustrates best practices For the test phase it describes how to manage test environments and run various kinds of tests including load testing acceptance testing fault toler ance testing etc AWS provides unique advantages in each of these scenarios and phases allowing you to pick and choose the ones most appropriate for your software development project The intended audiences for this paper are project managers developers testers systems architects or anyone involved in software production activities With AWS your development and test teams can have their own 
resources scaled according to their own needs Provisioning complex environments or platforms composed of mul tiple instances can be done easily using AWS CloudFormation stacks or some of the other automation techniques described In large organizations comprising multiple teams it is a good practice to create an internal role or service responsible for centraliz ing and managing IT resources running on AWS This role typically consists of: • Promoting internal development and test practices described here • Developing and maintaining template AMIs and template AWS CloudFormation stacks with the different tools and p latforms used in your organization • Collecting resource requests from project teams and provisioning resources on AWS according to your organization’s policies including network configuration (eg Amazon VPC) security configurations (eg Security Gr oups and IAM credentials) Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 35 • Monitoring resource usage and charges using Amazon CloudWatch and allocating these to team budgets While you can use the AWS Management Console to achieve the tasks above you might want to develop your own internal provisionin g and management portal for a tighter integration with internal processes You can do this by using one of the AWS SDKs which allow programmatic access to resources running on AWS 6 – IaaS Requirement Architect to meet availability requirements eg minimal single points of failure data replication automated failover multiple availability zones geographically separate data centres and real time availability monitoring AWS Response AWS provides you with the capability to implement a robust continuity plan including the utilization of frequent server instance back ups data redundancy replication and multi region/availability zone deployment architectures The AWS Well Architected Framework whitepaper describes how you can assess and improve your cloud based architectures to better understand the business impact of your design decisions Included in the paper are the four general design principles as w ell as specific best practices and guidance in four conceptual areas (security reliability performance efficiency and cost optimization) These four areas are defined as the pillars of the Well Architected Framework AWS provides you with the flexibilit y to place instances and store data within multiple geographic regions as well as across multiple Availability Zones within each region You should architect your AWS usage to take advantage of multiple Regions and Availability Zones The Architecting for the Cloud whitepaper Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS You can use Amazon CloudWatch to collect and track metrics collect and monitor log files set alarms and automatically react to changes in your AWS resources Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances Amazon DynamoDB tables and Amazon RDS DB instances as well Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 36 as custom m etrics generated by your applications and services and any log files your applications generate You can use Amazon CloudWatch to gain system wide visibility into resource utilization application performance and operational health You can use these ins ights to react and keep your application running smoothly This whitepaper is intended for solutions architects 
and developers who are building solutions that will be deployed on Amazon Web Services (AWS) It provides architectural patterns and advice on how to design systems that are secure reliable high performing and cost efficient It includes a discussion on how to take advantage of attributes that are specific to the dynamic nature of cloud computing (elasticity infrastructure automation etc) In addition this whitepaper also covers general patterns explaining how t hese are evolving and how they are applied in the context of cloud computing 7 – IaaS Requirement If high availability is required implement clustering and load balancing a Content Delivery Network for public web content automated scaling with an adequ ate maximum scale value and real time availability monitoring AWS Response Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances in the cloud It enables you to achieve greater levels of fault tolerance in your applications seamlessly providing the required amount of load balancing capacity needed to distribute application traffic Achieve higher levels of fault tolerance for your a pplications by using Elastic Load Balancing to automatically route traffic across multiple instances and multiple Availability Zones Elastic Load Balancing ensures that only healthy Amazon EC2 instances receive traffic by detecting unhealthy instances and rerouting traffic across the remaining healthy instances If all of your EC2 instances in one Availability Zone are unhealthy and you have set up EC2 instances in multiple Availability Zones Elastic Load Balancing will route traffic to your healthy EC2 instances in those other zones Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 37 Auto Scaling helps you maintain application availability and allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define You can use Auto Scaling to help ensure that you are running your desired number of Amazon EC2 instances Auto Scaling can also automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and decrease capacity during lulls to reduce costs Auto Scaling is well suited both to applications that have stable demand patterns or that experience hourly daily or weekly variability in usage Whether you are running one Amaz on EC2 instance or thousands you can use Auto Scaling to detect impaired Amazon EC2 instances and unhealthy applications and replace the instances without your intervention This ensures that your application is getting the compute capacity that you expe ct Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service You can use Amazon Route 53 health checking and DNS failover features to enhance the availability of the applications running behind Elastic Load Balancers Route 53 will fail away from a load balancer if there are no healthy EC2 instances registered with the load balancer or if the load balancer itself is unhealthy Using Route 53 DNS failover you can run applications in multiple AWS regions and designate alternate load balancers for failover across regions In the event that your application is unresponsive Route 53 will remove t he unavailable load balancer endpoint from service and direct traffic to an alternate load balancer in another region To get started with Route 53 failover for Elastic Load Balancing visit the Elastic Load Balancing Developer 
Guide and the Amazon Route 53 Developer Guide Amazon CloudFront is a global content delivery network (CDN) service It integrates with other Amazon Web Services products to give developers and businesses an easy way to distribute content to end users with low latency high data transfer speeds and no minimum usage commitments The service automatically responds as demand increases or decreases without any intervention from you Amazon CloudFront also uses multiple layers of caching at each edge location and collapses simultaneous requests for the same object before contacting your origin server These optimizations further help reduce the need to scale your origin infrastructure as your website becomes more popular Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 38 Amazon CloudFront is built using Amazo n’s highly reliable infrastructure The distributed nature of edge locations used by Amazon CloudFront automatically routes end users to the closest available location as required by network conditions Origin requests from the edge locations to AWS origin servers (eg Amazon EC2 Amazon S3 etc) are carried over network paths that Amazon constantly monitors and optimizes for both availability and performance AWS WAF is a web application firewall that helps pr otect your web applications from common web exploits that could affect application availability compromise security or consume excessive resources AWS WAF gives you control over which traffic to allow or block to your web applications by defining custom izable web security rules You can use AWS WAF to create custom rules that block common attack patterns such as SQL injection or cross site scripting and rules that are designed for your specific application New rules can be deployed within minutes let ting you respond quickly to changing traffic patterns Also AWS WAF includes a full featured API that you can use to automate the creation deployment and maintenance of web security rules With AWS WAF you pay only for what you use AWS WAF pricing is b ased on how many rules you deploy and how many web requests your web application receives There are no upfront commitments PaaS Risk Mitigations 1 – PaaS Requirement Securely configure and promptly patch all software that the tenant is responsible for AWS Response While AWS provides a managed service you are responsible for setting up and managing network controls such as firewall rules and for managing platform level identity and access management separately from IAM AWS is responsible for patching systems supporting the delivery of service to custom ers This is done as required per AWS policy and in accordance with ISO 27001 NIST and PCI requirements AWS manages the underlying Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 39 infrastructure and foundation services the operating system and the application platform Elastic Beanstalk regularly re leases platform updates to provide fixes software updates and new features With managed platform updates you can configure your environment to automatically upgrade to the latest version of a platform during a scheduled maintenance window Your application remains in service during the update process with no reduction in capacity You can configure your environment to automatically apply patch version updates or both patch and minor version updates Managed platform updates don't support major version updates which may introduce changes that 
are backwards incompatible When you enable managed platform updates you can also configure AWS Elastic Beanstalk to replace all instances in your environment during the maintenance window even if a platform update isn't available Replacing all instances in your environment is helpful if your application encounters bugs or memory issues when running for a long period 2 – PaaS Requirement Utilise secure programming practi ces for software developed by the tenant AWS Response Covered in 5 IaaS 3 – PaaS Requirement Architect to meet availability requirements eg minimal single points of failure data replication automated failover multiple availability zones geographical ly separate data centres and real time availability monitoring AWS Response Covered in 6 IaaS Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 40 4 – PaaS Requirement If high availability is required implement clustering and load balancing a Content Delivery Network for public web content automated scaling with an adequate maximum scale value and real time availability monitoring AWS Response Covered in 7 IaaS SaaS Risk Mitigations 1 – SaaS Requirement Use security controls specific to the cloud service eg tokenisation to replace sensitive data with non sensitive data or ASD approved encryption of data (not requiring processing) and avoid exposing the decryption key AWS Response AWS provides specific SOC controls to address the threat of inappropriate access and the public certification and compliance initiatives covered in this document address efforts to prevent inappropriate access All certifications and third party attestations evaluate logical access preventative and detective controls In addition periodic risk assessments focus on ho w access is controlled and monitored AWS allows you to implement yo ur own security architecture For more information about server and network security see the AWS security whitepaper All data stored by AWS on behalf of you has strong tenant isolation security a nd control capabilities You retain control and ownership of your data thus it is your responsibility to choose to encrypt the data AWS allows you to use your own encryption mechanisms for nearly all of the AWS services including S3 EBS and EC2 IPSec tunnels to VPC are also encrypted In addition you can leverage AWS Key Management Systems (KMS) to create and control encryption keys using 256 bit AES envelope encryption (refer to https://awsamazoncom/kms/ ) Amazon Web Services – Understanding the ASD’s Cloud Computing Security for Tenants in the Context of AWS Page 41 2 – SaaS Requirement If high availability is required where possible and appropriate implement additional cloud services providing layered denial of service mitigation where these cloud services might be provided by third party CSPs AWS Response Covered in 7 IaaS Additionally the A WS Best Practices for DDoS Resiliency whitepaper provides guidance on how you can improve the resiliency of your applications running on Amazon Web Services (AWS) against Distributed Denial of Service attacks The paper provides an overview of Distributed Denial of Service attacks techniques that can help maintain availability and reference architectures to provide architectural guidance with the goal of improving your resiliency Further Reading For additional help see the following sources: • AWS Security Page: http://awsamazoncom/security • AWS Compliance Page: http://awsamazoncom/compliance • AWS IRAP Page: 
http://aws.amazon.com/compliance/irap/
• Overview of AWS Security Processes: http://d0.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf
• AWS Risk and Compliance Whitepaper: https://d0.awsstatic.com/whitepapers/compliance/AWS_Risk_and_Compliance_Whitepaper.pdf
• AWS Security Best Practices: https://d0.awsstatic.com/whitepapers/aws-security-best-practices.pdf
• KMS Cryptographic Details: https://d0.awsstatic.com/whitepapers/KMS-Cryptographic-Details.pdf

Document Revisions
Date: June 2017 – Description: Initial publication
|
General
|
consultant
|
Best Practices
|
Use_Amazon_Elasticsearch_Service_to_Log_and_Monitor_Almost_Everything
|
This paper has been archived For the latest version of this content visit: https://d1awsstaticcom/architecturediagrams/ArchitectureDiagrams/observabilitywith logstracesmetricsrapdf Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything First published December 2016 Updated July 13 2021 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 What Is Elasticsearch? 2 How Is Elasticsearch used? 5 What about commercial monitoring tools? 6 Why use Amazon ES? 7 Best practices for configuring your Amazon ES domain 8 Elasticsearch Security and Compliance 9 Security 9 Compliance 10 MultiAccount Log aggregation use case 11 UltraWarm storage for Amazon ES 12 Pushing Log data from EC2 instances into Amazon ES 13 Pushin g Amazon CloudWatch Logs into Amazon ES 14 Using AWS Lambda to send Logs into Amazon ES 16 Using Amazon Kinesis Data Firehose to load data into Amazon ES 18 Implement Kubernetes logging with EFK and Amazon ES 19 Settin g up Kibana to visualize Logs 20 Alerting for Amazon ES 20 Other configuration options 20 Conclusion 21 Contributors 21 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract Amazon Elasticsearch Service (Amazon ES) makes it easy to deploy operate and scale Elasticsearch for log analytics full text search application monitoring and many more use cases It is a fully managed service that delivers the easy touse APIs and realtime capabilities of Elasticsearch along with the availability scalability and security required by production workloads Amazon ES is a service designed to be useful for logging and monitoring It is fully managed by Amazon Web Services (AWS) and offers com pelling value relative to its cost of operation This whitepaper provide s best practices for feeding log data into Elasticsearch and visualizing it with Kibana using a serverless inbound log management approach It show s how to use Amazon CloudWatch Logs and the unified Amazon CloudWatch Logs agent to manage inbound logs in Amazon Elasticsearch You can use this approach instead of the more traditional ELK Stack (Elasticsearch Logstash Kibana) approach It also show s you how to move log data into Amazon ES using Amazon Kinesis Data Firehose – and identifies the strengths and weaknesses of using Kinesis versus the simpler CloudWatch approach while providing tips and techniques for easy setup and management of the solution To get the most out of reading this whitepaper it’s helpful to be 
familiar with AWS Lambda functions Amazon Simple Storage Service ( Amazon S3) and AWS Identity and Access Management (IAM) This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 1 Introduction AWS Cloud implementations differ significantly from on premises infrastructure New log sources the volume of logs and the dynamic nature of the cloud introduce new logging and monitoring challenges AWS provides a range of services that help you to meet thos e challenges For example AWS CloudTrail captures all API calls made in an AWS account Amazon Virtual Private Cloud (Amazon VPC) Flow Logs capture network traffic inside an Amazon VPC and both containers and EC2 instances can come and go in an elastic f ashion in response to AWS Auto Scaling events Many of these log types have no direct analogy in the on premises data center world This whitepaper explains how to use Amazon Elasticsearch Service (Amazon ES) to ingest index analyze and visualize logs p roduced by AWS services and your applications without increasing the burden of managing or monitoring these systems Elasticsearch and its dashboard extension called Kibana are popular open source tools because they are simple to use and provide a quick time to value Additionally the tools are fully supported by AWS Support as well as by an active open source community With the m anaged Amazon ES service AWS reduces the effort required to set up and configure a search domain by creating and managing a multi node Elasticsearch cluster in an automated fashion replacing failed nodes as needed The domain is the searchable interface for Amazon ES and the cluster is the collection of managed compute nodes needed to power the system AWS currently supports versions of Elasticsearch and Kibana from 15 to 710 At the date of this writing the new 6x and 7x versions of Elasticsearch and Kibana offer several new features and improvements including UltraWarm real time anomaly detection index splitting weighted average aggregation higher indexing performance improved cluster coordination safeguards and an option to multiplex token filters support for field aliases and improved workflow for inspecting the data behind a visualization You can create new domains running Elasticsearch 710 and also easily upgrade existing 56 and 6x domains with no downtime using inplace version upgrades You can easily scale your cluster with a single API call and configure it to meet your performanc e requirements by selecting from a range of instance types and storage options including solid state drive (SSD) backed EBS volumes Amazon ES provides This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticse arch Service to Log and Monitor (Almost) Everything 2 high availability using zone awareness which replicates data among three Availability Zones Amazon ES can be scaled up from a default limit of 20 data nodes to 200 data nodes in a single cluster including up to 3 petabytes of storage by requesting a service limit increase By taking advantage of Amazon ES you can concentrate on getting value from the data that is indexed by your Elasticsearch cluster and not on managing the cluster itself You can use AWS tools settings and agents to push data into Amazon ES Then you can configure Kibana 
dashboards to make it easy to understand interesting correlat ions across multiple types of AWS services and application logs Examples include VPC networking logs application and system logs and AWS API calls Once the data is indexed you can access it via an extensible simple and coherent API using a simple q uery domain specific language (DSL) and piped processing language (ppl) without worrying about traditional relational database concepts such as tables columns or SQL statements As is common with full text indexing you can retrieve results based on the closeness of a match to your query This can be very useful when working with log data to understand and correlate a key problem or failure This whitepaper show s you how to provision an Amazon ES Cluster push log data from Amazon EC2 Instances into Amaz on Elasticsearch push Amazon CloudWatch Logs into Amazon Elasticsearch use AWS Lambda to send logs into Amazon Elasticsearch use Amazon Kinesis Firehose to load data into Amazon ES implement Kubernetes logging with EFK and Amazon Elasticsearch and con figure Alerting for Amazon ES What Is Elasticsearch? Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to create a domain and deploy operate and scale Elasticsearch clusters in the AWS Cloud An Amazon ES domain is a serv ice wrapper around an Elasticsearch cluster A domain contains the engine instances (nodes) that process Amazon ES requests the indexed data that you want to search snapshots of the domain access policies and metadata This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 3 The first public release of the E lasticsearch engine was issued in early 2010 and since then the Elasticsearch project has become one of the most popular open source projects on GitHub Based on Apache Lucene internally for indexing and search Elasticsearch converts data such as logs t hat you supply into a JSON like document structure using key value pairs to identify the strings and values that are present in the data In Elasticsearch a document is roughly analogous to a row in a database and it has the following characteristics: • Has a unique ID • Is a collection of fields (similar to a column in a database table ) In the following example of a document the document ID is 34171 The fields include first name last name and so on Note that document types will be deprecated in APIs in Elasticsearch 700 and completely removed in 800 Figure 1 – Example of an Elasticsearch document Elasticsearch supports a RESTful web services interface You can use PUT GET POST and DELETE commands to interface with an Elasticsearch index which is a logical collection of documents that can be split into shards Most users and developers use command line tools such as cURL to test these capabilities and run simple queries and then develop their applications in the language of their choice The following illustration shows an Amazon ES domain that has an index with two shards Shard A and Shard B This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 4 Figure 2 – Elasticsearch terminology You can think of an Amazon ES domain as a service wrapper around an Elasticsearch cluster and the logical API 
entry point to interfaces with the system A cluster is a logical grouping of one or more nodes and indices An index is a logical grouping of do cuments each of which has a unique ID Documents are simply groupings of fields that are organized by type An index can be further divided into shards The Lucene search engine in Elasticsearch executes on shards that contain a subset of all documents th at are managed by a given cluster Conventional relational database systems aren’t typically designed to organize unstructured raw data that exists outside a traditional database in the same manner as Elasticsearch Log data varies from semi structured (su ch as web logs) to unstructured (such as application and system logs and related error and informational messages) Elasticsearch does not require a schema for your data and is often orders of magnitude faster than a relational database system when used to organize and search this type of data This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 5 Figure 3 – Amazon ES architecture Because Elasticsearch does not store data in a normalized fashion clusters can grow to 10s or 1000s of servers and petabytes of data Searches remain speedy because Elastic search stores documents that it creates in close proximity to the metadata that you search via the full text index When you have a large distributed system running on AWS there is business value in logging everything Elasticsearch helps you get as clos e to this ideal as possible by capturing logs on almost everything and making the logs easily accessible How Is Elasticsearch used? Many users initially start with Elasticsearch for consumption of logs (~50% of initial use cases involve logs) then event ually broaden their usage to include other searchable data Elasticsearch is also frequently used for marketing and clickstream analytics Some of the best examples of analytic usage come from the online retailing world where several major retailers use E lasticsearch One example of how they use the data is to follow the clickstream created by their order pipeline to understand buyer behavior and make recommendations either before or after the sale This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 6 Many log applications that target Elasticsearch also st art with the use of the Logstash agent and forwarder to transform and enrich their log data (such as geographic information and reformatting) prior to sending to their cluster Elasticsearch can produce analytic value in a relatively short period of time given the performance of its indexing engine The default index refresh rate is set at one second but is configurable given the size of your cluster and the rate of log ingestion B ecause Elasticsearch and Kibana are open source software it is not unusual to see enterprise customers providing Kibana web access across a large subset of desktops in departments that need to understand their customers better Amazon Elasticsearch Servic e (Amazon ES) provides support for cross cluster search enabling you to perform searches aggregations and visualizations across multiple Amazon ES domains with a single query or from a single Kibana interface With this 
feature you can separate heterog eneous workloads into multiple domains which provides better resource isolation and the ability to tune each domain for their specific workloads which can improve availability and reduce costs Trace Analytics is a new feature of Amazon Elasticsearch Se rvice that enables developers and IT operators to find and fix performance problems in distributed applications which leads to faster problem resolution times Trace Analytics is built using OpenTelemetry a Cloud Native Computing Foundation (CNCF) project that provides a single set of APIs libraries agents and collector services to capture distributed traces and metrics which enables customers to leverage Trace Analytics without having to re instrument their a pplications Trace Analytics is powered by the Open Distro for Elasticsearch project which is open source and freely available for everyone to download and use What about commercial monitoring tools? There are many popular commercial logging and monitori ng tools available from AWS partners such as Splunk Sumologic Loggly and Datadog These software asaservice (SaaS) and packaged software products provide real value and typically support a high level of commercial feature polish These packages gener ally require no installation or they are software packages that install very simply making getting started easy You might decide that you have enough spare time to devote to setting up Amazon ES and related log agents and that the capability it provides meets your requirements This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 7 Your decision to pick Amazon ES versus commercial software should include the cost of labor to establish and manage the service the setup and configuration time for the AWS services that you are using and the server and applicat ion instance logs that you want to monitor Kibana’s analytics capabilities continue to improve but are still relatively limited when compared with commercial purpose built monitoring software Commercial monitoring and logging products such as the ones we mentioned typically have very robust user administration capabilities Why use Amazon ES ? 
If you use Amazon ES, you will save considerable effort establishing and configuring a cluster, as well as maintaining it over time. Amazon ES automatically finds and replaces failed nodes in a cluster, and you can create or scale up a cluster with a few clicks in the console, a simple API call, or a command line interface (CLI) command. Amazon ES also automatically configures and provisions a Kibana endpoint, which you can use to begin visualizing your data. You can create Kibana dashboards from scratch, or import JSON files describing predefined dashboards and customize from there.

It is easy to provision an Amazon ES cluster. You can use the Amazon ES console to set up and configure a domain in minutes. If you prefer programmatic access, you can use the AWS CLI or the AWS SDKs (a sketch using the SDK for Python appears at the end of this section). The following steps are typically what you need to do to provision an Amazon ES cluster:
• Create a domain
• Size the domain appropriately for your workload
• Control access to your domain using a domain access policy or fine-grained access control
• Index data manually or from other AWS services
• Use Kibana to search your data and create visualizations

Best practices for configuring your Amazon ES domain
When you configure your Amazon ES domain, you choose the instance type and count for the data and the dedicated master nodes. Elasticsearch is a distributed service that runs on a cluster of instances, or nodes. These node types have different functions and require different sizing. Data nodes store the data in your indexes and process indexing and query requests. Dedicated master nodes don't process these requests; they maintain the cluster state and orchestrate the cluster.

Amazon ES supports five instance classes: M, R, I, C, and T. As a best practice, use the latest generation instance type from each instance class. For the latest supported instance classes, see Supported instance types in Amazon Elasticsearch Service.

When choosing an instance type for your data nodes, bear in mind that these nodes carry all the data in your indexes (storage) and do all the processing for your requests (CPU). As a best practice for heavy production workloads, choose the R5 or I3 instance type. If your emphasis is primarily on performance, the R5 typically delivers the best performance for log analytics workloads, and often for search workloads. The I3 instances are strong contenders and may suit your workload better, so you should test both. If your emphasis is on cost, the I3 instances have better cost efficiency at scale, especially if you choose to purchase reserved instances. For an entry-level instance or a smaller workload, choose the M5s. The C5s are a specialized instance relevant for heavy query use cases, which require more CPU work than disk or network. Use the T2 or T3 instances for development or QA workloads, but not for production.

When choosing an instance type for your dedicated master nodes, keep in mind that these nodes are primarily CPU bound, with some RAM and network demand as well. The C5 instances work best as dedicated masters up to about 75 data nodes per cluster. Above that node count, you should choose R5.

For log analytics use cases, you want to control the life cycle of data in your cluster. You can do this with a rolling index pattern: each day you create a new index, then archive and delete the oldest index in the cluster. You define a retention period that controls how many days (indexes) of data you keep in the domain, based on your analysis needs.
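To make the provisioning steps above concrete, the following is a minimal sketch using the AWS SDK for Python (boto3) and the Elasticsearch Service configuration API. The domain name, instance counts, and volume size are illustrative assumptions only; size them for your own workload as described above, and assume credentials and a default region are already configured for boto3.

```python
import boto3

# Sketch: provision a small log-analytics domain. All names and sizes below are
# placeholders, not recommendations for a specific workload.
es = boto3.client("es")  # Elasticsearch Service configuration API used by Amazon ES

response = es.create_elasticsearch_domain(
    DomainName="central-logs",               # hypothetical domain name
    ElasticsearchVersion="7.10",
    ElasticsearchClusterConfig={
        "InstanceType": "r5.large.elasticsearch",   # data nodes (R5 for log analytics)
        "InstanceCount": 4,
        "DedicatedMasterEnabled": True,
        "DedicatedMasterType": "c5.large.elasticsearch",
        "DedicatedMasterCount": 3,
        "ZoneAwarenessEnabled": True,               # replicate across Availability Zones
        "ZoneAwarenessConfig": {"AvailabilityZoneCount": 2},
    },
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 100},
)
print(response["DomainStatus"]["ARN"])
```

The same parameters map directly to the console and AWS CLI options, so you can prototype in the console and then capture the equivalent configuration in code.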
For more information, see Index State Management.

You should try to align your shard and instance counts so that your shards distribute equally across your nodes. You do this by adjusting shard counts or data node counts so that they are evenly divisible.

Elasticsearch Security and Compliance
Security
Amazon ES is a managed service. This means that AWS is responsible for security of the underlying infrastructure, operating system patching, and management of the Elasticsearch software, while you are responsible for the setup of service-level security controls. This includes areas such as management of authentication and access controls, data encryption in motion, and data encryption at rest.

Authentication and access control for Elasticsearch are implemented using a combination of SigV4 signing and AWS IAM. Integration with SigV4 will be covered in greater depth during the setup of logging services. For examples of IAM policies that can be used to secure access to Amazon Elasticsearch using resource-based policies, identity-based policies, or IP-based policies, review these policy examples (a sketch of an IP-based domain access policy follows this section).

All Amazon Elasticsearch domains are created in a dedicated VPC. This setup keeps the cluster secure and isolates inter-node network traffic. By default, traffic within this isolated VPC is unencrypted, but you can also enable node-to-node TLS encryption. This feature must be enabled at the time of Elasticsearch cluster creation; to use this feature for an existing cluster, you must create a new cluster and migrate your data. Node-to-node encryption requires Elasticsearch version 6.0 or later.

For data encryption at rest, Amazon ES natively integrates with AWS Key Management Service (AWS KMS), making it easy to secure data within Elasticsearch indices, automated snapshots, Elasticsearch logs, swap files, and all data in the application directory. This option, along with node-to-node encryption, must be set up during domain creation. Encryption of data at rest requires Elasticsearch 5.1 or later. Encryption of manual snapshots and encryption of slow logs and error logs must also be configured separately. Manual snapshots can be encrypted using server-side encryption in S3; for more details, see Registering a manual snapshot repository. If published to Amazon CloudWatch, slow logs and error logs can be encrypted using the same KMS master key as the ES domain. For more information, see Encrypt log data in CloudWatch Logs using AWS KMS.

Amazon ES offers fine-grained access control (FGAC), which adds multiple capabilities to give you tighter control over your data. FGAC features include the ability to use roles to define granular permissions for indices, documents, or fields, and to extend Kibana with read-only views and secure multi-tenant support. Two forms of authentication and authorization are provided by FGAC: a built-in user database, which makes it easy to configure usernames and passwords inside of Elasticsearch, and AWS Identity and Access Management (IAM) integration, which lets you map IAM principals to permissions.
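As a concrete illustration of the resource-based, IP-restricted access policies mentioned above, the following sketch attaches a policy that allows HTTP access to the domain only from a specific network range. The account ID, domain name, region, and CIDR block are placeholder assumptions; substitute your own values, and note that fine-grained access control or VPC-only domains change how such policies are evaluated.

```python
import json
import boto3

es = boto3.client("es")

# Hypothetical values -- replace with your own account ID, domain name, and network range.
account_id = "111122223333"
domain_name = "central-logs"
allowed_cidr = "192.0.2.0/24"

access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "es:ESHttp*",
            "Resource": f"arn:aws:es:us-east-1:{account_id}:domain/{domain_name}/*",
            "Condition": {"IpAddress": {"aws:SourceIp": [allowed_cidr]}},
        }
    ],
}

# Attach (or replace) the domain access policy.
es.update_elasticsearch_domain_config(
    DomainName=domain_name,
    AccessPolicies=json.dumps(access_policy),
)
```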
FGAC is powered by Open Distro for Elasticsearch, which is an Apache 2.0-licensed distribution of Elasticsearch. Fine-grained access control is available on domains running Elasticsearch 6.7 and higher.

Compliance
By choosing to use Amazon Elasticsearch Service, you can greatly reduce your compliance efforts by building compliant applications on top of existing AWS compliance certifications and attestations. Amazon Elasticsearch Service is HIPAA Eligible. You can use Amazon Elasticsearch Service to store and analyze protected health information (PHI) and build HIPAA-compliant applications. To set up, visit AWS Artifact in your HIPAA accounts and agree to the AWS Business Associate Agreement (BAA). This BAA can be set up for individual AWS accounts or for all of the accounts under your AWS Organizations supervisory account. Amazon Elasticsearch Service is also in scope of the Payment Card Industry Data Security Standard (PCI DSS), which allows you to store, process, or transmit cardholder data using the service. Additionally, Amazon Elasticsearch Service is in scope for the AWS ISO 9001, 27001, 27017, and 27018 certifications. PCI DSS and ISO are among the most recognized global security standards for attesting to quality and information security management in the cloud.

AWS Config is a service that continuously monitors the configuration of AWS services for compliance and can automate remediation actions using AWS Config rules. In the case of Amazon Elasticsearch Service, you should consider enabling Config rules such as the following (a sketch showing how to enable these managed rules programmatically appears at the end of this section):
• elasticsearch-in-vpc-only – Checks whether the Amazon Elasticsearch cluster is deployed in a VPC, and is NON_COMPLIANT if the ES domain is public.
• elasticsearch-encrypted-at-rest – Checks that Amazon Elasticsearch domains have been deployed with encryption at rest enabled, and is NON_COMPLIANT if the EncryptionAtRestOptions field is not enabled.

Amazon Elasticsearch Service offers a detailed audit log of all Elasticsearch requests. Audit Logs allows customers to record a trail of all user actions, helping meet compliance regulations, improving the overall security posture, and providing evidence for security investigations. Amazon Elasticsearch Service Audit Logs allows customers to log all of their user activity on their Elasticsearch clusters, including keeping a history of user authentication successes and failures, logging all requests to Elasticsearch, recording modifications to indices and incoming search queries, and much more. Audit Logs provides a default configuration that covers a popular set of user actions to be tracked, and administrators can further configure and fine-tune the settings to meet their needs. Audit Logs is integrated with fine-grained access control, giving you the ability to log access or modification requests to sensitive documents or fields to meet any compliance requirements. Once configured, Audit Logs will be continuously streamed to CloudWatch Logs and can be further analyzed there. Audit Logs settings can be changed at any time and are automatically updated. Both new and existing Amazon Elasticsearch Service domains (version 6.7+) with fine-grained access control enabled can use the Audit Logs feature.
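The following sketch shows one way the two managed Config rules above could be enabled programmatically. It assumes an AWS Config recorder is already running in the account and region, and that the managed rule source identifiers shown (ELASTICSEARCH_IN_VPC_ONLY and ELASTICSEARCH_ENCRYPTED_AT_REST) match the current AWS Config managed rules reference; verify them before use.

```python
import boto3

config = boto3.client("config")

# Managed rules discussed above, keyed by the rule name we want to create.
# Source identifiers are assumed from the AWS Config managed rules list.
managed_rules = {
    "elasticsearch-in-vpc-only": "ELASTICSEARCH_IN_VPC_ONLY",
    "elasticsearch-encrypted-at-rest": "ELASTICSEARCH_ENCRYPTED_AT_REST",
}

for rule_name, source_identifier in managed_rules.items():
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": rule_name,
            "Source": {"Owner": "AWS", "SourceIdentifier": source_identifier},
        }
    )
    print(f"Enabled managed rule: {rule_name}")
```

Pairing these rules with AWS Config remediation actions gives you an automated check that new domains stay inside a VPC and keep encryption at rest enabled.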
part of every large enterprise AWS deployment is a multi account strategy that is setup using either AWS Control Tower or AWS Landing Zo nes This creates a core for the centralized governance of accounts including the aggregation of the logs from all of a customer’s accounts into one centralized account where they can be ingested into Elasticsearch to be correlated and monitored in one central location It can include logs from services and components such as CloudTrail Logs CloudWatch Log Groups VPC Flow Logs AWS Config Logs and Amazon GuardDuty Logs This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 12 In the case of CloudWatch logs these can be streamed directly to Elasticsearch f rom all accounts in a customers’ organization using methods described in Stream Amazon CloudWatch Logs to a Centralized Account for Audit and Analysis Because Amazon ES runs in an AWS managed VPC and not in a VPC that you control you must secure access to it and the Kibana dashboards that you use with it There are two starting points for this: • IP address restrictions configured with EC2 Security Groups • HTTP basic Auth configured through an nginx proxy that sits in front of the Amazon ES endpoint Using nginx with SSL/TLS to provide user administration and block all other traffic should be implemented prior to using this method with production data as the first two methods are relatively weak security methods Beyond these two basic controls the preferred method for securing access to Kibana is to enable access using AWS Single Sign On or your o wn Federation service This setup will allow for only users within your Microsoft Active Directory access to visualize data stored in Elasticsearch It uses a standard SAML identity federation approach and a specific Active Directory group can be used to r estrict access to an Amazon Elasticsearch domain If you do not already have an Active Directory Domain with your users set up another option would be to use Amazon Elasticsearch Service native integration with Amazon Cognito User Pools to manage access This approach provides user level access control to Kibana access to ES domains and the ability to set polic ies for groups of users within the Amazon Cognito User Pool UltraWarm storage for Amazon ES UltraWarm provides a cost effective way to store large amou nts of read only data on Amazon Elasticsearch Service Standard data nodes use "hot" storage which takes the form of instance stores or Amazon EBS volumes attached to each node Hot storage provides the fastest possible performance for indexing and search ing new data UltraWarm nodes use Amazon S3 and a sophisticated caching solution to improve performance For indices that you are not actively writing to query less frequently and don't need the same performance as hot storage UltraWarm offers signific antly lower costs per GiB of data This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 13 In Elasticsearch these warm indices behave just like any other index You can query them using the same APIs or use them to create dashboards in Kibana Because UltraWarm uses Amazon S3 it does not incur overhead which was typically from hot storage When calculating 
UltraWarm storage requirements you consider only the size of the primary shards The durability of data in S3 removes the need for replicas and S3 abstracts away any operating system or service considera tions Each UltraWarm node can use 100% of its available storage for primary data Pushing Log data from EC2 instances into Amazon ES While many Elasticsearch users favor the “ELK” (Elasticsearch Logstash and Kibana) stack a serverless approach using A mazon CloudWatch Logs has some distinct advantages You can consolidate your log feeds install a single agent to push application and system logs remove the requirement to run a Logstash cluster on Amazon EC2 and avoid having any additional monitoring o r administration requirements related to log management However before going serverless you might want to review and consider whether you will need some of the more advanced Logstash transformation capabilities that the CloudWatch Logs agent does not s upport The following process shows how to set up CloudWatch Logs agent on an Ubuntu EC2 instance to push logs to Amazon ES AWS Lambda lets you run code without provisioning or managing servers As logs come in AWS Lambda runs code to put the log data in the right format and move it into Amazon ES using its API Figure 4 – CloudWatch Logs architecture This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 14 You will be prompted for the location of the application and system logs datestamp format and a starting point for the log upload Your logs will be st ored in CloudWatch and you can stream them into Amazon ES You can perform the preceding steps for all EC2 instances that you want to connect to CloudWatch you can use the EC2 Run command to install across a fleet of instances or you can build a boot script to use with auto scaled instances To connect a CloudWatch stream to Amazon ES follow the steps in the AWS documentation Streaming CloudWatch Logs data to Amazon Elasticsearch Service using the name of an Amazon ES domain previously created to subscribe your new log group to Amazon ES Note that there are several log formatting options that you might want to review during the connection process and you c an exclude log information that is not of interest to you You will be prompted to create an AWS Lambda execution role because AWS uses Lambda to integrate your CloudWatch log group to Amazon ES You have now created an Amazon ES domain and configured one or more instances to send data to CloudWatch Logs which then can be forwarded to Amazon ES via Lambda Pushing Amazon CloudWatch Logs into Amazon ES The CloudWatch Logs → Lambda → Amazon ES integration makes it easy to send data to Elasticsearch if source data exists in CloudWatch Logs The following figure shows the fe atures and services that you can use to process different types of logs This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 15 Figure 5 – Pushing CloudWatch Logs into Amazon ES • AWS API activity logs (AWS CloudTrail) : AWS CloudTrail tracks your activity in AWS and provides you with an audit trail for API activity in your AWS account The recorded information includes the identity of the API caller the time of the API call the 
source IP address of the API caller the request parameters and the response elements returned by the AWS service • You should enable CloudTrail logging for all AWS Regions CloudTrail logs can be sent to Amazon S3 or to CloudWatch Logs; for the purposes of sending logs to Amazon ES as a final destination it is easier to send to CloudWatch Logs • Network activity logs (VPC Flow Lo gs): VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your Amazon Virtual Private Cloud (Amazon VPC) VPC Flow Log data is stored as CloudWatch Logs • Application logs from AWS L ambda functions: Application logs from your Lambda code are useful for code instrumentation profiling and general troubleshooting In the code for your AWS Lambda functions any console output that typically would be sent to standard output is delivered as CloudWatch Logs For example: consolelog() statements for Nodejs functions print() statements for Python functions and Systemoutprintln() statements for Java functions This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 16 Using AWS Lambda to send Logs into Amazon ES For maximum flexibility you can use AWS Lambda to send logs directly to your Elasticsearch domain Custom logic in your Lambda function code can then perform any desired data processing cleanup and normalization before sending the log data to Amazon ES This approach is highly flexible However it does require technical understanding of how AWS Signature Version 4 security works For security purposes in order to issue any queries or updates agai nst an Elasticsearch cluster the request must be signed using AWS Signature Version 4 (“SigV4 signing”) Signature Version 4 is the process to add authentication information to AWS requests Rather than implementing SigV4 signing on your own we highly re commend that you adapt existing SigV4 signing code For the CloudWatch Logs →Lambda→Amazon ES integration described earlier the Lambda code for implementing SigV4 signing is automatically generated for you If you inspect the code associated with the aut o generated Lambda function you can view the SigV4 signing code that is used to authenticate against the Elasticsearch cluster You can copy the code as a starting point for your Lambda functions that need to interact with the Amazon ES cluster Another example of code implementing SigV4 signing is described in the AWS blog post How to Control Access to Your Amazon Elasticsearch Service Domain Using the AWS SDKs based on your programming language of choice will also take c are of the heavy lifting of SigV4 signing making this process much easier This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 17 Figure 6 – Overview of Lambda to Amazon ES data flow These AWS event sources can provide data to your Lambda function code and your Lambda function code can process and send that data to your Amazon ES cluster For example log files stored on S3 can be sent to Amazon ES via Lambda Streaming data sent to an Amazon Kinesis stream can be forwarded to Amazon ES via Lambda A Kinesis stream will scale up to handle very high log d ata rates without any management effort on 
your part and AWS will manage the durability of the stream for you For more information about the data provided by each of these AWS event sources see the AWS Lambda documentation The S3→Lambda→Amazon ES integration pattern is a particularly useful one As one example many AWS powered websites store their web access logs in Amazon S3 If your website uses Amazon Cloud Front (for global content delivery) Amazon S3 (for static website hosting) or Elastic Load Balancing (for load balancers in front of your web servers) then you should enable the access logs for each service There is no extra charge to enable logging other than the cost of storage for the actual logs in Amazon S3 Once the log files are in Amazon S3 you can process them using Lambda and send them to Amazon ES where you can analyze your website traffic using Kibana This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 18 Using Amazon Kinesis Data Firehose to load data into Amazon ES You can use Amazon Kinesis Data Firehose to transform your data and load it to Amazon ES This approach requires you to install the Amazon Kinesis agent on the EC2 instances that you want to monitor You don’t need to transmit log information to CloudWatch Logs Because Kinesis Data Firehose is a highly scalable managed service you can transmit log data from hundreds or thousands of instances in a very large installation You should consider Kinesis Data Firehose if you have the following requirements: • Large s cale log monitoring installation • Serverless approach to transforming and loading log data Simultaneously store logs in an S3 bucket for compliance or archival purposes while continuously transmitting to Amazon ES Amazon Kinesis Data Firehose is a rich and powerful real time stream management system that is directly integrated with Amazon ES The following illustration shows the flow of logs managed by Kinesis Data Firehose into Amazon ES Figure 7 – Overview of F irehose to Amazon ES data flow This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Use Amazon Elasticsearch Service to Log and Monitor (Almost) Everything 19 Support for Apache Web logs is built in to Amazon Kine sis Data Firehose To help you evaluate Amazon Kinesis Data Firehose for log analytics using Amazon ES as a target see th e tutorial Build a Log Analytics Solution Implement Kubernetes logging with EFK and Amazon ES The combination of Fluentd unified logging Elasticsearch RESTFul analytics engine and Kibana for visualizations is known as the EFK stack Fluentd is configured as a DaemonSet where it collects logs and forwards to Cloudwatch Logs where they can be filtered using a subscription filter and then sent to an ES domain for further querying and visualization This AWS workshop – Implement Logging with EFK will walk you through the setup of Kubernetes logging to the EFK stack Figure 8 – Setup of Kubernetes logging to EFK Stack AWS is also supporting Fluent Bit for streaming logs from containerized applications to AWS and partners’ solutions for log retention and analytics With the Fluent Bit plugin for AWS co ntainer images you can route logs to Amazon CloudWatch and Amazon Kinesis Data Firehose destinations (which includes the Amazon Elasticsearch Service) The blog post Centralized Container 
Logging with Fluent Bit contains more information on the relative performance of Fluent Bit versus Fluentd and the advantages Fluent Bit offers.

Setting up Kibana to visualize logs

One advantage of Amazon ES is that Kibana is set up and ready to configure after you create your search domain. When you first start Kibana, you are prompted to configure an index pattern. Community support for Kibana has produced several useful, preconfigured dashboards. The main GitHub repository contains dashboards to visualize:

• Amazon Elasticsearch cluster statistics (KOPF)
• Amazon VPC Flow Logs
• AWS CloudTrail logs
• AWS Lambda logs

Remember the requirement to lock down access to Kibana for all users. A best practice is to use a corporate LDAP or Active Directory service to manage access to Kibana.

Alerting for Amazon ES

The Amazon ES alerting feature notifies you when data from one or more Elasticsearch indices meets certain conditions. For example, you might want to receive an email if your application logs more than five HTTP 503 errors in one hour, or you might want to page a developer if no new documents have been indexed in the past 20 minutes. Alerting requires Elasticsearch 6.2 or higher. Compared to Open Distro for Elasticsearch, the Amazon ES alerting feature has some notable differences. Amazon ES supports Amazon SNS for notifications; this integration means that, in addition to the standard destinations (Slack, custom webhooks, and Amazon Chime), the alerting feature can send emails and text messages and even run AWS Lambda functions through SNS topics. The alerting feature also supports fine-grained access control, so you can mix and match permissions to fit your use cases.

Other configuration options

Once you have CloudWatch Logs flowing into Amazon ES, make sure you have all of the other types of AWS logs enabled (such as CloudTrail logs). As you add new log types, you can add or configure additional Kibana dashboards to match the inbound log pattern. In addition, you can use the Amazon ES anomaly detection feature to automatically detect anomalies in your log data in near real time by using the Random Cut Forest (RCF) machine learning algorithm, and you can use Trace Analytics to help you visualize this flow of events and identify performance problems.

Conclusion

This whitepaper explained what Elasticsearch is, how to use it, how it compares with commercial monitoring tools, and why you would want to use Amazon Elasticsearch Service. It also covered how to configure Amazon Elasticsearch Service and how to push logs into it from Amazon EC2, Amazon CloudWatch, AWS Lambda, and Amazon Kinesis Data Firehose. Finally, it explained how to set up Kibana to visualize logs and how to configure alerting for Amazon ES.

Contributors

The following individuals and organizations contributed to this document: Jim Tran, Principal Product Manager, AWS; Pete Buonora, Principal Solutions Architect, AWS; Changbin Gong, Senior Solutions Architect, AWS; Naresh Gautam, Senior Analytics Specialist Architect, AWS
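Example: signing a request to Amazon ES with SigV4 (illustrative sketch)

The Lambda section earlier recommends adapting existing SigV4 signing code rather than writing your own. The following minimal Python sketch shows one common way a Lambda function (or any client) might sign and index a single log document, using the third-party requests and requests-aws4auth packages. The Region, domain endpoint, index name, and document fields are placeholder assumptions, not values from this paper, and the packages would need to be bundled with your deployment package.

    import boto3
    import requests                          # third-party package
    from requests_aws4auth import AWS4Auth   # third-party package

    region = 'us-east-1'   # assumed Region of the Amazon ES domain
    service = 'es'

    # Reuse the credentials already available to the execution role
    credentials = boto3.Session().get_credentials()
    awsauth = AWS4Auth(credentials.access_key, credentials.secret_key,
                       region, service, session_token=credentials.token)

    # Hypothetical domain endpoint and index name
    host = 'https://search-my-log-domain.us-east-1.es.amazonaws.com'
    url = host + '/application-logs/_doc'

    document = {
        'timestamp': '2021-01-01T00:00:00Z',
        'level': 'ERROR',
        'message': 'example log line',
    }

    # The request is SigV4 signed, so the domain access policy can
    # authorize it based on the caller's IAM identity
    response = requests.post(url, auth=awsauth, json=document,
                             headers={'Content-Type': 'application/json'})
    print(response.status_code, response.text)

For the CloudWatch Logs integration described in the paper, equivalent signing code is generated for you; this sketch is only a starting point for custom Lambda functions that talk to the domain directly.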
|
General
|
consultant
|
Best Practices
|
Use_AWS_Config_to_Monitor_License_Compliance_on_Amazon_EC2_Dedicated_Hosts
|
ArchivedUse AWS Config to M onitor License Compliance on Ama zon EC2 Dedicated Hosts April 2016 This paper has been archived For the latest technical guidance about Amazon EC2 see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 2 of 16 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 3 of 16 Contents Abstract 4 Introduction 4 Setting Up AWS Config to Track Dedicated Hosts and EC2 Instances 5 Creating a Custom Rule to Check that Launched Instances Are on a Dedicated Host 7 Addressing Other Bring Your Own License (BYOL) Compliance Requirements with AWS Config Rules 15 Conclusion 15 Contributors 16 Further Reading 16 ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 4 of 16 Abstract Amazon Elastic Compute Cloud (EC2) Dedicated Hosts can help enterprises reduce costs by allowing the use of existing serverbound licenses Many customers can also use Dedicated Hosts to address corporate compliance and regulatory requirements Oftentimes customers using Dedicated Hosts want to continuously record and evaluate changes to their infrastructure to stay compliant with license terms and regulatory requirements This paper outlines the ways in which you can leverage AWS Config and AWS Config Rules to monitor license compliance on Amazon EC2 Dedicated Hosts Introduction This paper discusses how you can set up AWS Config to record configuration changes to Amazon EC2 Dedicated Hosts and EC2 instances in order to ascertain your licensing compliance posture Y ou’ll learn how t o create AWS Config Rules to govern the way your serverbound licenses are used on Amazon Web Services (AWS) We’ll create a sample rule that checks whether all instances in an account created from an Amazon Machine Image (AMI) called MyWindowsImage are launched onto a specific Dedicated H ost We’ll also describe other checks that can be employed to monitor compliance with common licensing restrictions and to govern your Dedicated Host resources An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated for your use You get complete visibility into the number of sockets and physical cores that support your instances on a Dedicated Host Dedicated Hosts allow you to place your instances on a specific physical server This level of visibility and control in turn allows you to use your existing per socket percore or pervirtual machine ( VM) software licenses (eg Microsoft Windows Server) to save costs and 
meet compliance and regulatory requirements To track the history of instances that are launched stopped or terminated on a Dedicated Host you can use AWS Config AWS Config pairs this information with host and instancelevel information relevant to software licensing such as ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 5 of 16 the host ID AMI IDs and number of sockets and physical cores per host You can then use this data to verify usage against your licensing metrics You can use AWS Config Rules to choose from a set of prebuilt rules based on common AWS best practices or define custom rules You can set up rules that check the validity of changes made to resources tracked by AWS Config against policies and guidelines defined by you You can set these AWS Config Rules to evaluate each change to the configuration of a resource or you can execute them at a set frequency You can also author your own custom rules by creating AWS Lambda functions in any supported language Setting Up AWS Config to Track Dedicated Hosts and EC2 Instances Open the AWS Management Console and go to the EC2 console On the EC2 Dedicated Hosts page notice the Edit Config Recording button at the top The icon in red indicates that AWS Config is not currently set up to record configuration changes to Dedicated Hosts and instances Figure 1: Edit Config Recording Button with the Red Icon on Dedicated Host Console ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 6 of 16 Getting started with AWS Config is simple Click the Edit Config Recording button to open the AWS Config settings page On this page check Record all resources supported in this region Figure 2: Selecting Resource Types to Record on the AWS Config Settings Page You can choose to only enable recording for Dedicated Hosts and instances by selecting these resources in Specific types If you are setting up AWS Config for the first time you must specify an Amazon S3 bucket into which AWS Config can deliver configuration history and snapshot files Optionally you can also provide an Amazon Simple Notification Service (SNS) topic to which change and compliance notifications will be delivered Finally you’ll be asked to grant appropriate permissions to AWS Config and save the settings For more details on setting up AWS Config using the AWS Management Console or the CLI see the Getting Started with AWS Config documentation ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 7 of 16 After the AWS Config setup is complete you’ll notice that the icon on the EC2 console page for Dedicated Hosts has turn ed green This indicates that AWS Config is recording configuration changes to all EC2 instances and Dedicated Hosts Figure 3: The Edit Config Recording Button with Green Icon Creating a Custom Rule to Check that Launched Instances Are o n a Dedicated Host Now that you have set up AWS Config to start recording configuration changes to Dedicated Hosts and EC2 instances you can start writing rules to evaluate the license compliance state of all instances in the account To get started you will write a rule that checks whether all instances launched from the MyWindowsImage AMI are placed onto a specific Dedicated Host For this sample assume that MyWindowsImage is the name of an AMI you have imported and is the machine image of a Microsoft Server license you own Before creating the rule first 
inspect the instances and Dedicated Hosts on your account: Look up EC2 Instance and EC2 Host resource types In Figure 4 you can see one Dedicated Host and a number of instances ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 8 of 16 Figure 4: Review the Resource Inventory Click the icon for the Dedicated Host to go to the Config Timeline to see the configuration of the Dedicated Host including the sockets cores total vCPUs and available vCPUs You can also see all the instances that are currently running on the host Traversing the timeline provides all historical configurations of the Dedicated Host including the instances that were launched onto the Dedicated Host in the past You also can look into the Config timeline of each of those instances Figure 5: The Config Resource Configuration History Timeline ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 9 of 16 Next you will set up the new rule in AWS Config and write the AWS Lambda function for the rule To do this click Add rule in the AWS Config console and then click Create AWS Lambda function to set up the function you want to execute Figure 6: AWS Config Rule Creation Page ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 10 of 16 On the Lambda console select the configrulechangetriggered blueprint to get sta rted Figure 7: The Lambda Select Blueprint Page You can annotate compliance states To do this first add a global variable called annotation var aws = require( 'aws sdk'); var config = new awsConfigService(); var annotation; You also need to modify the evaluateCompliance function and the handler invoked by AWS Lambda The rest of the blueprint code can be left untouched function evaluateCompliance(configurationItem ruleParameters context) { checkDefined(configurationItem " configurationItem "); checkDefined(configurationItemconfiguration "configurationItem configuration "); checkDefined(ruleParameters " ruleParameters "); if ( 'AWS::EC2::Instance' !== configurationItemresourceType) { return 'NOT_APPLICABLE' ; } if (ruleParametersimageId === configurationItemconfigurationimageId ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 11 of 16 && ruleParametershostId !== configurationItemconfigurationplacementhostId) { annotation = "Instance " + configurationItemconfigurationinstanceId + " launc hed from BYOL AMI " + configurationItemconfigurationimageId + " has not been placed on dedicated host " + ruleParametershostId ; return 'NON_COMPLIANT' ; } else { return 'COMPLIANT' ; } For this example function imageId and hostId are parameters that are passed to the function by the AWS Config rule that will be created next The imageId parameter will contain the AMI ID of MyWindowsImage Use this to identify instances that are launched from this image After you detect that an instance was launched from MyWindowsImage you then can check whether the instance was launched onto the specified Dedicated Host identified by the hostId parameter The instance is marked noncompliant if it is found to be not running on the host on which all instances launched from MyWindowsImage should be running You can annotate compliance states of a resource with additional information indicating why the resource was marked noncompliant This sample elaborates the details of why the instance was marked noncompliant and 
assigns this text to the annotation global variable Finally changes are made to the handler to pass on the annotation along with the rest of the compliance information ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 12 of 16 putEvaluationsRequestE valuations = [ { ComplianceResourceType: configurationItemresourceType ComplianceResourceId: configurationItemresourceId ComplianceType: compliance OrderingTimestamp: configurationItemconfigurationItemCaptureTime Annotation: annotation } ]; After changes are made to the AWS Lambda function select the appropriate role and save the function In our example we also noted the Amazon Resource Name (ARN) of the function After the function is created go back to the AWS Config console and enter the ARN of the function that was just created ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 13 of 16 Figure 8: Entering the AWS Lambda Function ARN on the AWS Config Rul e Creation Page After specifying the appropriate settings for the rule save it The rule is evaluated once immediately after it is created and thereafter for any changes that are made to EC2 instances In this example two instances were launched from MyWindowsImage out of which only one was launched onto the specified Dedicated Host The AWS Config rule marks the other instance noncompliant Figure 9: Instance Marked as Noncompliant ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 14 of 16 The Compliant or Noncompliant state for each rule is also sent as a notification via the Amazon SNS topic you created when you set up AWS Config You can configure these notifications to send an email trigger a corrective action or log a ticket The Amazon SNS notification contains details about the change in compliance state including the annotation that elaborates the reason for noncompliance View the Timeline for this Resource in AWS Config Management Console: https://consoleawsamazoncom/config/home?region=useast 1#/timeline/AWS::EC2::Instance/ia46d7125?time=2016 0128T02:02:35606Z New Compliance Change Record: { "awsAccountId": "434817024337" "configRuleName": "restrictedAMI" "configRuleARN": "arn:aws:config:us east 1:434817024337:config rule/config rule hz8yxz" "resourceType": "AWS::EC2::Instance" "resourceId": "i a46d7125" "awsRegion": "us east 1" "newEvaluati onResult": { "evaluationResultIdentifier": { "evaluationResultQualifier": { "configRuleName": "restrictedAMI" "resourceType": "AWS::EC2::Instance" "resourceId": "i a46d7125" } "orderingTimestamp": "2016 0128T02:02:35606Z" } "complianceType": "NON_COMPLIANT" "resultRecordedTime": "2016 0128T02:02:41417Z" "configRuleInvokedTime": "2016 0128T02:02:40396Z" "annotation": "Instance i a46d7125 launched from BYOL AMI ami 60b6c60a has not been placed on dedicated host h 086f4a5066fb7b991" "resultToken": null } "oldEvaluationResult": { "evaluationResultIdentifier": { "evaluationResultQualifier": { "configRuleName": "restrictedAMI" "resourceType": "AWS::E C2::Instance" "resourceId": "i a46d7125" } "orderingTimestamp": "2016 0128T01:44:54553Z" ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 15 of 16 } "complianceType": "COMPLIANT" "resultRecordedTime": "2016 0128T01:45:03438Z" "configRuleInvokedTime": "2016 0128T01:45:01298Z" "annotation": null "resultToken": null } "notificationCreationTime": "2016 
-01-28T02:02:42.317Z",
  "messageType": "ComplianceChangeNotification",
  "recordVersion": "1.0"
}

Addressing Other Bring Your Own License (BYOL) Compliance Requirements with AWS Config Rules

The AWS Config rule created in the example above checks one of the several compliance requirements you may have associated with BYOL server-bound licenses. This rule can be further extended to check other license-specific restrictions, such as:

• Host affinity of the instances
• Number of sockets or number of cores of the Dedicated Host onto which the instances are launched
• Duration for which an instance needs to remain on a specified Dedicated Host

In addition, you can monitor the utilization of the Dedicated Hosts you own and mark them noncompliant if their usage drops below a threshold. This can help you optimize your fleet of Dedicated Hosts.

Conclusion

In this paper, you learned how you can use AWS Config in conjunction with AWS Config Rules to ascertain your license compliance posture on Amazon EC2 Dedicated Hosts. AWS Config can be used more broadly to monitor and govern all of your resources. For more information, see Further Reading below.

Contributors

The following individuals and organizations contributed to this document: Chayan Biswas, Senior Product Manager, AWS Config

Further Reading

For additional help, please consult the following sources:

• Documentation on what AWS Config supports: Supported Resources, Configuration Items, and Relationships
• Blog post: How to Record and Govern your IAM Resource Configurations Using AWS Config
• AWS Config product page: AWS Config
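Example: the custom rule evaluation logic in Python (illustrative sketch)

The walkthrough above uses the Node.js config-rule-change-triggered blueprint. For readers who prefer Python, the following is a rough, unofficial sketch of the same evaluation logic as a Python Lambda handler. The imageId and hostId rule parameters mirror the example in the paper; the sketch omits the blueprint's handling of oversized configuration items and out-of-scope resources, so treat it as a starting point rather than a drop-in replacement.

    import json
    import boto3

    config = boto3.client('config')

    def evaluate_compliance(configuration_item, rule_parameters):
        # This rule only applies to EC2 instances
        if configuration_item['resourceType'] != 'AWS::EC2::Instance':
            return 'NOT_APPLICABLE', None

        cfg = configuration_item['configuration']
        if (rule_parameters['imageId'] == cfg['imageId']
                and rule_parameters['hostId'] != cfg.get('placement', {}).get('hostId')):
            annotation = ('Instance ' + cfg['instanceId'] + ' launched from BYOL AMI '
                          + cfg['imageId'] + ' has not been placed on dedicated host '
                          + rule_parameters['hostId'])
            return 'NON_COMPLIANT', annotation
        return 'COMPLIANT', None

    def lambda_handler(event, context):
        # AWS Config delivers the invoking event and rule parameters as JSON strings
        invoking_event = json.loads(event['invokingEvent'])
        rule_parameters = json.loads(event['ruleParameters'])
        configuration_item = invoking_event['configurationItem']

        compliance, annotation = evaluate_compliance(configuration_item, rule_parameters)

        evaluation = {
            'ComplianceResourceType': configuration_item['resourceType'],
            'ComplianceResourceId': configuration_item['resourceId'],
            'ComplianceType': compliance,
            'OrderingTimestamp': configuration_item['configurationItemCaptureTime'],
        }
        if annotation:
            # Explain why the resource was marked noncompliant
            evaluation['Annotation'] = annotation

        config.put_evaluations(
            Evaluations=[evaluation],
            ResultToken=event['resultToken']
        )

As with the Node.js version, the Lambda execution role needs permission to call config:PutEvaluations, and the function is attached to the rule by its ARN on the rule creation page.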
|
General
|
consultant
|
Best Practices
|
Use_AWS_WAF_to_Mitigate_OWASPs_Top_10_Web_Application_Vulnerabilities
|
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Use AWS WAF to Mitigate OWASP ’s Top 10 Web Application Vulnerabilities July 2017 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers © 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own inde pendent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations con tractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreem ent between AWS and its customers This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 Web Application Vulnerability Mitigation 2 A1 – Injection 3 A2 – Broken Authentication and Session Management 5 A3 – Cross Site Scripting (XSS) 7 A4 – Broken Access Control 9 A5 – Security Misconfiguration 12 A6 – Sensitive Data Exposure 15 A7 – Insufficient Attack Protection 16 A8 – Cross Site Request Forgery (CSRF) 19 A9 – Using Components with Known Vulnerabilities 21 A10 – Underprotected APIs 23 Old Top 2013 A10 – Unvalidated Redirects and Forwards 24 Companion CloudFormation Template 26 Conclusion 29 Contributors 30 Further Reading 30 Document Rev isions 31 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract AWS WAF is a web application firewall that helps you protect your websites and web applications against various attack vectors at the HTTP protocol level This paper outlines how you can use the service to mitigate the application vulnerabilities that are defined in the Open Web Application S ecurity Project (OWASP) Top 10 list of most common categories of application security flaws It’s targeted at anyone who ’s tasked with protecting websites or applications and maintain ing their security posture and availability This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 1 Introduction The Open Web Application Security Project (OWASP) is an online community that creates freely available articles methodologies documentation tools and technologies in the field of web application secu rity1 They publish a ranking of the 10 most critical web application security flaws which are known as the OWASP Top 10 2 While the current version was published in 2013 a new 2017 Release Candidate version is currently available for public review The OWASP Top 10 represents a broad consensus of the most critical web application security flaws It’s a widely accepted metho dology for evaluat ing web application security and build mitigation strategies 
for websites and web based applications It outlines the top 10 areas where web applications are susceptible to attacks and where com mon vulnerabilities are found in such workl oads For any project aimed at enhancing the security profile of websites and web based applications it’s a great idea to understand the OWASP Top 10 and how it relate s to your own workloads This will help you implement effective mitigation strategies AWS WAF is a web application firewall (WAF) you can use to help protect your web applications from common web exploits that can affect application availability compromise security or consume excessive resources3 With AWS WAF you can allow or block requests to your web applications by defining customizable web security rules Also y ou can use AWS WAF to create rules to block common attack patterns as well as specific attack patterns targeted at your application AWS WAF works with Amazon CloudFront 4 our global content delivery network (CDN) service and the Application Load Balancer option for Elastic Load Balancing 5 By u sing these together you can analyze incoming HTTP requests apply a set of rules and take actions based on the matching of those rules AWS WAF can help you mitigate the OWASP Top 10 and othe r web application security vulnerabilities because attempts to exploit them often have common This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 2 detectable patterns in the HTTP requests You can write rules to match the patterns and block those requests from reaching your workloads However it ’s importan t to understand that using any web application firewall does n’t fix the underlying flaws in your web application It just provides an additional layer of defense which reduc es the risk of them being exploited This is especially useful in a modern develop ment environment where software evolves quickly Web Application Vulnerability Mitigation In April 2017 OWASP released the new iteration of the Top 10 for public comment The categories listed in the new proposed Top 10 are many of the same application fl aw categories from the 2013 Top 10 and past versions: A1 Injection A2 Broken Authentication and Session Management A3 Cross Site Scripting (XSS) A4 Broken Access Control (NEW) A5 Security Misconfiguration A6 Sensitive Data Exposure A7 Insufficient Attack Protection (NEW) A8 Cross Site Request Forgery (CSRF) A9 Using Components with Known Vulnerabilities A10 Underprotected APIs (NEW) The new A4 category consolidates the categories Insecure Direct Object References and Missing Function Level Access Controls from the 2013 Top 10 The previous A10 category Unvalidated Redirects and Forwards has been replaced with a new category that focus es on Application Programming Interface (API) security In this paper we discuss both old and new categories You can deploy AWS WAF to effectively mitigate a representative set of attack vectors in many of the categories above It can also be effective in other categories However the effectiveness depends on the specific workload that’s protected and the ability to derive recognizable HTTP request patterns Given that the attacks and exploits evolve constantly it ’s highly unlikely that any one web application firewall can mitigate all possible scenarios of an attack that target s flaws in any of these categori es This paper has been archived For the 
latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 3 This paper describes recommendations for each category that you can implement easily to get started in mitigating application vulnerabilities At the end of the paper you can download an example AWS CloudFormation template that implement s some of the generic mitigations discussed here However be aware that the applicability of these rules to your particular web application can vary A1 – Injection Injection flaws occur when an application sends untrusted data to an interpreter6 Often the interpreter has its own domain specific language By using that language and inserting unsanitized data into requests to the interpreter an attacker can alter the intent of the requests and cause unexpected actions Perhaps the most well known and widespread injection flaws are SQL injection flaws These occur when input isn’t properly sanitized and escaped and the values are inserted in SQL statements directly If the values t hemselves contain SQL syntax statements the database query engine executes those as such This trigger s actions that weren’t originally intended with potentially dangerous consequences Credit: XKCD: Exploits of a Mom published by permission Using AWS WAF to Mitigate SQL injection attacks are relatively easy to detect in common scenarios They’ re usually detected by identifying enough SQL reserved words in the HTTP re quest components to signal a potentially valid SQL query However more complex and dangerous variants can spread the malicious query (and associated key words) over multiple input parameter or request components based on the This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 4 internal knowledge of how the application composes them in the backend These can be more difficult to mitigate using a WAF alone —you might need to address them at the application level AWS WAF has built in capabilities to match and mitigate SQL injection attacks You can use a SQL i njection match condition to deploy rules to mitigate such attacks7 The following table provides some common condition configurations: HTTP Request Component to Match Relevant Input Transformations to Apply Justification QUERY_STRING URL_DECODE HTML_ENTITY_DECODE The most common component to match Query string parameters are frequently used in database lookups URI URL_DECODE HTML_ENTITY_DECODE If your application is using friendly dirified or clean URLs then parameters m ight appear as part of the URL path segment —not the query string (they are later rewritten server side) For example: https://examplecom/products/<product_id>/reviews/ BODY URL_DECODE HTML_ENTITY_DECODE A common component to match if your application accepts form input A WS WAF only evaluates the first 8 KB of the body content HEADER: Cookie URL_DECODE HTML_ENTITY_DECODE A less common component to match But if your application uses cookie based parameters in database lookups consider matching on this component as wel l HEADER: Authorization URL_DECODE HTML_ENTITY_DECODE A less common component to match B ut if your application uses the value of this header for database validation consider matching on this component as well Additionally consider any other components of 
custom request headers that your application uses as parameters for database lookups You might want to match these components in your SQL injection match condition Other Considerations Predictably this det ection pattern is less effective if your workload legitimately allows users to compose and submit SQL queries in their requests For those cases consider narrowly scoping an exception rule that bypasses the SQL injection rule for specific URL patterns tha t are known to accept such input You can do that by using a SQL injection match condition as described in the This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 5 preceding table while listing the URLs that are excluded from checking by using a string match condition : 8 Rule action: BLOCK when request matches SQL Injection Match Condition and request does not match String Match Condition for excluded Uniform Resource Identifiers ( URI) You can also mitigate o ther types of injection vulnerabilities against other domain specific languages to varying degrees using string match conditions —by matching against kno wn key words assuming they ’re not also legitimate input values A2 – Broken Authentication and Session Management Flaws in the implementation of au thentication and session management mechanisms for web applications can lead to exposure of unwanted data stolen credentials or sessions and impersonation of legitimate users9 These flaws are difficult to mitigate using a WAF Broadly attackers rely on vulnerabilities in the way client server communication is implemented Or they target how session or authorization tokens are generated stored transferred reused timed out or invalidated by your application to obtain these credentials After they obt ain credentials attackers impersonate legitimate users and make requests to your web applications using those tokens For example if an attacker obtains the JWT token that authoriz es communication between your web client and the RESTful API they can impersonate that user until the token expires by launching HTTP requests with the illicitly obtained authorization token 10 Using AWS WAF to Mitigate Because illicit requests with stolen authorization credentials sessions or tokens are hard to distinguish from legitimate ones AWS WAF takes on a reactive role This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 6 After your own application security controls are able to detect that a token was stolen you can add that token to a blacklist AWS WAF rule This rule block s further requests with those signatures either permanently or until they expire You can also automate t his reaction to reduce mitigation time AWS WAF offers an API to interact with the service11 For this kind of solution you would use infrastructure specific or application specific monitoring and logging tools to look for patterns of compromise Automation of AWS WAF rules is discussed in greater detail under A7 – Insufficient Attack Protection To build a blacklist use a string match condition The following table provides some example patterns: HTTP Request Component to Match Relevant Input Transformations to Apply Relevant Positional Constraints Values to Match 
Against QUERY_STRING Avoid exposing session tokens in the URI or QUERY_STRING because they’re visible in the browser address bar or server logs and are easy to capture URI HEADER: Cookie URL_DECODE HTML_ENTITY_DECODE CONTAINS Session ID or access tokens HEADER: Authorization URL_DECODE HTML_ENTITY_DECODE CONTAINS JWT token or other bearer authorization tokens You can use various mechanisms to help detect leaked or misused session tokens or authorization tokens One mechanism is to k eep track of client devices and the location where a user commonly accesses your application from This gives you the ability to quickly detect if requests are made from an entirely different location or client device with the same tokens and blacklist them for safety AWS WAF also supports rate based rules Rate based rules trigger and block when the rate of requests from a n IP address exceeds a customer defined threshold (request s per 5min interval ) You can combine t hese rules with other predicates (conditions) that are available in AWS WAF You can enforce rate based limits to protect your applications’ authentication or authorization URLs and endpoints against brute force attack attempts to guess credentials You can also use a string match condition to match authentica tion URI paths of the application: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 7 HTTP Request Component to Match Relevant Input Transformations to Apply Relevant Positional Constraints Values to Match Against URI URL_DECODE HTML_ENTITY_DECODE STARTS_WITH /login (or relevant application specific URLs) This condition is then used inside a rate based rule with the desired threshold for requests originating from a given IP address : Rule action: BLOCK; rate limit: 2000; rate key: IP Only requests that match the string match condition are counted When that count exceeds 2000 requests per 5minute interval the originating IP address is blocked The minimum rate limit over a 5 minute you can set is 2000 requests A3 – Cross Site Scripting (XSS) Cross site scripting (XSS) flaws occur when web applications include user provided data in webpages that is sent to the browser without proper sanitization 12 If the data isn’t proper ly validat ed or escap ed an attacker can use those vectors to embed scripts inline frames or other objects into the rendered page (reflection) These in turn can be used for a variety of malicious purposes including stealing user credentials by using key loggers in order to install system malware The impact of the attack is magnified if that user data persist s server side in a data store and then delivered to a large set of other users Consider the example of a common but popular blog that accept s user commen ts If user comments aren’t correctly sanitized a malicious user can embed a malicious script in the comments such as: <script src=”https://malicious sitecom/exploitjs ” type=”text/javascript” /> The code then gets executed anytime a legitimate user lo ads that blog article This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 8 Using AWS WAF to Mitigate XSS attacks are relatively easy to mitigate in common scenarios because they require specific key 
HTML tag names in the HTTP request AWS WAF has built in capabilities to match and mitigate XSS attacks You can use a cross site scripting match condition to deploy rules to mitigate these attacks13 The following table provides some common condition configurations: HTTP Request Component to Match Relevant Input Transformations to Apply Justification BODY URL_DECODE HTML_ENTITY_DECODE A very common component to match if your application accepts form input AWS WAF only evaluates the first 8 KB of the body content QUERY_STRING URL_DECODE HTML_ENTITY_DECODE Recommended if query string parameters are reflected back into the webpage An example is the current page number in a paginated list HEADER: Cookie URL_DECODE HTML_ENTITY_DECODE Recommended if your applicatio n uses cookie based parameters that are reflected back on the webpage For example the name of the user who is currently logged in is stored in a cookie and embedded in the page header URI URL_DECODE HTML_ENTITY_DECODE Less common But if your application is using friendly dirified URLs then parameters m ight appear as part of the URL path segment not the query string (they are later rewritten server side) There are similar concerns as with query strings Other Considerations This de tection pattern is less effective if your workload legitimately allows users to compose and submit rich HTML such as the editor of a content management system (CMS)14 For those cases consider narrowly scoping an exception rule that bypasses the XSS rule for specific URL patterns that are known to accept such input as long as there are alternate mechanisms to protect those excluded URLs This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 9 Additionally some image or custom data formats and match condition configurations can trigger elevated levels of false positives Patterns that m ight indicate XSS attacks in HTML content can be legitimate in certain image or other data formats For example the SVG graphics format15 also allows a <script> tag You should narrowly tailor XSS rules to the type of request content that’s expected if HTML requests include other data formats A4 – Broken Access Control This category of application flaws new in the proposed 2017 Top 10 covers lack of or improper enforcement of restrictions on what authenticated users are allowed to do It consolidates the following categories from the 2013 Top 10: A4 – Insecure Direct O bject References and A7 – Missing Function Level Access Controls Application flaws in this category allow internal web application objects to be manipulated without the requestor’s access permissions being properly validated 16 Depending on the specific workload this can lead to exposure of unauthorized data manipulation of internal web application state path traversal and file inclusion Your applications s hould properly check and restrict access to individual modules components or functions in accordance with the authorization and authentication scheme used by the application Flaws in function level access controls occur most commonly in applications where access controls were n’t initially designed into the system but were added later17 These flaws also occur in applications that take a perimeter securit y approach to access validation In these cases access level can be validated once at the request initialization level However 
checks aren’t done further in the execution cycle as various subroutines are invoked This creates an implicit trust that the caller code can invoke other modules components or functions on behalf of the authorized user —which m ight not always hold true If your web application exposes different components to different users based on access level or subscription level then you should have authorization checks performed anytime those functions are invoked Consider the following examples of flawed implementations for illustration: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 10 1 A web application that allow s authenticat ed users to edit their profile generates a link to the profile editor page upon successful authentication: https://examplecom/edit/profile?user_id= 3324 The profile editor page however doesn’t specifically check that the parameter match es the current user This allow s any user who’s logged in to find information about any other user by simply iterating over the pool of user IDs This expos es unauthorized information : https://examplecom/edit/profile?user_id= 3325 2 Another example is a helper server side script that display s or allow s a download of files for a document sharing site It accepts the file name as a query string parameter: https://examplecom/downloadphp?file= mydocumentpdf Somewhere in the script code it passes the parameter to an internal file reading function: $content = file_get_contents(”/documents/path/{$_GET[file]}”); With no validation or sanitization and a vulnerable server configuration the file parameter can be exploited to have the server read and reflect any file For example : https://examplecom/do wnloadphp?file= %2F%2Fetc%2Fpasswd This is an example of both a directory traversal attack18 and a loca l file inclusion attack19 3 Consider a modular web application which is a pattern popular with content management systems to enable extensibility as well as applications using model viewcontroller (MVC) frameworks The entry point into the application is usually a router that invokes the right This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 11 controller based on the request parameters after processing common routines (such as authentication/authorization ): https://examplecom/?module= myprofile &view=display A legitim ate authenticated user invoking the URL above should be able to see their own profile A malicious user m ight authenticate and view their profile as well However they could attempt to alter the request URL and invoke an administrative module: https://e xamplecom/?module= usermanagement &view=display If that particular module doesn’t perform additional checks commensurate with the elevated privileges needed for administrators it enable s an attacker to gain access to unintended parts of the system Using AWS WAF to Mitigate You can use AWS WAF to mitigate certain attack vectors in this category of vulnerabilities Mitigating permission validation flaws is difficult using any WAF This is because the criteria that differentiate good requests from bad reque sts are found in the context of the user (requestor) session and privileges and rarely in the 
Using AWS WAF to Mitigate

You can use AWS WAF to mitigate certain attack vectors in this category of vulnerabilities. Mitigating permission validation flaws is difficult with any WAF, because the criteria that differentiate good requests from bad requests are found in the context of the user (requestor) session and privileges, and rarely in the representation of the HTTP request itself. However, if malicious HTTP requests have a recognizable signature that legitimate requests don't have, you can write rules to match them. You can also use AWS WAF to filter dangerous HTTP request patterns that can indicate path traversal attempts or remote and local file inclusion (RFI/LFI). A few such generic conditions (component, input transformations, positional constraint, values to match against):

- QUERY_STRING (URL_DECODE, HTML_ENTITY_DECODE), CONTAINS: ../ and ://
- URI (URL_DECODE, HTML_ENTITY_DECODE), CONTAINS: ../ and ://

Also consider any other components of the HTTP request that your application uses to assemble or refer to file system paths. As with the patterns suggested for the previously discussed categories, these might be less effective if your application legitimately accepts URLs or complex file system paths as input.

If access to administrative modules, components, plugins, or functions is limited to a known set of privileged users, you can restrict access to those functions so they can only be reached from known source locations, a whitelisting pattern.

Other Considerations

If authorization claims are transmitted from the client as part of the HTTP request and encapsulated in JWT tokens (or something similar), you can evaluate them and compare them to the requested modules, plugins, components, or functions. Consider using AWS Lambda@Edge functions to prevalidate HTTP requests and ensure that the relevant request parameters match the assertions and authorizations in the token.20 You can use Lambda@Edge to reject nonconforming requests before they reach your backend servers.
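A sketch of how the two generic path traversal/file inclusion conditions above could be created with the classic AWS WAF API through boto3. As before, the set name is illustrative, and the condition still needs to be attached to a rule and a web ACL.

```python
import boto3

waf = boto3.client("waf")  # use "waf-regional" for Application Load Balancers

def create_traversal_condition(name="path-traversal-lfi-rfi"):
    token = waf.get_change_token()["ChangeToken"]
    byte_set = waf.create_byte_match_set(Name=name, ChangeToken=token)["ByteMatchSet"]

    # One tuple per (field, transformation, target string) combination.
    updates = []
    for field_type in ("QUERY_STRING", "URI"):
        for transform in ("URL_DECODE", "HTML_ENTITY_DECODE"):
            for target in (b"../", b"://"):
                updates.append({
                    "Action": "INSERT",
                    "ByteMatchTuple": {
                        "FieldToMatch": {"Type": field_type},
                        "TargetString": target,
                        "TextTransformation": transform,
                        "PositionalConstraint": "CONTAINS",
                    },
                })

    token = waf.get_change_token()["ChangeToken"]
    waf.update_byte_match_set(
        ByteMatchSetId=byte_set["ByteMatchSetId"],
        ChangeToken=token,
        Updates=updates,
    )
    return byte_set["ByteMatchSetId"]
```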
A5 – Security Misconfiguration

Misconfiguration of server parameters, especially ones that have a security impact, can happen at any level of your application stack.21 This can apply to the operating system, middleware, platform services, web server software, application code, or database layers of your application. The default configurations that ship with these components might not always follow security recommendations or be fit for every workload. A few examples of security misconfigurations are:

1. Leaving the Apache web server's ServerTokens Full (default) configuration in place on a production system. This exposes the exact versions of the web server and associated modules in any server-generated responses. Attackers can use this information to identify known vulnerabilities in your server software.

2. Leaving default directory listings enabled on production web servers. This allows malicious users to browse for files that are hosted by the web server.

3. Application server configurations that return stack traces to end users on production systems in response to errors. Attackers can potentially discover the software components that are used, reverse engineer your code, and discover flaws.

4. A previous feature in PHP. Several years ago, the default configuration of PHP allowed the registration of any request parameter (query string, cookie based, POST based) as a global variable. The feature has since been deprecated and removed altogether. Coupled with a vulnerable version of PHP, it allowed internal server variables to be overwritten via HTTP requests, for example: http://example.com/?_SERVER[DOCUMENT_ROOT]=http://bad.com/bad.htm. In a vulnerable application, this embeds a malicious site address in the pages that users visit.

Using AWS WAF to Mitigate

You can use AWS WAF to mitigate attempts to exploit server misconfigurations in a variety of ways, as long as the HTTP request patterns that attempt to exploit them are recognizable. These patterns, however, are application-stack specific: they depend on the operating system, web server, frameworks, or languages your code leverages. Generic rules that don't apply to your specific stack can still be useful for nuisance protection, because they block requests that would otherwise be invalid, so your backend servers don't have to process them. Here are a few strategies you can use:

- Block access to the paths of administrative consoles, configuration pages, or status pages that are installed or enabled by default, or restrict access to trusted source IP addresses if those pages are in use. Do this regardless of whether you have specifically disabled or removed them (future actions might reactivate or reinstall them).
- Protect against known attack patterns that are specific to your platform, especially if you have legacy applications that rely on old platform behavior. For example, if you're using PHP, you might choose to block requests with a query string that contains "_SERVER[".
- A whitelisting rule pattern, similar to the one discussed previously for the Broken Access Control category, can help with whitelisting specific subservices, such as the administrative console of a WordPress site.

Other Considerations

Also consider deploying Amazon Inspector to verify your software configurations.22 It's an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. To help you get started quickly, Amazon Inspector includes a knowledge base of hundreds of rules that are mapped to common security best practices and vulnerability definitions; examples of built-in rules include checking whether remote root login is enabled or whether vulnerable software versions are installed. These rules are regularly updated by AWS security researchers.

In addition to detective controls, you can provide the best protection against attacks in this category by implementing and maintaining secure configurations. Configuration guidelines such as the CIS Benchmarks23 can help you deploy secure configurations. You can use services such as AWS Config24 and Amazon EC2 Systems Manager25 to help you track and manage configuration changes over time.
A6 – Sensitive Data Exposure

Sensitive data exposure application flaws are typically harder to mitigate using web application firewalls.26 These flaws commonly involve deficiently implemented encryption, for example a lack of encryption for transported or stored sensitive data, or the use of vulnerable legacy encryption ciphers27 that let malicious parties intercept and decode your data. Less commonly, there can be flaws in application or protocol implementations, or in client browsers, that also lead to the exposure of sensitive data. Exploits that ultimately lead to sensitive data exposure can span multiple OWASP categories: a security misconfiguration that allows the use of weak cryptographic algorithms leads to encryption downgrades, and ultimately to an attacker being able to capture the data stream and decode sensitive data.

Using AWS WAF to Mitigate

Because AWS WAF evaluates the HTTP request after the incoming data stream has been decrypted, its rules have no impact on enforcing good encryption hygiene at the connection level. Less commonly, if HTTP requests that can lead to sensitive data exposure have detectable patterns, you can mitigate them by using string match conditions that target those patterns. These patterns are application specific, however, and require more in-depth knowledge of those applications.

For example, if your application relies heavily on the SHA-1 hashing algorithm,28 malicious users might attempt to cause a hash collision using a pair of specially crafted PDF documents.29 If your application allows uploads, it would be beneficial to set up a rule that blocks requests that contain portions of the base64-encoded representation of those files in the body.

When you attempt to block uploaded file signatures using AWS WAF, take into account the limits the service imposes on such rules. Uploaded data is base64 encoded, so your string match condition values have to be in base64 representation. AWS WAF searches the first 8 KB of the HTTP request body, or less if the multipart encoding of the request body contains other field parameters that precede the file data itself. The relevant signature of the matched pattern can be up to 50 bytes in size. Most standardized file formats also have uniform preambles, so the first several bytes of the file are common to all files of that format; this forces you to derive the relevant signature from data further in the file.

Other Considerations

You can use other services in the AWS ecosystem to control the encryption protocols and ciphers that are used at the connection level:

- For Elastic Load Balancing Classic Load Balancers,30 you can select predefined or customized security policies.31 These policies specify the protocols and ciphers that the load balancers can use to negotiate secure connections with clients.
- For Elastic Load Balancing Application Load Balancers,32 you can select from a set of predefined security policies.33 As with Classic Load Balancers, these policies specify the allowed protocols and ciphers.
- For Amazon CloudFront,34 our content delivery network (CDN) service, you can configure the minimum SSL protocol version you want to support,35 as well as the SSL protocols you want CloudFront to use when it connects to your custom origins.
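As an illustration of the load balancer controls above, the following boto3 sketch applies one of the predefined TLS security policies to an existing HTTPS listener on an Application Load Balancer. The listener ARN is a placeholder, and the specific policy name is just one of the predefined options.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN; look yours up with describe_load_balancers()/describe_listeners().
LISTENER_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                "listener/app/my-alb/0123456789abcdef/0123456789abcdef")

# Apply a predefined security policy that only negotiates TLS 1.2 ciphers.
elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    SslPolicy="ELBSecurityPolicy-TLS-1-2-2017-01",
)
```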
A7 – Insufficient Attack Protection

This category has been proposed for the new 2017 Top 10, and it reflects the reality that attack patterns can change quickly. Malicious actors are able to adapt their toolsets rapidly to exploit new vulnerabilities and launch large-scale automated attacks to detect vulnerable systems. This category focuses strongly on your ability to react in a timely manner to new attack vectors, abnormal request patterns, or application flaws that are discovered. A broad range of attack vectors fall into this category, many overlapping other categories. To better understand them, ask yourself the following questions:

- Can you enforce a certain level of hygiene at the request level? Are there HTTP request components that your application expects to exist or can't operate without?
- Are you able to detect and recognize when your application is targeted with unusual request patterns or high volume? Do you have systems in place that can do that detection in an automated fashion? Are these systems capable of reacting to and blocking such unwanted traffic?
- Are you able to detect when a malicious actor launches a directed, targeted attack against your application, trying to find and exploit flaws? Is this capability automated so that you can react in near real time?
- How fast can you deploy a patch for a discovered application flaw or a vulnerability in your application stack, and mitigate attacks against it? Do you have mechanisms in place to verify the effectiveness of the patch after deployment?

Using AWS WAF to Mitigate

You can use AWS WAF to enforce a level of hygiene for inbound HTTP requests. Size constraint conditions36 help you build rules that ensure that components of HTTP requests fall within specifically defined ranges, so you can avoid processing abnormal requests. An example is to limit the size of URIs or query strings to values that make sense for the application. You can also use them to require the presence of specific headers, such as an API key for a RESTful API. Some example conditions (component, input transformation, comparison operator, size):

- URI, NONE, GT (greater than) the maximum expected URI path size in bytes
- QUERY_STRING, NONE, GT the maximum expected size of the query string in bytes
- BODY, NONE, GT the maximum expected request body size in bytes
- HEADER: x-api-key, NONE, LT (less than) 1 (or the actual size of the API key)
- HEADER: cookie, NONE, GT the maximum expected cookie size in bytes

You can use these example conditions with a blacklisting rule to reject requests that don't conform to the limits.
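A minimal boto3 sketch of creating one of these size constraint conditions follows. The 1,024-byte query string limit is an arbitrary illustrative threshold, not a recommendation from this paper; choose a value that makes sense for your application.

```python
import boto3

waf = boto3.client("waf")  # or "waf-regional" for Application Load Balancers

def create_query_string_limit(name="query-string-hygiene", max_bytes=1024):
    token = waf.get_change_token()["ChangeToken"]
    size_set = waf.create_size_constraint_set(
        Name=name, ChangeToken=token)["SizeConstraintSet"]

    token = waf.get_change_token()["ChangeToken"]
    waf.update_size_constraint_set(
        SizeConstraintSetId=size_set["SizeConstraintSetId"],
        ChangeToken=token,
        Updates=[{
            "Action": "INSERT",
            "SizeConstraint": {
                "FieldToMatch": {"Type": "QUERY_STRING"},
                "TextTransformation": "NONE",
                "ComparisonOperator": "GT",   # matches anything larger than the limit
                "Size": max_bytes,
            },
        }],
    )
    return size_set["SizeConstraintSetId"]
```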
For detecting abnormal request patterns, you can use AWS WAF rate-based rules, which trigger when the rate of requests from an IP address exceeds your defined threshold (requests per 5-minute interval). You can combine these rules with the other predicates (conditions) available in AWS WAF. For example, you can combine a rate-based rule with a string match rule so that only requests with a particular user agent (say, user-agent = "abc") are counted toward the determination of a rate violation by that IP address.

A key advantage of AWS WAF is its programmability. You can configure and modify AWS WAF web access control lists (ACLs), rules, and conditions by using a programmatic API at any time, and changes normally take effect within a minute, even for our global service that's integrated with Amazon CloudFront. Using the API, you can build automated processes that react to application-specific abnormal conditions and take action to block suspicious sources of traffic or notify operators for further investigation. These automations can operate in real time, invoked via trap or honeypot URL paths, or they can be reactive, based on the analysis and correlation of application log files and request patterns.

As mentioned earlier, AWS provides a set of capabilities called the AWS WAF Security Automations.37 These tools build upon the patterns highlighted previously. They use several other AWS services, most notably AWS Lambda for event-driven computing,38 and provide the following capabilities:

- Scanner and probe mitigation. Malicious sources scan and probe internet-facing web applications for vulnerabilities, sending a series of requests that generate HTTP 4xx error codes. You can use this history to help identify and block IP addresses from malicious sources. This solution creates an AWS Lambda function that automatically parses access logs, counts the number of bad requests from unique source IP addresses, and updates AWS WAF to block further scans from those addresses.
- Known attacker origin mitigation. A number of organizations maintain reputation lists of IP addresses that are operated by known attackers, such as spammers, malware distributors, and botnets. This solution leverages the information in these reputation lists to help you block requests from malicious IP addresses.
- Bot and scraper mitigation. Operators of publicly accessible web applications have to trust that the clients accessing their content identify themselves accurately and use services as intended. However, some automated clients, such as content scrapers or bad bots, misrepresent themselves to bypass restrictions. This solution implements a honeypot that helps you identify and block bad bots and scrapers: the honeypot URL is listed in the 'disallow' section of the robots.txt file,39 so any IP address that accesses the URL anyway is deemed malicious or noncompliant and is blacklisted.

Additionally, there are ways you might be able to use AWS WAF to mitigate newly discovered application flaws or vulnerabilities in your stack. They are discussed in greater detail later (see A9 – Using Components with Known Vulnerabilities).
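The rate-based rule pattern described earlier in this section can be created with a single API call; a minimal boto3 sketch follows. The rule name and the 2,000-requests-per-5-minutes threshold are illustrative assumptions.

```python
import boto3

waf = boto3.client("waf")  # or "waf-regional" for Application Load Balancers

token = waf.get_change_token()["ChangeToken"]
rule = waf.create_rate_based_rule(
    Name="abnormal-request-rate",
    MetricName="AbnormalRequestRate",
    RateKey="IP",          # requests are aggregated per source IP address
    RateLimit=2000,        # maximum requests allowed per 5-minute window
    ChangeToken=token,
)["Rule"]

# The rule still has to be added to a web ACL (with a BLOCK action) to take effect;
# additional predicates can be attached with update_rate_based_rule().
print(rule["RuleId"])
```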
A8 – Cross-Site Request Forgery (CSRF)

Cross-site request forgery attacks predominantly target state-changing functions in your web applications.40 Consider any URL path and HTTP request that is intended to cause a state change (for example, form submission requests): are there any mechanisms in place to ensure the user intended to take that action? Without such mechanisms, there isn't an effective way to determine whether the request is legitimate and wasn't forged by a malicious party. Depending solely on client-side attributes such as session tokens or source IP addresses isn't an effective strategy, because malicious actors can manipulate and replicate these values.

CSRF attacks take advantage of the fact that all the details of a particular action are predictable (form fields, query string parameters). They are carried out in ways that exploit other vulnerabilities, such as cross-site scripting or file inclusion, so users aren't aware that the malicious action is triggered using their credentials and active session.

Using AWS WAF to Mitigate

You can mitigate CSRF attacks by doing the following:

- Including unpredictable tokens in the HTTP request that triggers the action
- Prompting users to authenticate before sending action requests
- Introducing CAPTCHA challenges before sending action requests41

The first option is transparent to end users: forms can include unique tokens as hidden form fields, custom headers, or (less desirably) query string parameters. The latter two options can introduce extra friction for end users and are generally only implemented for sensitive action requests. Additionally, CAPTCHAs can be circumvented by motivated actors, and value combinations can repeat.42 As such, they are a less desirable mitigation control for CSRF.

You can use AWS WAF to check for the presence of those unique tokens. For example, if you decide to use a random universally unique identifier (UUIDv4)43 as the CSRF token and expect the value in a custom HTTP header named x-csrf-token, you can implement a size constraint condition:

- HEADER: x-csrf-token, transformation NONE, comparison operator EQ (equal to), size 36 (bytes/ASCII characters in the canonical format)

You would build a blocking rule that matches requests that do not satisfy this condition (negated). You can further narrow the scope of the rule by matching only POST HTTP requests, for example by building a rule that uses the negated condition above together with an additional string match condition:

- METHOD, transformation LOWERCASE, positional constraint EXACTLY, value to match: post
Other Considerations

Such rules are effective at filtering out CSRF attacks that circumvent your unique tokens. However, they aren't effective at validating whether a request carries an invalid, wrong, stale, or stolen token, because HTTP request introspection lacks access to your application context. Therefore, you need a server-side mechanism in your application to track the expected token and ensure that it's used exactly once.

As an example, the server sends a simple form to the client browser along with the embedded unique token as a hidden field. At the same time, it retains in the current server-side session store the token value it expects the browser to supply when the user submits the form. After the user submits the form, a POST request is made to the server that includes the unique hidden token. The server can safely discard any POST request that doesn't contain the expected value for the supplied session, and it should clear the value from the session store after it's used, which ensures that the value doesn't get reused.
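A minimal sketch of that server-side flow, using only the Python standard library. The in-memory dictionary stands in for a session store; a real application would use its framework's session mechanism instead.

```python
import uuid

# Illustrative in-memory session store: {session_id: expected_csrf_token}
session_store = {}

def issue_form_token(session_id: str) -> str:
    """Generate the hidden-field token when rendering the form."""
    token = str(uuid.uuid4())          # 36-character canonical UUIDv4
    session_store[session_id] = token
    return token

def validate_submission(session_id: str, submitted_token: str) -> bool:
    """Accept the POST only if the token matches, then invalidate it (single use)."""
    expected = session_store.pop(session_id, None)
    return expected is not None and submitted_token == expected
```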
A9 – Using Components with Known Vulnerabilities

Most web applications today are highly composed: they use frameworks and libraries from a variety of sources, commercial or open source. One challenge is keeping up to date with the most recent versions of these components, which is further exacerbated when the underlying libraries and frameworks use other components themselves. Using components with known vulnerabilities is one of the most prevalent attack vectors,44 and it can open up the attack surface of your web application to some of the other attack vectors discussed in this document. The decision to use such components can be an active trade-off to maintain compatibility with legacy code, or you might inadvertently use vulnerable components because the components you rely on depend on vulnerable subcomponents.

Mitigating vulnerabilities in such components is challenging because not all of them are reported and tracked by central clearinghouses such as Common Vulnerabilities and Exposures (CVE).45 This puts the responsibility on application developers to track the status of the components individually with the respective vendor, author, or provider. Often, vulnerabilities are addressed in new versions of the components, together with new enhancements, rather than as fixes to existing versions. This adds to the amount of work that developers have to perform to implement, test, and deploy the new versions of these components.

Using AWS WAF to Mitigate

The primary mechanism to mitigate known vulnerabilities in components is to have a comprehensive process in place that addresses the lifecycle of such components. You should have a way to identify and track the dependencies of your application and the dependencies of the underlying components, and a monitoring process to track the security of these components. Establish a software development process and policy that factors in the patch or release frequency of underlying components and acceptable licensing models. This can help you react quickly when component providers address vulnerabilities in their code.

Additionally, you can use AWS WAF to filter and block HTTP requests to functionality of such components that you aren't using in your applications. This helps reduce the attack surface of those components if vulnerabilities are discovered in functionality you're not using.

Does your application use server-side included components? These are usually files that contain code that is loaded at runtime to assemble the HTTP response, directly or indirectly. Examples are Apache Server Side Includes46 or code that loads via PHP include47 or require48 statements; other languages and frameworks have similar constructs. It's a best practice not to deploy these components in the public web path on your web server in the first place, although that recommendation is sometimes ignored for a variety of reasons. If these components are present in the public web path, they aren't designed to be accessed directly; nevertheless, accessing them might expose internal application information or provide vectors of attack. Consider using a string match condition to block access to such URL prefixes:

- URI, transformation URL_DECODE, positional constraint STARTS_WITH, value to match: /includes/ (or the relevant prefix in your application)

Similarly, if your application uses third-party components but only a subset of their functionality, consider blocking the exposed URL paths of the functionality you don't use, with similar AWS WAF conditions.

Other Considerations

Penetration testing can also be an effective mechanism to discover vulnerabilities.49 You can integrate it into your deployment and testing processes, both to detect potential vulnerabilities and to ensure that deployed patches correctly mitigate the targeted application flaws. The AWS Marketplace50 offers a wide range of vulnerability testing solutions from our partner vendors that are designed to help you get started easily and quickly. Keep in mind that AWS requires customers to obtain permission51 before conducting such tests on resources that are hosted in AWS. However, some of the solutions available in the AWS Marketplace have been preauthorized, so you can skip the authorization step; they are marked as such in the solution title.
A10 – Underprotected APIs

Another new category proposed for the 2017 Top 10, Underprotected APIs focuses on the target of potential attacks rather than on the specific application flaw patterns that can be exploited. This category recognizes the prevalence and anticipated future growth of APIs. Entire applications are now published without a user-facing UI; instead, they're available as APIs that other application publishers can use to build loosely coupled applications. Many applications have both user UIs and APIs, whether or not those APIs are intended to be consumed by third parties.

The attack vectors are often the same as those discussed in categories A1 through A9 and are shared with more traditional, end-user-facing web applications. However, because APIs are designed for programmatic access, they present some additional challenges for security testing. It's easier to develop security test cases for user-facing UIs, which have simpler data structures and more discrete, high-delay steps due to human interaction. In contrast, APIs are often designed to work with more complex data structures and use a wider range of request frequencies and input values. This is the case even if they're standardized and use well-known protocols such as RESTful APIs52 or SOAP.53

Using AWS WAF to Mitigate

Because the attack vectors for APIs are often the same as for traditional web applications, the mitigation mechanisms discussed throughout this document apply to APIs in a similar manner, and you can use AWS WAF in a variety of ways to mitigate these different attack vectors. A key component that needs hardening is the protocol parser itself. With standardized protocols, it's relatively easy to extrapolate the parser used: with SOAP you use XML,54 and with RESTful APIs you will likely use JSON,55 although you can also use XML, YAML,56 or other formats. Effectively securing the configuration of the parser component, and ensuring that any vulnerabilities in it are mitigated, is therefore a critical success factor. As specific input patterns are discovered that attempt to exploit flaws in the parser, you might be able to use AWS WAF string match conditions or size restrictions on the request body to block such request patterns.

Old Top 10 (2013) A10 – Unvalidated Redirects and Forwards

Most websites and web applications contain mechanisms to redirect or forward users to other pages, internal or partner sites. If these mechanisms don't validate the redirect or forward requests,57 it's possible for malicious parties to use your legitimate domain to direct users to unwanted destinations. These links use your legitimate and reputable domain to trick users.

Consider the following example. You run a video sharing site and operate a URL shortener mechanism to enable users to share videos over text messages on mobile devices. You use a script to create the URLs:

https://example.com/link?target=https%3A%2F%2Fexample.com%2Fvideo%2Fe439853%3Fpos%3D200%26mode%3Dfullscreen

Users receive a URL like the following, and it takes them to the correct content page:

https://example.com/to?vrejkR6T

If your link generator script doesn't validate the acceptable input domains for the target page, a malicious user can generate a link to an unwanted site:

https://example.com/link?target=https%3A%2F%2Fbadsite.com%2Fmalware

They can then package it and send it to users as if it originated from your site:

https://example.com/to?br09FtZ1
Using AWS WAF to Mitigate

The first step in mitigation is understanding where redirects and forwards occur in your application. Discovering which URL request patterns cause redirects, directly or indirectly, and under what conditions, helps you build a list of potentially vulnerable areas. You should perform the same analysis for any exposed third-party components that your application uses, in case they include redirect functionality.

If redirects and forwards are generated in response to HTTP requests from end users, as in the example above, you can use AWS WAF to filter the requests against a whitelist of domains that are trusted for redirect/forwarding purposes, using a string match condition that targets the HTTP request component where the target parameter is expected. In the example above, the set of conditions might look like the following:

1. A whitelist of allowed domains for redirects (block requests if no list value is matched):
- QUERY_STRING, transformation URL_DECODE, positional constraint CONTAINS, value: target=https://example.com
- QUERY_STRING, transformation URL_DECODE, positional constraint CONTAINS, value: target=https://partnersite.com

2. A match for only the specific HTTP requests that reach the redirector or router scripts:
- URI, transformation URL_DECODE, positional constraint STARTS_WITH, value: /link

You should combine these conditions in a single AWS WAF rule, which ensures that both conditions have to be met for a request to be matched.
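Defense in depth also means validating the target inside the link generator script itself. The following is a minimal, framework-agnostic sketch in Python; the allowlist contents are the illustrative domains from the example above.

```python
from urllib.parse import urlparse

# Domains that the link generator is allowed to redirect to (illustrative values).
ALLOWED_REDIRECT_HOSTS = {"example.com", "partnersite.com"}

def is_safe_redirect(target_url: str) -> bool:
    """Accept only absolute HTTPS URLs whose host is explicitly allowlisted."""
    parsed = urlparse(target_url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_REDIRECT_HOSTS

# is_safe_redirect("https://example.com/video/e439853")  -> True
# is_safe_redirect("https://badsite.com/malware")        -> False
```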
Companion CloudFormation Template

We've prepared an AWS CloudFormation template58 that contains a web ACL and the condition types and rules recommended in this document. You can use the template to provision these resources with just a few clicks (full API support is also available). Note that the template is designed as a starting point for you to build upon, not as a production-ready, comprehensive set of rules. For more information about working with CloudFormation templates, see Learn Template Basics.59 The template is available at: https://s3.us-east-2.amazonaws.com/awswaf-owasp/owasp_10_base.yml

The following example rules are included in the template:

- Bad sources of traffic. A generic IP block list rule that allows you to block requests from identified bad sources of traffic.
- Broken access control:
  - A path traversal and file injection rule that detects common file system path traversal as well as local and remote file injection (LFI/RFI) patterns, to block suspicious requests.
  - A privileged module access restriction rule that limits access for administrative modules to known source IPs only. You can configure one path prefix and source IP address through the template, and add additional patterns later by changing the conditions directly. For more information, see Creating and Configuring a Web Access Control List.60
- Broken authentication and session management. A block list that allows you to block illicit requests that use stolen or hijacked authorization credentials, such as JSON Web Tokens or session IDs.
- Cross-site request forgery (CSRF). A rule that enforces the existence of CSRF-mitigating tokens.
- Cross-site scripting (XSS). A rule that mitigates XSS attacks in common HTTP request components.
- Injection. A SQL injection rule that mitigates SQL injection attacks in common HTTP request components.
- Insufficient attack protection. A request size hygiene rule that allows you to configure the maximum size of various HTTP request components through template parameters and to block abnormal requests that exceed those maximum sizes.
- Security misconfigurations. A rule that detects some exploits of PHP-specific server misconfigurations. This rule might be less effective if you aren't running PHP-based applications, but it can still be valuable for filtering out unwanted automated HTTP requests that probe for PHP vulnerabilities.
- The use of components with known vulnerabilities. A rule that restricts access to publicly exposed URL paths that shouldn't be directly accessible, such as server-side include components or component features that aren't being used by your application.

We've chosen to package the example AWS WAF rule set as a CloudFormation template because it provides an easy and repeatable way to provision the whole rule set with a few simple clicks. The AWS CloudFormation documentation provides an easy-to-follow walkthrough about how to create a stack,61 which is a collection of resources you can manage as a single unit. Follow those instructions and provide the template on the Select Template page: choose the option to Upload a template to Amazon S3 and provide the downloaded template from your local computer, or simply paste the template URL (https://s3.us-east-2.amazonaws.com/awswaf-owasp/owasp_10_base.yml) in the Specify an Amazon S3 template URL box.

On the Specify Details page, you can configure the template's parameters. A few key parameters to emphasize are:

- Apply to WAF. This parameter allows you to select whether you want to use the template to deploy a rule set for Amazon CloudFront web distributions or for Application Load Balancers (ALB) in the current region. AWS WAF web ACLs are applied either to CloudFront web distributions or to ALBs, depending on which service you use to deliver your application. The same stack can't be used for both, but you can deploy multiple stacks. You can also change this parameter's value later by updating the stack.
- Rule effect. This parameter determines the effect of your rule set. To minimize disruption, we recommend that you start with a rule set that counts matching requests; that way you can measure the effectiveness of your rules without impacting traffic. When you're confident about the effectiveness of your rules, you can deploy a stack that blocks matching requests.

Continue following the AWS CloudFormation walkthrough instructions to deploy the stack. After you deploy the stack, you must associate the web ACL62 that's deployed by the stack with your load balancer or web distribution resources to be able to use the rule set.
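The same steps can be scripted through boto3 if you prefer not to use the console. This sketch assumes the regional (ALB) variant; the stack name, parameter keys, web ACL ID, and load balancer ARN are placeholders, and the actual parameter keys are defined by the template itself, so verify them with get_template_summary() before relying on these names.

```python
import boto3

TEMPLATE_URL = "https://s3.us-east-2.amazonaws.com/awswaf-owasp/owasp_10_base.yml"

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(
    StackName="owasp-top10-waf",
    TemplateURL=TEMPLATE_URL,
    # Parameter keys below are placeholders; the real keys come from the template.
    Parameters=[
        {"ParameterKey": "ApplyToWAF", "ParameterValue": "ALB"},
        {"ParameterKey": "RuleEffect", "ParameterValue": "COUNT"},
    ],
)
cfn.get_waiter("stack_create_complete").wait(StackName="owasp-top10-waf")

# Associate the web ACL created by the stack with an Application Load Balancer.
waf_regional = boto3.client("waf-regional", region_name="us-east-1")
waf_regional.associate_web_acl(
    WebACLId="REPLACE_WITH_WEB_ACL_ID",   # for example, from the stack outputs
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                "loadbalancer/app/my-alb/0123456789abcdef",
)
```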
Conclusion

You can use AWS WAF to help protect your websites and web applications against various attack vectors at the HTTP protocol level. As discussed in relation to the OWASP security flaws, AWS WAF is very effective at mitigating vulnerabilities to the extent that the attack patterns are detectable in HTTP requests. Additionally, you can enhance the capabilities of AWS WAF with other AWS services to build comprehensive security automations. A set of such tools is available on our website in the form of the AWS WAF Security Automations.63 These tools enable you to build a set of protections that can react to the changing types of attacks your applications might be facing. The solution provides several easy-to-deploy automations in the form of a CloudFormation template, covering rate-based IP blacklisting, reputation list IP blacklisting, scanner and probe mitigation, and bot and scraper detection and blocking.

Contributors

The following individuals and organizations contributed to this document:

- Vlad Vlasceanu, Sr. Solutions Architect, Amazon Web Services
- Sundar Jayashekar, Sr. Product Manager, Amazon Web Services
- William Reid, Sr. Manager, Amazon Web Services
- Stephen Quigg, Solutions Architect, Amazon Web Services
- Matt Nowina, Solutions Architect, Amazon Web Services
- Matt Bretan, Sr. Consultant, Amazon Web Services
- Enrico Massi, Security Solutions Architect, Amazon Web Services
- Michael St-Onge, Cloud Security Architect, Amazon Web Services
- Leandro Bennaton, Security Solutions Architect, Amazon Web Services

Further Reading

For additional information, see the following:

- AWS WAF Security Automations: https://aws.amazon.com/answers/security/aws-waf-security-automations/
- OWASP Top 10 – 2017 RC1: https://github.com/OWASP/Top10/raw/master/2017/OWASP%20Top%2010%20-%202017%20RC1-English.pdf
- OWASP Top 10 – 2013: https://www.owasp.org/index.php/Top_10_2013

Document Revisions

July 2017 – First publication

Notes

1 https://www.owasp.org/
2 https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
3 https://aws.amazon.com/waf/
4 https://aws.amazon.com/cloudfront/
5 https://aws.amazon.com/elasticloadbalancing/applicationloadbalancer/
6 https://www.owasp.org/index.php/Top_10_2013-A1-Injection
7 http://docs.aws.amazon.com/waf/latest/developerguide/web-acl-sql-conditions.html
8 http://docs.aws.amazon.com/waf/latest/developerguide/web-acl-string-conditions.html
9 https://www.owasp.org/index.php/Top_10_2013-A2-Broken_Authentication_and_Session_Management
10 https://jwt.io/
11 http://docs.aws.amazon.com/waf/latest/APIReference/Welcome.html
12 https://www.owasp.org/index.php/Top_10_2013-A3-Cross-Site_Scripting_(XSS)
13 http://docs.aws.amazon.com/waf/latest/developerguide/web-acl-xss-conditions.html
14 https://en.wikipedia.org/wiki/Content_management_system
15 https://developer.mozilla.org/en-US/docs/Web/SVG
16 https://www.owasp.org/index.php/Top_10_2013-A4-Insecure_Direct_Object_References
17 https://www.owasp.org/index.php/Top_10_2013-A7-Missing_Function_Level_Access_Control
18 https://en.wikipedia.org/wiki/Directory_traversal_attack
19 https://en.wikipedia.org/wiki/File_inclusion_vulnerability
20 http://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html
21 https://www.owasp.org/index.php/Top_10_2013-A5-Security_Misconfiguration
22 https://aws.amazon.com/inspector/
23 https://www.cisecurity.org/cis-benchmarks/
24 https://aws.amazon.com/config/
25 https://aws.amazon.com/ec2/systems-manager/
26 https://www.owasp.org/index.php/Top_10_2013-A6-Sensitive_Data_Exposure
27 https://en.wikipedia.org/wiki/Cipher
28 https://en.wikipedia.org/wiki/SHA-1
29 https://shattered.io/
30 http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html
31 http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-ssl-security-policy.html
32 http://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
33 http://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
34 http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
35 http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesMinimumSSLProtocolVersion
36 http://docs.aws.amazon.com/waf/latest/developerguide/web-acl-size-conditions.html
37 https://aws.amazon.com/answers/security/aws-waf-security-automations/
38 https://aws.amazon.com/lambda/
39 https://en.wikipedia.org/wiki/Robots_exclusion_standard
40 https://www.owasp.org/index.php/Top_10_2013-A8-Cross-Site_Request_Forgery_(CSRF)
41 https://en.wikipedia.org/wiki/CAPTCHA
42 https://en.wikipedia.org/wiki/CAPTCHA#Circumvention
43 https://en.wikipedia.org/wiki/Universally_unique_identifier
44 https://www.owasp.org/index.php/Top_10_2013-A9-Using_Components_with_Known_Vulnerabilities
45 http://cve.mitre.org/
46 https://httpd.apache.org/docs/current/howto/ssi.html
47 http://php.net/manual/en/function.include.php
48 http://php.net/manual/en/function.require.php
49 https://en.wikipedia.org/wiki/Penetration_test
50 https://aws.amazon.com/marketplace/search/results?x=0&y=0&searchTerms=vulnerability+scanner&page=1&ref_=nav_search_box
51 https://aws.amazon.com/security/penetration-testing/
52 https://en.wikipedia.org/wiki/Representational_state_transfer
53 https://en.wikipedia.org/wiki/SOAP
54 https://www.w3.org/XML/
55 http://www.json.org/
56 http://yaml.org/
57 https://www.owasp.org/index.php/Top_10_2013-A10-Unvalidated_Redirects_and_Forwards
58 https://aws.amazon.com/cloudformation/
59 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/gettingstarted.templatebasics.html
60 http://docs.aws.amazon.com/waf/latest/developerguide/web-acl.html
61 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html
62 http://docs.aws.amazon.com/waf/latest/developerguide/web-acl-working-with.html#web-acl-associating-cloudfront-distribution
63 https://aws.amazon.com/answers/security/aws-waf-security-automations/
|
General
|
consultant
|
Best Practices
|
Using_AWS_for_Disaster_Recovery
|
Using Amazon Web Services for Disaster Recovery

October 2014

Glen Robinson, Attila Narin, and Chris Elleman

This paper has been archived. For the latest information on disaster recovery, see https://aws.amazon.com/disaster-recovery/

Contents

Introduction
Recovery Time Objective and Recovery Point Objective
Traditional DR Investment Practices
AWS Services and Features Essential for Disaster Recovery
Example Disaster Recovery Scenarios with AWS
  Backup and Restore
  Pilot Light for Quick Recovery into AWS
  Warm Standby Solution in AWS
  Multi-Site Solution Deployed on AWS and On-Site
  AWS Production to an AWS DR Solution Using Multiple AWS Regions
Replication of Data
Failing Back from a Disaster
Improving Your DR Plan
Software Licensing and DR
Conclusion
Further Reading
Document Revisions

Abstract

In the event of a disaster, you can quickly launch resources in Amazon Web Services (AWS) to ensure business continuity. This whitepaper highlights AWS services and features that you can leverage for your disaster recovery (DR) processes to significantly minimize the impact on your data, your system, and your overall business operations. The whitepaper also includes scenarios that show you, step by step, how to improve your DR plan and leverage the full potential of the AWS cloud for disaster recovery.

Introduction

Disaster recovery (DR) is about preparing for and recovering from a disaster. Any event that has a negative impact on a company's business continuity or finances could be termed a disaster. This includes hardware or software failure, a network outage, a power outage, physical damage to a building like fire or flooding, human error, or some other significant event.

To minimize the impact of a disaster, companies invest time and resources to plan and prepare, to train employees, and to document and update processes. The amount of investment in DR planning for a particular system can vary dramatically depending on the cost of a potential outage. Companies that have traditional physical environments typically must duplicate their infrastructure to ensure the availability of spare capacity in the event of a disaster. The infrastructure needs to be procured, installed, and maintained so that it is ready to support the anticipated capacity requirements; during normal operations, it is typically underutilized or overprovisioned.

With Amazon Web Services (AWS), your company can scale up its infrastructure on an as-needed, pay-as-you-go basis. You get access to the same highly secure, reliable, and fast infrastructure that Amazon uses to run its own global network of websites. AWS also gives you the flexibility to quickly change and optimize resources during a DR event, which can result in significant cost savings. This whitepaper outlines best practices to improve your DR processes, from minimal investments to full-scale availability and fault tolerance, and shows you how you can use AWS services to reduce cost and ensure business continuity during a DR event.

Recovery Time Objective and Recovery Point Objective

This whitepaper uses two common industry terms for disaster planning:
- Recovery time objective (RTO)1 – The time it takes after a disruption to restore a business process to its service level, as defined by the operational level agreement (OLA). For example, if a disaster occurs at 12:00 PM (noon) and the RTO is eight hours, the DR process should restore the business process to the acceptable service level by 8:00 PM.

- Recovery point objective (RPO)2 – The acceptable amount of data loss measured in time. For example, if a disaster occurs at 12:00 PM (noon) and the RPO is one hour, the system should recover all data that was in the system before 11:00 AM. Data loss will span only one hour, between 11:00 AM and 12:00 PM (noon).

A company typically decides on an acceptable RTO and RPO based on the financial impact to the business when systems are unavailable. The company determines financial impact by considering many factors, such as the loss of business and the damage to its reputation due to downtime and the lack of systems availability. IT organizations then plan solutions to provide cost-effective system recovery based on the RPO, within the timeline and the service level established by the RTO.

Traditional DR Investment Practices

A traditional approach to DR involves different levels of off-site duplication of data and infrastructure. Critical business services are set up and maintained on this infrastructure and tested at regular intervals. The disaster recovery environment's location and the source infrastructure should be a significant physical distance apart to ensure that the disaster recovery environment is isolated from faults that could impact the source site. At a minimum, the infrastructure that is required to support the duplicate environment should include the following:

- Facilities to house the infrastructure, including power and cooling
- Security to ensure the physical protection of assets
- Suitable capacity to scale the environment
- Support for repairing, replacing, and refreshing the infrastructure
- Contractual agreements with an Internet service provider (ISP) to provide Internet connectivity that can sustain bandwidth utilization for the environment under a full load
- Network infrastructure such as firewalls, routers, switches, and load balancers
- Enough server capacity to run all mission-critical services, including storage appliances for the supporting data, and servers to run applications and backend services such as user authentication, Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), monitoring, and alerting

1 From http://en.wikipedia.org/wiki/Recovery_time_objective
2 From http://en.wikipedia.org/wiki/Recovery_point_objective

AWS Services and Features Essential for Disaster Recovery

Before we discuss the various approaches to DR, it is important to review the AWS services and features that are the most relevant to disaster recovery. This section provides a summary.

In the preparation phase of DR, it is important to consider the use of services and features that support data migration and durable storage, because they enable you to restore backed-up critical data to AWS when disaster strikes. For some of the scenarios that involve either a scaled-down or a fully scaled deployment of your system in AWS, compute resources will be required as well. When reacting to a disaster, it is essential either to quickly commission compute resources to run your system in AWS or to orchestrate the failover to already running resources in AWS. The essential infrastructure pieces include DNS, networking features, and various Amazon Elastic Compute Cloud (Amazon EC2) features described later in this section.
Regions

Amazon Web Services are available in multiple regions around the globe, so you can choose the most appropriate location for your DR site, in addition to the site where your system is fully deployed. AWS has multiple general-purpose regions in the Americas, EMEA, and Asia Pacific that anyone with an AWS account can access. Special-use regions are also available for government agencies and for China. See the full list of available regions here.

Storage

Amazon Simple Storage Service (Amazon S3) provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Objects are redundantly stored on multiple devices across multiple facilities within a region, designed to provide a durability of 99.999999999% (11 9s). AWS provides further protection for data retention and archiving through versioning in Amazon S3, AWS multi-factor authentication (AWS MFA), bucket policies, and AWS Identity and Access Management (IAM).

Amazon Glacier provides extremely low-cost storage for data archiving and backup. Objects (or archives, as they are known in Amazon Glacier) are optimized for infrequent access, for which retrieval times of several hours are adequate. Amazon Glacier is designed for the same durability as Amazon S3.

Amazon Elastic Block Store (Amazon EBS) provides the ability to create point-in-time snapshots of data volumes. You can use the snapshots as the starting point for new Amazon EBS volumes, and you can protect your data for long-term durability because snapshots are stored within Amazon S3. After a volume is created, you can attach it to a running Amazon EC2 instance. Amazon EBS volumes provide off-instance storage that persists independently from the life of an instance and is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component.
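As a small illustration of how snapshots fit into DR preparation, the following boto3 sketch creates a snapshot of a data volume and copies it into a second region that serves as the DR site. The volume ID and the two regions are placeholders for your own values.

```python
import boto3

SOURCE_REGION = "us-east-1"   # region where the workload runs (placeholder)
DR_REGION = "us-west-2"       # region chosen as the DR site (placeholder)
VOLUME_ID = "vol-0123456789abcdef0"

# 1. Create a point-in-time snapshot of the data volume.
ec2_src = boto3.client("ec2", region_name=SOURCE_REGION)
snapshot = ec2_src.create_snapshot(
    VolumeId=VOLUME_ID,
    Description="Nightly DR snapshot",
)
ec2_src.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# 2. Copy the completed snapshot into the DR region.
ec2_dr = boto3.client("ec2", region_name=DR_REGION)
copy = ec2_dr.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=snapshot["SnapshotId"],
    Description="DR copy of " + snapshot["SnapshotId"],
)
print("DR snapshot:", copy["SnapshotId"])
```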
AWS Import/Export accelerates moving large amounts of data into and out of AWS by using portable storage devices for transport. AWS Import/Export bypasses the Internet and transfers your data directly onto and off of storage devices by means of the high-speed internal network of Amazon. For data sets of significant size, AWS Import/Export is often faster than Internet transfer and more cost-effective than upgrading your connectivity. You can use AWS Import/Export to migrate data into and out of Amazon S3 buckets and Amazon Glacier vaults, or into Amazon EBS snapshots.

AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage to provide seamless and highly secure integration between your on-premises IT environment and the storage infrastructure of AWS. AWS Storage Gateway supports three different configurations:

- Gateway-cached volumes – You can store your primary data in Amazon S3 and retain your frequently accessed data locally. Gateway-cached volumes provide substantial cost savings on primary storage, minimize the need to scale your storage on-premises, and retain low-latency access to your frequently accessed data.

- Gateway-stored volumes – In the event that you need low-latency access to your entire data set, you can configure your gateway to store your primary data locally and asynchronously back up point-in-time snapshots of this data to Amazon S3. Gateway-stored volumes provide durable and inexpensive off-site backups that you can recover locally or from Amazon EC2 if, for example, you need replacement capacity for disaster recovery.

- Gateway-virtual tape library (gateway-VTL) – With gateway-VTL, you can have an almost limitless collection of virtual tapes. You can store each virtual tape in a virtual tape library (VTL) backed by Amazon S3 or a virtual tape shelf (VTS) backed by Amazon Glacier. The virtual tape library exposes an industry-standard iSCSI interface that provides your backup application with online access to the virtual tapes. When you no longer require immediate or frequent access to data contained on a virtual tape, you can use your backup application to move it from its VTL to your VTS to further reduce your storage costs.

Compute

Amazon Elastic Compute Cloud (Amazon EC2) provides resizable compute capacity in the cloud. Within minutes, you can create Amazon EC2 instances, which are virtual machines over which you have complete control. In the context of DR, the ability to rapidly create virtual machines that you can control is critical. Describing every feature of Amazon EC2 is outside the scope of this document; instead, we focus on the aspects of Amazon EC2 that are most relevant to DR.

Amazon Machine Images (AMIs) are preconfigured with operating systems, and some preconfigured AMIs might also include application stacks. You can also configure your own AMIs. In the context of DR, we strongly recommend that you configure and identify your own AMIs so that they can launch as part of your recovery procedure. Such AMIs should be preconfigured with your operating system of choice plus appropriate pieces of the application stack.

Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones. They also provide inexpensive, low-latency network connectivity to other Availability Zones in the same region. By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location. Regions consist of one or more Availability Zones.

The Amazon EC2 VM Import Connector virtual appliance enables you to import virtual machine images from your existing environment to Amazon EC2 instances.

Networking

When you are dealing with a disaster, it's very likely that you will have to modify network settings as your system fails over to another site. AWS offers several services and features that enable you to manage and modify network settings.

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. It gives developers and businesses a reliable, cost-effective way to route users to Internet applications. Amazon Route 53 includes a number of global load-balancing capabilities (which can be effective when you are dealing with DR scenarios, such as DNS endpoint health checks) and the ability to fail over between multiple endpoints and even static websites hosted in Amazon S3.

Elastic IP addresses are static IP addresses designed for dynamic cloud computing. However, unlike traditional static IP addresses, Elastic IP addresses enable you to mask instance or Availability Zone failures by programmatically remapping your public IP addresses to instances in your account in a particular region. For DR, you can also pre-allocate some IP addresses for the most critical systems so that their IP addresses are already known before disaster strikes. This can simplify the execution of the DR plan.
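The remapping described above is a single API call. A minimal boto3 sketch follows, with placeholder allocation and instance IDs.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Re-point a pre-allocated Elastic IP at the standby instance during failover.
ec2.associate_address(
    AllocationId="eipalloc-0123456789abcdef0",   # pre-allocated Elastic IP (placeholder)
    InstanceId="i-0123456789abcdef0",            # recovery instance (placeholder)
    AllowReassociation=True,                     # take the address over from the failed instance
)
```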
even greater fault tolerance in your applications by seamlessly providing the loadbalancing capacity that is needed in response to incoming application traffic Just as you can preallocate Elastic IP addresses you can preallocate your load balancer so that its DNS name is already known which can simplify the execution of your DR plan Amazon Virtual Private Cloud (Amazon VPC) lets you provision a private isolated section of the A WS cloud where you can launch AWS resources in a virtual network that you define You have complete control over your virtual networking environment including selection of your own IP address range creation of subnets and configuration of route tables and network gateways This enables you to create a VPN connection between your corporate data center and your VPC and leverage the AWS cloud as an extension of your corporate data center In the context of DR you can use Amazon VPC to extend your existing network topology to the cloud; this can be especially appropriate when recovering enterprise applications that are typically on the internal network Amazon Direct Connect makes it easy to set up a dedicated network connection from your premises to AWS In many cases this can reduce your network costs increase bandwidth throughput and provide a more consistent n etwork experience than Internetbased connections Databases For your database needs consider using these AWS services: Amazon Relational Database Service (Amazon RDS) makes it easy to set up operate and scale a relational database in the cloud You c an use Amazon RDS either in the preparation phase for DR to hold your critical data in a database that is already running or in the recovery phase to run your production database When you want to look at multiple regions Amazon RDS gives you the ability to snapshot data from one region to another and also to have a read replica running in another region Amazon DynamoDB is a fast fully managed NoSQL database service that makes it simple and costeffective to store and retrieve any amount of data and serve any level of request traffic It has reliable throughput and singledigit millisecond latency You can also use it in the preparation phase to copy data to DynamoDB in another region or to Amazon S3 During the recovery phase of DR you can scale up seamlessly in a matter of minutes with a single click or API call Amazon Redshift is a fast fully managed petabytescale data warehouse service that makes it simple and costeffective to efficiently analyze all your data using your existing business intelligence tools You can use Amazon Redshift in the preparation phase to snapshot your data warehouse to be durably stored in Amazon S3 within the same region or copied to another region During the recovery phase of DR you can quickly restore your data warehouse into the same region or within another AWS region You can also install and run your choice of database software on Amazon EC2 and you can choose from a variety of leading database systems For more information about database options on AWS see Running Databases on AWS ArchivedAmazon Web Services – Using AWS for Disaster Recovery October 2014 Page 8 of 22 Deployment orchestration Deployment automation and poststartup software installation/configuration processes and tools can be used in Amazon EC2 We highly recommend investments in this area This can be very helpful in the recovery phase enabling you to create the required set of resources in an automated way AWS CloudFormation gives developers and systems administrators an easy 
way to create a collection of related AWS resources and provision them in an orderly and predictable fashion You can create templates for your environments and deploy associated collections of resources (called a stack) as needed AWS Elastic Beanstalk is an easy touse service for deploying and scaling web applications and services developed with Java NET PHP Nodejs Python Ruby and Docker You can deploy your application code and AWS Elastic Beanstalk will provision the operating environment for your applications AWS OpsWorks is an application management service that makes it easy to deploy and operate applications of all types and sizes You can define your environment as a series of layers and configure each layer as a tier of your application AWS OpsWorks has automatic host replacement so in the event of an instance failure it will be automatically replaced You can use AWS OpsWorks in the preparation phase to template your environment and you can combine it with AWS CloudFormation in the recovery phase You can quickly provision a new stack from the stored configuration that supports the defined RTO Security and complian ce There are many securityrelated features across the AWS services We recommend that you review the Security Best Practices whitepaper AWS also provides further risk and compliance information in the AWS Security Center A full discussion of security is out of scope for this paper ArchivedAmazon Web Services – Using AWS for Disaster Recovery October 2014 Page 9 of 22 Example Disaster Recovery Scenarios with AWS This section outlines four DR scenarios that highlight the use of AWS and compare AWS with traditional DR methods The following figure shows a spectrum for the four scenarios arranged by how quickly a system can be available to users after a DR event Figure 1: Spectrum of Disaster Recovery Options AWS enables you to costeffectively operate each of these DR strategies It’s important to note that these are just examples of possible approaches and variations and combinations of these are possible If your application is already running on AWS then multiple regions can be employed and the same DR strategies will still apply Backup and Restore In most traditional environments data is backed up to tape and sent offsite regularly If you use this method it can take a long time to restore your system in the event of a disruption or disaster Amazon S3 is an ideal destination for backup data that might be needed quickly to perform a restore Transferring data to and from Amazon S3 is typically done through the network and is therefore accessible from any location There are many commercial and opensource backup solutions that integrate with Amazon S3 You can use AWS Import/Export to transfer very large data sets by shipping storage devices directly to AWS For longerterm data storage where retrieval times of several hours are adequate there is Amazon Glacier which has the same durability model as Amazon S3 Amazon Glacier is a lowcost alternative starting from $001/GB per month Amazon Glacier and Amazon S3 can be used in conjunction to produce a tiered backup solution AWS Storage Gateway enables snapshots of your onpremises data volumes to be transparently copied into Amazon S3 for backup You can subsequently create local volumes or Amazon EBS volumes from these snapshots Storagecached volumes allow you to store your primary data in Amazon S3 but keep your frequently accessed data local for low latency access As with AWS Storage Gateway you can snapshot the data volumes to give 
highly durable backup. In the event of DR, you can restore the cache volumes either to a second site running a storage cache gateway or to Amazon EC2.

You can use the gateway-VTL configuration of AWS Storage Gateway as a backup target for your existing backup management software. This can be used as a replacement for traditional magnetic tape backup.

For systems running on AWS, you also can back up into Amazon S3. Snapshots of Amazon EBS volumes, Amazon RDS databases, and Amazon Redshift data warehouses can be stored in Amazon S3. Alternatively, you can copy files directly into Amazon S3, or you can choose to create backup files and copy those to Amazon S3. There are many backup solutions that store data directly in Amazon S3, and these can be used from Amazon EC2 systems as well.

The following figure shows data backup options to Amazon S3 from either on-site infrastructure or from AWS.

Figure 2: Data Backup Options to Amazon S3 from On-Site Infrastructure or from AWS

Of course, the backup of your data is only half of the story. If disaster strikes, you'll need to recover your data quickly and reliably. You should ensure that your systems are configured to retain and secure your data, and you should test your data recovery processes. The following diagram shows how you can quickly restore a system from Amazon S3 backups to Amazon EC2.

Figure 3: Restoring a System from Amazon S3 Backups to Amazon EC2

Key steps for backup and restore:
1. Select an appropriate tool or method to back up your data into AWS.
2. Ensure that you have an appropriate retention policy for this data.
3. Ensure that appropriate security measures are in place for this data, including encryption and access policies.
4. Regularly test the recovery of this data and the restoration of your system.

Pilot Light for Quick Recovery into AWS

The term pilot light is often used to describe a DR scenario in which a minimal version of an environment is always running in the cloud. The idea of the pilot light is an analogy that comes from the gas heater: in a gas heater, a small flame that is always on can quickly ignite the entire furnace to heat up a house.

This scenario is similar to a backup-and-restore scenario. For example, with AWS you can maintain a pilot light by configuring and running the most critical core elements of your system in AWS. When the time comes for recovery, you can rapidly provision a full-scale production environment around the critical core.

Infrastructure elements for the pilot light itself typically include your database servers, which would replicate data to Amazon EC2 or Amazon RDS. Depending on the system, there might be other critical data outside of the database that needs to be replicated to AWS. This is the critical core of the system (the pilot light) around which all other infrastructure pieces in AWS (the rest of the furnace) can quickly be provisioned to restore the complete system.

To provision the remainder of the infrastructure to restore business-critical services, you would typically have some preconfigured servers bundled as Amazon Machine Images (AMIs), which are ready to be started up at a moment's notice. When starting recovery, instances from these AMIs come up quickly with their predefined role (for example, Web or App Server) within the deployment around the pilot light; a brief sketch of building such an AMI follows.
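The AMI preparation described above can be scripted. The following is a minimal sketch using the AWS SDK for Python (boto3); the instance ID, image name, tags, Region, and instance type are illustrative placeholders rather than values from this paper, and the exact calls you use will depend on your own tooling.

import boto3

# Region of the DR site; a placeholder, not a recommendation.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an AMI from a preconfigured application server so that it can be
# launched at a moment's notice during recovery.
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",        # placeholder instance ID
    Name="pilot-light-web-server-v1",        # placeholder image name
    Description="Preconfigured web tier image for DR recovery",
    NoReboot=True,                           # avoid rebooting the running server
)
image_id = response["ImageId"]

# Wait for the image to become available before relying on it for recovery.
ec2.get_waiter("image_available").wait(ImageIds=[image_id])

# Tag the image so recovery automation can find the latest image per role.
ec2.create_tags(
    Resources=[image_id],
    Tags=[{"Key": "dr-role", "Value": "web"},
          {"Key": "dr-tier", "Value": "pilot-light"}],
)

# During recovery, instances can be launched from the tagged AMI, sized for
# production load as needed.
ec2.run_instances(
    ImageId=image_id,
    InstanceType="m3.large",                 # placeholder instance type
    MinCount=1,
    MaxCount=2,
)
print("Registered recovery image", image_id)

In practice you would run the image-creation step on a schedule (or after configuration changes) so that the pilot light AMIs stay current.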
From a networking point of view, you have two main options for provisioning:

• Use Elastic IP addresses, which can be preallocated and identified in the preparation phase for DR, and associate them with your instances. Note that for MAC address-based software licensing you can use elastic network interfaces (ENIs), which have a MAC address that can also be preallocated to provision licenses against. You can associate these with your instances just as you would with Elastic IP addresses.

• Use Elastic Load Balancing (ELB) to distribute traffic to multiple instances. You would then update your DNS records to point at your Amazon EC2 instance, or point to your load balancer using a CNAME. We recommend this option for traditional web-based applications.

For less critical systems, you can ensure that you have any installation packages and configuration information available in AWS, for example in the form of an Amazon EBS snapshot. This will speed up the application server setup, because you can quickly create multiple volumes in multiple Availability Zones to attach to Amazon EC2 instances. You can then install and configure accordingly, for example by using the backup-and-restore method.

The pilot light method gives you a quicker recovery time than the backup-and-restore method, because the core pieces of the system are already running and are continually kept up to date. AWS enables you to automate the provisioning and configuration of the infrastructure resources, which can be a significant benefit to save time and help protect against human errors. However, you will still need to perform some installation and configuration tasks to recover the applications fully.

Preparation phase

The following figure shows the preparation phase, in which you need to have your regularly changing data replicated to the pilot light, the small core around which the full environment will be started in the recovery phase. Your less frequently updated data, such as operating systems and applications, can be periodically updated and stored as AMIs.

Figure 4: The Preparation Phase of the Pilot Light Scenario

Key steps for preparation:
1. Set up Amazon EC2 instances to replicate or mirror data.
2. Ensure that you have all supporting custom software packages available in AWS.
3. Create and maintain AMIs of key servers where fast recovery is required.
4. Regularly run these servers, test them, and apply any software updates and configuration changes.
5. Consider automating the provisioning of AWS resources.

Recovery phase

To recover the remainder of the environment around the pilot light, you can start your systems from the AMIs within minutes on the appropriate instance types. For your dynamic data servers, you can resize them to handle production volumes as needed or add capacity accordingly. Horizontal scaling often is the most cost-effective and scalable approach to add capacity to a system; for example, you can add more web servers at peak times. However, you can also choose larger Amazon EC2 instance types, and thus scale vertically, for applications such as relational databases. From a networking perspective, any required DNS updates can be done in parallel.

After recovery, you should ensure that redundancy is restored as quickly as possible. A failure of your DR environment shortly after your production environment fails is unlikely, but you should be aware of this risk. Continue to take regular backups of your system, and consider additional redundancy at the data layer. The following
figure shows the recovery phase of the pilot light scenario Figure 5: The Recovery Phase of the Pilot Light Scenario Key steps for recovery: 1 Start your application Amazon EC2 instances from your custom AMIs 2 Resize existing database/data store instances to process the increased traffic 3 Add additional database/data store instances to give the DR site resilience in the data tier; if you are using Amazon RDS turn on MultiAZ to improve resilience 4 Change DNS to point at the Amazon EC2 servers 5 Install and configure any nonAMI based systems ideally in an automated way ArchivedAmazon Web Services – Using AWS for Disaster Recovery October 2014 Page 14 of 22 Warm Standby Solution in AWS The term warm standby is used to describe a DR scenario in which a scaled down version of a fully functional environment is always running in the cloud A warm standby solution exten ds the pilot light elements and preparation It further decreases the recovery time because some services are always running By identifying your business critical systems you can fully duplicate these systems on AWS and have them always on These servers can be running on a minimumsized fleet of Amazon EC2 instances on the smallest sizes possible This solution is not scaled to take a fullproduction load but it is fully functional It can be used for nonproduction work such as testing quality assurance and internal use In a disaster the system is scaled up quickly to handle the production load In AWS this can be done by adding more instances to the load balancer and by resizing the small capacity servers to run on larger Amazon EC2 instance typ es As stated in the preceding section horizontal scaling is preferred over vertical scaling Preparation phase The following figure shows the preparation phase for a warm standby solution in which an onsite solution and an AWS solution run side byside Figure 6: The Preparation Phase of the Warm Standby Scenario ArchivedAmazon Web Services – Using AWS for Disaster Recovery October 2014 Page 15 of 22 Key steps for preparation: 1 Set up Amazon EC2 instances to replicate or mirror data 2 Create and maintain AMIs 3 Run your application using a minimal footprint of Amazon EC2 instances or AWS infrastructure 4 Patch and update software and configuration files in line with your live environment Recovery phase In the case of failure of the production system the standby environment will be scaled up for production load and DNS records will be changed to route all traffic to AWS Figure 7: The Recovery Phase of the Warm Standby Scenario Key steps for recovery: 1 Increase the size of the Amazon EC2 fleets in service with the load balancer (horizontal scaling) 2 Start applications on larger Amazon EC2 instance types as needed (vertical scaling) 3 Either manually change the DNS records or use Amazon Route 53 automated health checks so that all traffic is routed to the AWS environment 4 Consider using Auto Scaling to rightsize the fleet or accommodate the increased load 5 Add resilience or scale up your database ArchivedAmazon Web Services – Using AWS for Disaster Recovery October 2014 Page 16 of 22 MultiSite Solution Deployed on AWS and OnSite A multisite solution runs in AWS as well as on your existing onsite infrastructure in an activeactive configuration The data replication method that you employ will be determined by the recovery point that you choose For more information about recovery point options see the Recovery Time Objective and Recovery Point Objective section in this whitepaper In addition to 
recovery point options there are various replication methods such as synchronous and asynchronous methods For more information see the Replication of Data section in this whitepaper You can use a DNS service that supports weighted routing such as Amazon Route 53 to route production traffic to different sites that deliver the same application or service A proportion of traffic will go to your infrastructure in AWS and the remainder will go to your onsite infrastructure In an onsite disaster situation you can adjust the DNS weighting and send all traffic to the AWS servers The capacity of the AWS service can be rapidly increased to handle the full production load You can use Amazon EC2 Auto Scaling to automate this process You might need some application logic to detect the failure of the primary database services and cut over to the parallel database services running in AWS The cost of this scenario is determined by how much production traffic is handled by AWS during normal operation In the recovery phase you pay only for what you use for the duration that the DR environment is required at full scale You can further reduce cost by purchasing Amazon EC2 Reserved Instances for your “always on” AWS servers Preparation phase The following figure shows how you can use the weighted routing policy of the Amazon Route 53 DNS to route a portion of your traffic to the AWS site The application on AWS might access data sources in the onsite production system Data is replicated or mirrored to the AWS infrastructure Figure 8 : The Preparation Phase of the MultiSite Scenario ArchivedAmazon Web Services – Using AWS for Disaster Recovery October 2014 Page 17 of 22 Key steps for preparation: 1 Set up your AWS environment to duplicate your production environment 2 Set up DNS weighting or similar traffic routing technology to distribute incoming requests to both sites Configure automated failover to reroute traffic away from the affected site Recovery phase The following figure shows the change in traffic routing in the event of an onsite disaster Traffic is cut over to the AWS infrastructure by updating DNS and all traffic and supporting data queries are supported by the AWS infrastructure Figure 9: The Recovery Phase of the MultiSite Scenario Involving OnSite and AWS Infrastructure Key steps for recovery: 1 Either manually or by using DNS failover change the DNS weighting so that all requests are sent to the AWS site 2 Have application logic for failover to use the local AWS database servers for all queries 3 Consider using Auto Scaling to automatically rightsize the AWS fleet You can further increase the availability of your multisite solution by designing MultiAZ architectures For more information about how to design applications that span multiple availability zones see the Building FaultTolerant Applications on AWS whitepaper ArchivedAmazon Web Services – Using AWS for Disaster Recovery October 2014 Page 18 of 22 AWS Production to an AWS DR Solution Using Multiple AWS Regions Applications deployed on AWS have multisite capability by means of multiple Availability Zones Availability Zones are distinct locations that are engineered to be insulated from each other They provide inexpensive lowlatency network connectivity within the same region Some applications might have an additional requirement to deploy their components using multiple regions; this can be a business or regulatory requirement Any of the preceding scenarios in this whitepaper can be deployed using separate AWS regions The advantages for 
both production and DR scenarios include the following:

• You don't need to negotiate contracts with another provider in another region.
• You can use the same underlying AWS technologies across regions.
• You can use the same tools or APIs.

For more information, see the Migrating AWS Resources to a New Region whitepaper.

Replication of Data

When you replicate data to a remote location, you should consider these factors:

• Distance between the sites — Larger distances typically are subject to more latency or jitter.
• Available bandwidth — The breadth and variability of the interconnections.
• Data rate required by your application — The data rate should be lower than the available bandwidth.
• Replication technology — The replication technology should be parallel (so that it can use the network effectively).

There are two main approaches for replicating data: synchronous and asynchronous.

Synchronous replication

Data is atomically updated in multiple locations. This puts a dependency on network performance and availability. In AWS, Availability Zones within a region are well connected but physically separated. For example, when deployed in Multi-AZ mode, Amazon RDS uses synchronous replication to duplicate data in a second Availability Zone. This ensures that data is not lost if the primary Availability Zone becomes unavailable.

Asynchronous replication

Data is not atomically updated in multiple locations. It is transferred as network performance and availability allows, and the application continues to write data that might not be fully replicated yet. Many database systems support asynchronous data replication. The database replica can be located remotely, and the replica does not have to be completely synchronized with the primary database server. This is acceptable in many scenarios, for example as a backup source or for reporting/read-only use cases. In addition to database systems, you can also extend it to network file systems and data volumes.

We recommend that you understand the replication technology used in your software solution. A detailed analysis of replication technology is beyond the scope of this paper.

AWS regions are completely independent of each other, but there are no differences in the way you access them and use them. This enables you to create DR processes that span continental distances without the challenges or costs that this would normally incur. You can back up data and systems to two or more AWS regions, allowing service restoration even in the face of extremely large-scale disasters (a brief cross-Region copy sketch follows the failback steps below). You can use AWS regions to serve your users around the globe with relatively low complexity to your operational processes.

Failing Back from a Disaster

Once you have restored your primary site to a working state, you will need to restore your normal service, which is often referred to as a "fail back." Depending on your DR strategy, this typically means reversing the flow of data replication so that any data updates received while the primary site was down can be replicated back without the loss of data. The following steps outline the different failback approaches:

Backup and restore
1. Freeze data changes to the DR site.
2. Take a backup.
3. Restore the backup to the primary site.
4. Repoint users to the primary site.
5. Unfreeze the changes.

Pilot light, warm standby, and multi-site
1. Establish reverse mirroring/replication from the DR site back to the primary site, once the primary site has caught up with the changes.
2. Freeze data changes to the DR site.
3. Repoint users to the primary site.
4. Unfreeze the changes.
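To make the multi-Region backup idea above concrete, here is a minimal sketch using the AWS SDK for Python (boto3) that copies the most recent tagged EBS snapshot into a second Region. The Region names, tag key, and ownership filter are assumptions for illustration; your own backup tooling, tagging scheme, and retention policy will differ.

import boto3

SOURCE_REGION = "us-east-1"   # placeholder primary Region
DR_REGION = "us-west-2"       # placeholder DR Region

source_ec2 = boto3.client("ec2", region_name=SOURCE_REGION)
dr_ec2 = boto3.client("ec2", region_name=DR_REGION)

# Find the most recent snapshot of the production data volume, selected by a
# hypothetical "dr-backup" tag that your backup process would apply.
snapshots = source_ec2.describe_snapshots(
    OwnerIds=["self"],
    Filters=[{"Name": "tag:dr-backup", "Values": ["true"]}],
)["Snapshots"]
if not snapshots:
    raise SystemExit("No tagged snapshots found to copy")
latest = max(snapshots, key=lambda s: s["StartTime"])

# Copy it into the DR Region so that volumes can be recreated there if the
# primary Region becomes unavailable.
copy = dr_ec2.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=latest["SnapshotId"],
    Description="DR copy of " + latest["SnapshotId"],
)
print("Started cross-Region copy:", copy["SnapshotId"], "in", DR_REGION)

A scheduled job running this kind of copy, combined with lifecycle rules that prune old copies, keeps a recent recovery point available in the second Region.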
Improving Your DR Plan

This section describes the important steps you should follow to establish a strong DR plan.

Testing

After your DR solution is in place, it needs to be tested. You can test frequently, which is one of the key advantages of deploying on AWS. A "game day" is when you exercise a failover to the DR environment, ensuring that sufficient documentation is in place to make the process as simple as possible should the real event take place. Spinning up a duplicate environment for testing your game day scenarios is quick and cost-effective on AWS, and you typically don't need to touch your production environment. You can use AWS CloudFormation to deploy complete environments on AWS. This uses a template to describe the AWS resources and any associated dependencies or runtime parameters that are required to create a full environment.

Differentiating your tests is key to ensuring that you are covered against a multitude of different types of disasters. The following are examples of possible game day scenarios:

• Power loss to a site or a set of servers
• Loss of ISP connectivity to a single site
• A virus impacting core business services across multiple sites
• User error that causes the loss of data, requiring a point-in-time recovery

Monitoring and alerting

You need to have regular checks and sufficient monitoring in place to alert you when your DR environment has been impacted by server failure, connectivity issues, or application issues. Amazon CloudWatch provides access to metrics about AWS resources, as well as custom metrics that can be application-centric or even business-centric. You can set up alarms based on defined thresholds on any of the metrics, and where required you can set up Amazon SNS to send alerts in case of unexpected behavior (a minimal alarm sketch appears at the end of this section). You can use any monitoring solution on AWS, and you can also continue to use any existing monitoring and alerting tools that your company uses to monitor your instance metrics, as well as guest OS statistics and application health.

Backups

After you have switched to your DR environment, you should continue to make regular backups. Testing backup and restore regularly is essential as a fallback solution. AWS gives you the flexibility to perform frequent, inexpensive DR tests without needing the DR infrastructure to be "always on."

User access

You can secure access to resources in your DR environment by using AWS Identity and Access Management (IAM). With IAM, you can create role-based and user-based security policies that segregate user responsibilities and restrict user access to specified resources and tasks in your DR environment.

System access

You can also create roles for your Amazon EC2 resources so that only users who are assigned to specified roles can perform defined actions on your DR environment, such as accessing an Amazon S3 bucket or repointing an Elastic IP address.

Automation

You can automate the deployment of applications onto AWS-based servers and your on-premises servers by using configuration management or orchestration software. This allows you to handle application and configuration change management across both environments with ease. There are several popular orchestration software options available; for a list of solution providers, see the AWS Partner Directory.3
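As a hedged illustration of the monitoring and alerting guidance above, the following sketch (AWS SDK for Python, boto3) creates an SNS topic and a CloudWatch alarm on the classic ELB HealthyHostCount metric. The topic name, email endpoint, load balancer name, Region, and thresholds are placeholders, not recommendations.

import boto3

REGION = "us-east-1"  # placeholder Region of the DR environment
sns = boto3.client("sns", region_name=REGION)
cloudwatch = boto3.client("cloudwatch", region_name=REGION)

# Create (or reuse) a topic that notifies the operations team by email.
topic_arn = sns.create_topic(Name="dr-environment-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

# Alarm when the DR load balancer reports fewer than one healthy host
# for a five-minute period.
cloudwatch.put_metric_alarm(
    AlarmName="dr-no-healthy-hosts",
    Namespace="AWS/ELB",
    MetricName="HealthyHostCount",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "dr-web-elb"}],
    Statistic="Minimum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[topic_arn],
)

The same pattern applies to replication lag, queue depth, or any custom metric you publish from the DR environment.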
AWS CloudFormation works in conjunction with several tools to provision infrastructure services in an automated way. Higher levels of abstraction are also available with AWS OpsWorks or AWS Elastic Beanstalk. The overall goal is to automate your instances as much as possible. For more information, see the Architecting for the Cloud: Best Practices whitepaper.

You can use Auto Scaling to ensure that your pool of instances is appropriately sized to meet demand, based on the metrics that you specify in Amazon CloudWatch. This means that in a DR situation, as your user base starts to use the environment more, the solution can scale up dynamically to meet this increased demand. After the event is over and usage potentially decreases, the solution can scale back down to a minimum level of servers (a brief scaling sketch appears at the end of this paper).

Software Licensing and DR

Ensuring that you are correctly licensed for your AWS environment is as important as licensing for any other environment. AWS provides a variety of models to make licensing easier for you to manage. For example, "Bring Your Own License" is possible for several software components or operating systems. Alternatively, there is a range of software for which the cost of the license is included in the hourly charge; this is known as "License Included." "Bring Your Own License" enables you to leverage your existing software investments during a disaster. "License Included" minimizes up-front license costs for a DR site that doesn't get used on a day-to-day basis. If at any stage you are in doubt about your licenses and how they apply to AWS, contact your license reseller.

Conclusion

Many options and variations for DR exist. This paper highlights some of the common scenarios, ranging from simple backup and restore to fault-tolerant multi-site solutions. AWS gives you fine-grained control and many building blocks to build the appropriate DR solution, given your DR objectives (RTO and RPO) and budget. The AWS services are available on demand, and you pay only for what you use. This is a key advantage for DR, where significant infrastructure is needed quickly, but only in the event of a disaster. This whitepaper has shown how AWS provides flexible, cost-effective infrastructure solutions, enabling you to have a more effective DR plan.

3 Solution providers can be found at http://aws.amazon.com/solutions/solution-providers/

Further Reading

Amazon S3 Getting Started Guide: http://docs.amazonwebservices.com/AmazonS3/latest/gsg/
Amazon EC2 Getting Started Guide: http://docs.amazonwebservices.com/AWSEC2/latest/GettingStartedGuide/
AWS Partner Directory (for a list of AWS solution providers): http://aws.amazon.com/solutions/solution-providers/
AWS Security and Compliance Center: http://aws.amazon.com/security/
AWS Architecture Center: http://aws.amazon.com/architecture
Whitepaper: Designing Fault-Tolerant Applications in the AWS Cloud
Other AWS technical whitepapers: http://aws.amazon.com/whitepapers

Document Revisions

We've made the following changes to this whitepaper since its original publication in January 2012:

• Updated information about AWS regions
• Added information about new services: Amazon Glacier, Amazon Redshift, AWS OpsWorks, AWS Elastic Beanstalk, and Amazon DynamoDB
• Added information about elastic network interfaces (ENIs)
• Added information about various features of AWS services for DR scenarios using multiple AWS regions
• Added information about AWS Storage Gateway virtual tape libraries
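As a companion to the Auto Scaling guidance above, the following sketch (AWS SDK for Python, boto3) scales a standby Auto Scaling group up to production capacity during a failover. The group name, Region, and capacity values are placeholders; in practice they would come from your capacity planning and RTO targets.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # placeholder Region

# Day to day the standby fleet runs at a minimal size; during a DR event,
# raise the minimum, maximum, and desired capacity to absorb production traffic.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="dr-web-asg",   # placeholder group name
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# After the event is over, the same call with smaller values scales the fleet
# back down to its standby footprint.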
|
General
|
consultant
|
Best Practices
|
Using_AWS_in_the_Context_of_Australian_Privacy_Considerations
|
Using AWS in the Context of Australian Privacy Considerations July 2020 (Please see https://awsamazoncom/compliance/resources/ for the latest version of this paper) Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties repres entations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its custome rs © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Considerations Relevant to Privacy and Data Protection 2 AWS Shared Responsibility Approach to Managing Clo ud Security 3 Will customer content be secure? 3 What does the shared responsibility model mean for the security of customer content? 4 Understanding security OF the cloud 4 Understanding security IN the cloud 5 AWS Regions: Where will content be stored? 6 How can customers select their Region(s)? 7 Transfer of personal information cross border 9 Who can access customer content? 9 Customer control over content 9 AWS access to customer content 10 Government rights of access 10 Privacy and Data Protection in Australia: The Privacy Act 11 Privacy breaches 21 Considerations 22 Conclusion 22 Further Reading 22 AWS Artifact 23 Document Revisions 23 Abstract This document provides information to assist customers who want to use AWS to store or process content containing personal information in the context of key privacy considerations and the Australian Privacy Act 1988 (Cth) It will help customers understand: • The way AWS services operate including how customers can address security and encrypt their content • The geographic locations where customers can choose to store c ontent and other relevant considerations • The respective roles the customer and AWS each play in managing and securing content stored on AWS services Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 1 Introduction This whitepaper focuses on typical questions asked by AWS customers whe n they are considering the implications of the Australian Privacy Act on their use of AWS services to store or process content containing personal information There will also be other relevant considerations for each customer to address for example a cus tomer may need to comply with industry specific requirements and the laws of other jurisdictions where that customer conducts business or contractual commitments a customer makes to a third party This paper is provided solely for informational purposes It is not legal advice and should not be relied on as legal advice As each customer’s requirements will differ AWS strongly encourages its customers to obtain appropriate advice on their implementation of privacy and data protection requirements and o n applicable laws and other requirem ents relevant to their business When we refer to content in this paper we mean software (including virtual machine images) data text audio video images and other content that a customer or any end user stores or processes using the AWS services For example a customer’s content 
includes objects that the customer stores using Amazon Simple Storage Service files stored on an Amazon Elastic Block Store volume or the contents of an Amazon DynamoDB database table Such content may but will not necessarily include personal information relating to that customer its end users or third parties The terms of the AWS Customer Agreement or any other relevant agreement with us governing the use of AWS services apply to customer content Customer content does not include information that a customer provides to us in connection with the creation or administration of its AWS accounts such as a customer’s names phone numbers email addresses and billing information —we ref er to this as account information and it is governed by the AWS Privacy Notice Our business changes constantly and our Privacy Notice may also change You should check our website frequently to see recent changes Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 2 Considerations Relevant to Privacy and Data Protection Storage of content presents all organizations with a number of common practical matters to consider including: • Will the content be secure? • Where will content be stored? • Who will have access to content? • What laws and regulations apply to the content and what is needed to comply with these? These considerations are not new and are not cloud specific They are relevant to internally hosted and operated systems as well as traditional third party hosted services Each may involve storage of content on third party equipment or on third party premises with that content managed accessed or used by third party personnel When using AWS services each AWS customer maintains ownership and contro l of their content including control over: • What content they choose to store or process using AWS services • Which AWS services they use with their content • The Region(s) where their content is stored • The format structure and security of their content including whether it is masked anonymized or encrypted • Who has access to their AWS accounts and content and how those access rights are granted managed and revoked Because AWS customers retain ownership and control over their content within the AWS environment they also retain responsibilities relating to the security of that content as part of the AWS shared responsibility model This shared responsibility model is fundamental to understanding the respective roles of the customer and AWS in th e context of privacy and data protection requirements that may apply to content that customers choose to store or process using AWS services Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 3 AWS Shared Responsibility Approach to Managing Cloud Security Will customer content be secure? 
Moving IT infrastru cture to AWS creates a shared responsibility model between the customer and AWS as both the customer and AWS have important roles in the operation and management of security AWS operates manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the AWS services operate The customer is responsible for management of the guest operating system (including updates and security patches to the guest operating system) and associated application software as well as the configuration of the AWS provided security group firewall and other security related features The customer will generally connect to the AWS environment through services the customer acquires from third par ties (for example internet service providers) AWS does not provide these connections and they are therefore part of the customer’s area of responsibility Customers should consider the security of these connections and the security responsibilities of s uch third parties in relation to their systems The respective roles of the customer and AWS in the shared responsibility model are shown in Figure 1: Figure 1 – Shared Responsibility Model Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 4 What does the shared responsibility model mean for the security of customer content? When evaluating the security of a cloud solution it is important for customers to understand and distinguish between: • Security measures that the cloud service provider (AWS) implements and operates – security of the cloud • Security measures that the customer implements and operates related to the security of customer content and applications that make use of AWS services – security in the cloud While AWS manages security of the cloud security in the cloud is the responsibilit y of the customer as customers retain control of what security they choose to implement to protect their own content applications systems and networks – no differently than they would for applications in an on site data center Understanding security OF the cloud AWS is responsible for managing the security of the underlying cloud environment The AWS cloud infrastructure has been architected to be one of the most flexible and secure cloud computing environments available design ed to provide optimum availability while providing complete customer segregation It provides extremely scalable highly reliable services that enable customers to deploy applications and content quickly and securely at massive global scale if necessary AWS services are content agnostic in that they offer the same high level of security to all customers regardless of the type of content being stored or the geographical region in which they store their content AWS’s world class highly secure data cent ers utilize state ofthe art electronic surveillance and multi factor access control systems Data centers are staffed 24x7 by trained security guards and access is authorized strictly on a least privileged basis For a complete list of all the security m easures built into the core AWS cloud infrastructure and services please read our Overview of Security Processes whitepaper We are vigilant about our customers' security and have implemented sophisticated technical and physical measures against unauthorized access Customers can validate the security controls in place within the AWS environment through AWS certifications and re ports including the AWS System & Organization Control 
(SOC) 1 2 1 and 3 2 reports ISO 27001 3 27017 4 27018 5 and 900 16 certifications and PCI DSS 7 Attestation Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 5 of Compliance Our ISO 27018 certification demonstrates that AWS has a system of controls in place that specifically address the privacy protection of customer content These reports and certifications are produced by independent third party auditors and attest to the design and operating effectiveness of AWS security controls AWS compliance certifications and reports can be requested at https://awsamazoncom/compliance/contact More information on AWS compliance certifications reports and alignment with best practices and s tandards can be found at AWS’ compliance site Understanding security IN the cloud Customers retain ownership and control of their content when using AWS services Customers rather than AWS determine what content they store or process using AWS services Because it is the customer who decides what content to store or process using AWS services only the customer can determine what level of security is appropriate for the content they store and process usin g AWS Customers also have complete control over which services they use and whom they empower to access their content and services including what credentials will be required Customers control how they configure their environments and secure their conte nt including whether they encrypt their content (at rest and in transit) and what other security features and tools they use and how they use them AWS does not change customer configuration settings as these settings are determined and controlled by th e customer AWS customers have the complete freedom to design their security architecture to meet their compliance needs This is a key difference from traditional hosting solutions where the provider decides on the architecture AWS enables and empowers t he customer to decide when and how security measures will be implemented in the cloud in accordance with each customer's business needs For example if a higher availability architecture is required to protect customer content the customer may add redun dant systems backups locations network uplinks etc to create a more resilient high availability architecture If restricted access to customer content is required AWS enables the customer to implement access rights management controls both on a syst ems level and through encryption on a data level To assist customers in designing implementing and operating their own secure AWS environment AWS provides a wide selection of security tools and features customers can use Customers can also use their own security tools and controls including a wide variety of thirdparty security solutions Customers can configure their AWS services to leverage a range of such security features tools and controls to protect their content including sophisticate d identity and access management tools security capabilities Amazon Web Services Using AWS i n the Context of Australian Privacy Considerations 6 encryption and network security Examples of steps customers can take to help secure their content include implementing: • Strong password policies assigning appropriate permissions to users and taking robust steps to protect their access keys • Appropriate firewalls and network segmentation encrypting content and properly architecting systems to decrease the risk of data loss and unauthorized access Because customers rather than AWS control the se important factors 
customers retain responsibility for their choices and for security of the content they store or process using AWS services or that they connect to their AWS infrastructure such as the guest operating system applications on their c ompute instances and content stored and processed in AWS storage databases or other services AWS provides an advanced set of access encryption and logging features to help customers manage their content effectively including AWS Key Management Servic e and AWS CloudTrail To assist customers in integrating AWS security controls into their existing control frameworks and help customers design and execute security assessments of their organization’s use of AWS services AWS publishes a number of whitepapers relating to security governance risk and compliance; and a number of checklists and best practices Customers are also free to design and execute security assessments according to their own preferences and can request permission to conduct scans of their cloud infrastructure as long as those scans are limited to the customer’s compute instances and do not violate the AWS Acceptable Use Policy AWS Regions: Where will content be stored? AWS data centers are built in clusters in various global regions We refer to each of our data center clusters in a given country as an AWS Region Customers have access to a number of AWS Regions around t he world 8 including an Asia Pacific (Sydney) Region Customers can choose to use one Region all Regions or any combination of AWS Regions Figure 2 shows AWS Region locations as at December 2019 9 Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 7 Figure 2 – AWS Global Regions AWS cu stomers choose t he AWS Region or Regions in which their content and servers will be located This allows customers with geographic specific requirements to establish environments in a location or locations of their choice For example AWS customers in Australia can choos e to deploy their AWS services exclusively in one AWS Region such as the Asia Pacific (Sydney) Region and store their content onshore in Australia if this is their preferred location If the customer makes this choice AWS will not move their content from Australia without the customer’s consent except as legally required Customers always retain control of which AWS Region(s) are used to store and process content AWS only stores and processes each customers’ content in the AWS Region(s) and using the s ervices chosen by the customer and otherwise will not move customer content without the customer’s consent except as legally required How can customers select their Region(s)? 
When using the AWS management console or in placing a request through an AW S Application Programming Interface (API) the customer identifies the particular AWS Region(s) where it wishes to use AWS services Amazon Web Services Using AWS in the Contex t of Australian Privacy Considerations 8 Figure 3 provides an example of the AWS Region selection menu presented to customers when uploading content to an AWS sto rage service or provisioning compute resources using the AWS management console Figure 3 – Selecting AWS Global Regions in the AWS Management Console Customers can also prescribe the AWS Region to be used for their compute resources by taking advantage of the Amazon Virtual Private Cloud (VPC) capability Amazon VPC lets the customer provision a private isolated section of the AWS Cloud where the customer can launch AWS resources in a virtual network that the customer defines With Amazon VPC customers can define a virtual network topology that closely resembles a traditional network that might operate in their own data center Any compute and other resources launched by the customer into the VPC will be located in the AWS Region designated by the customer For example by creating a VPC in the Asia Pacific (Sydney) Region and providing a link (either a VPN or Direct Connect ) back to the customer's data center all compute resources launched into that VPC would only reside in the Asia Pacific (Sydney) Region This option can also be leveraged for other AWS Regions Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 9 Transfer of personal information cros s border In 2016 the European Commission approved and adopted the new General Data Protection Regulation (GDPR) The GDPR replaced the EU Data Protection Directive as well as all local laws relating to it All AWS services comply with the GDPR AWS provi des customers with services and resources to help them comply with GDPR requirements that may apply to their operations These include AWS’ adherence to the CISPE code of conduct granular data access controls monitoring and l ogging tools encryption key management audit capability adherence to IT security standards and AWS’ C5 attestations For additional information please visit the AWS General Data Protection Regulation (GDPR) Center and see our Navigating GDPR Compliance on AWS Whitepaper When using AW S services customers may choose to transfer content containing personal information cross borde r and they will need to consider the legal requirements that apply to such transfers AWS provides a Data Processing Addendum that includes the Standard Contractual Clauses 2010/87/EU (often referred to as Model Clauses ) to AWS customers transferring cont ent containing personal data (as defined in the GDPR) from the EU to a country outside of the European Economic Area such as Australia With our EU Data Processing Addendum and Model Clauses AWS customers — whether established in Europe or a global company operating in the European Economic Area can continue to run their global operations using AWS in full compliance with the GDPR The AWS Data Processing Addendum is incorporated in the AWS Service Terms and applies automatically to the extent the GDPR app lies to the customer’s processing of personal data on AWS Who can access customer content? 
Customer control over content Customers using AWS maintain and do not release effective control over their content within the AWS environment They can: • Determine w here their content will be located for example the type of storage they use on AWS and the geographic location (by AWS Region) of that storage Amazon Web Services Using AWS in the Context of Austral ian Privacy Considerations 10 • Control the format structure and security of their content including whether it is masked anonymized or encry pted AWS offers customers options to implement strong encryption for their customer content in transit or at rest; and also provides customers with the option to manage their own encryption keys or use third party encryption mechanisms of their choice • Manage other access controls such as identity access management permissions and security credentials This allows AWS customers to control the entire life cycle of their content on AWS and manage their content in accordance with their own specific needs including content classification access control retention and disposal AWS access to customer content AWS makes available to each customer the compute storage database networking or other services as described on our website Customers have a numb er of options to encrypt their content when using the services including using AWS encryption features (such as AWS Key Management Service) managing their own encryption keys or using a third party encryption mechanism of their own choice AWS does not access or use customer content without the customer’s consent except as legally required AWS never uses customer content or derives information from it for other purposes such as marketing or advertising Government rights of access Queries are often rai sed about the rights of domestic and foreign government agencies to access content held in cloud services Customers are often confused about issues of data sovereignty including whether and in what circumstances governments may have access to their conte nt The local laws that apply in the jurisdiction where the content is located are an important consideration for some customers However customers also need to consider whether laws in other jurisdictions may apply to them Customers should seek advice t o understand the application of relevant laws to their business and operations AWS policy on granting government access AWS is vigilant about customers' security and does not disclose or move data in response to a request from the US or other government unless legally required to do so in order to comply with a legally valid and binding order such as a subpoena or a court order or as is otherwise required by applicable law Non governmental or Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 11 regulatory bodies typically must use recognized internation al processes such as Mutual Legal Assistance Treaties with the US government to obtain valid and binding orders Additionally our practice is to notify customers where practicable before disclosing their content so they can seek protection from disclo sure unless we are legally prohibited from doing so or there is clear indication of illegal conduct in connection with the use of AWS services For additional information please visit the Amazon Information Requests Portal online Privacy and Data Protection in Australia: The Privacy Act This part of the paper discusses aspects of the Australian Privacy Act 1988 (Cth) applying from 12 March 2014 when a number of changes took effect 
From 12 March 2014 the main requirements in the Privacy Act for handling personal information are set out in the Australian Privacy Princip les (APPs) The APPs impose requirements for collecting managing dealing with using disclosing and otherwise handling personal information Unlike other privacy regimes the APPs do not distinguish between a data controller who has control over perso nal information and the purposes for which it can be used and a data processor that processes information at the direction of and on behalf of a data controller The APPs do however apply in different ways to different types of entities For example the way the APP requirements apply to each organization depends on the role they play in relation to the relevant personal information Obligations vary depending on whether they collect use transfer or disclose personal information AWS appreciates that it s services are used in many different contexts for different business purposes and that there may be multiple parties involved in the data lifecycle of personal information included in customer content stored or processed using AWS Services For simplicit y the guidance included in the table below assumes that in the context of the customer content stored on the AWS services the customer: • Collects personal information from its end users and determines the purpose for which the customer requires and will use the information • Has the capacity to control who can access update and use the personal information collected Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 12 • Manages the relationship with the individual about whom the personal information relates including by communicating with the individual as required to comply with any relevant disclosure and consent requirements Customers may in fact work with or rely on third parties to discharge these responsibilities but the customer rather than AWS would manage its relationships with those third parties We summaries’ in the table below some APP requirements particularly important for a customer to consider if using AWS to store personal information We also discuss aspect of the AWS Services relevant to these APPs APP Summary of APP requ irements Considerations APP 12 An entity must take such steps as are reasonable in the circumstances to implement practices procedures and systems relating to the entity's functions or activities to ensure compliance with the APPs and to enable the entity to deal with inquiries or complaints about compliance with the APPs The APPs apply differently to each party reflecting the level of control and access each party has over the personal information Customer: The APPs will impose more extensive obligations on the customer than AWS This is because the customer has control of their content and is able to communicate directly with individuals about treatment of their personal information AWS: To the extent the APPs may apply to AWS they would apply i n a more limited way As explained above the customer rather than AWS knows what type of content the customer chooses to store in AWS and the customer retains control over how their content is stored used and protected from disclosure Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 13 APP Summary of APP requ irements Considerations APP 13 16 An entity must maintain a privacy policy addressing particular matters about how the entity manages personal information and comply with requirements for making that policy available 
Customer: Customers are responsible for maintaining their own privacy po licy that complies with the APPs AWS: In the context of customer content AWS does not know what content is uploaded by the customer and does not control that content For this reason the AWS Privacy Notice cannot address how each customer chooses to use personal information included in their customer content However customers may provide information to AWS in connection with the creation or administration of AWS accounts The AWS Privacy Notice describes how AWS collects and uses account information that it receives Amazon Web Services Using AWS in the Context of Australian Privacy Consideratio ns 14 APP 5 Where an entity collects personal information about an individual the entity must take such steps as are reasonable in the circumstances to tell or otherwise ensure the individual is aware of certain matters Customer: The customer determines and controls when how and why it collects personal information from individuals and decides whether it will include that personal information in customer content it stores or processes using the AWS services The customer may also need to ensure it discloses the purposes for which it collects that data to the relevant data subjects obtains the information from a permitted source and that it only uses the information for a permitted purpose As between the customer and AWS the customer has a relationship with the individuals whose personal information the customer stores on AWS and therefore the customer is able to communicate directly with them about collection and treatment of their personal i nformation The customer rather than AWS will also know the scope of any notifications given to or consents obtained by the customer from such individuals relating to the collection of their personal information Consequently the customer is responsible for meeting any APP requirement to notify individuals whose personal information the customer is storing on AWS about all relevant matters required under APP5 including if applicable about the customer's use of AWS to store that personal information AWS: AWS does not know when a customer chooses to upload to AWS content that may contain personal information Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 15 APP Summary of APP requ irements Considerations AWS also does not collect personal information from individuals whose personal information is included in content a cu stomer stores or processes using AWS and AWS has no contact with those individuals Therefore AWS is not required and is unable in the circumstances to communicate with the relevant individuals AWS only uses customer content to provide the AWS services and does not use customer content for any other purpose Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 16 APP Summary of APP requ irements Considerations APP 6 Rules about the circumstances in which an entity that collects personal information may use or disclose the personal information that it holds Customer: The customer determines and controls w hy it collects personal information what it will be used for who it can be used by and who it is disclosed to The customer must ensure it only does so for permitted purposes If the customer chooses to include personal information in customer content st ored in AWS the customer controls the format and structure of its content and how it is protected from disclosure to unauthorized parties including whether it is 
anonymized or encrypted The customer will know whether it uses the AWS services to store or process customer content containing personal information and therefore is best placed to inform individuals that it will use AWS as a service provider if required AWS: AWS only uses customer content to provide the AWS services selected by each customer to that customer and does not use customer content for other purposes Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 17 APP 8 Rules about disclosing personal information to an overseas recipient and exceptions to those rules Customer: The customer can choose the AWS Region or Regions in which their content will be located and can choose to deploy their AWS services exclusively in a single Region if preferred including maintaining their content in Australia if required AWS services are structured so that a customer maintains effective control of customer content regardless of what Region they use for their content The customer should consider whether it should disclose to individuals the locations in which it stores or processes their personal information and obtain any required consents relating to such locations from the relevant individuals if necessary As between the customer and AWS the customer has a relationship with the individuals whose personal information t he customer stores on AWS and therefore the customer is able to communicate directly with them about such matters AWS: AWS only stores and processes each customers’ content in the AWS Region(s) and using the services chosen by the customer and otherw ise will not move customer content without the customer’s consent except as legally required If a customer chooses to store content in more than one Region or copy or move content between Regions that is solely the customer’s choice and the customer w ill continue to maintain effective control of its content wherever it is stored and processed General: It is important to Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 18 APP Summary of APP requ irements Considerations highlight that an entity is only required to comply with APP 8 if there is a “disclosure” by that entity to an overseas recipient The Office of the Information Commissioner (OAIC) has said disclosure generally occurs when an entity releases personal information from its effective control The AWS service is structured so that a customer maintains effective control of customer content regardless of what AWS Region they use for their content OAIC guidance indicates that information provided to a cloud service provider subject to adequate security and strict user control may be a “use” by the customer and not a “disclosure” Accordingly using AWS services to store personal information outside Australia at the choice of the customer may be a “use” not a “disclosure” of customer content Customers should seek legal advice regarding this if they feel it may be relevant to the way they propo se to use the AWS Services Amazon Web Services Using AWS in the Context of Australian Privacy Considerations 19 APP 10 12 Rules about protecting the integrity of personal information including its quality security and allowing access and corrections and destroying or de identifying it Customers: Customers are responsible for their content and for security in the cloud When a customer chooses to store or process content containing personal information using AWS the customer has control over the quality of that content and the customer 
retains access to it and can correct it. This means that the customer must take all required steps to ensure that personal information included in customer content is accurate, complete, not misleading, and kept up to date. In addition, as between the customer and AWS, the customer has a relationship with the individuals whose personal information is included in customer content stored or processed using AWS services. The customer, rather than AWS, is therefore able to work with relevant individuals to provide them access to, and the ability to correct, personal information included in customer content. Only the customer knows why personal information included in customer content stored on AWS was collected, and only the customer knows when it is no longer necessary to retain that personal information for legitimate purposes. The customer should delete or anonymize the personal information when no longer needed.

AWS: AWS is responsible for security of the underlying cloud environment. AWS's SOC 1 Type 2 report includes controls that provide reasonable assurance that data integrity is maintained through all phases, including transmission, storage, and processing. For a complete list of all the security measures built into the core AWS cloud infrastructure and services, please read our Overview of Security Processes whitepaper.1 Customers can validate the security controls in place within the AWS environment through AWS certifications and reports, including the AWS System and Organization Control (SOC) 1, 2, and 3 reports, ISO 27001, 27017, and 27018 certifications, and the PCI DSS Attestation of Compliance. AWS only uses customer content to provide the AWS services selected by each customer to that customer, and AWS has no contact with the individuals whose personal information is included in content a customer stores or processes using the AWS services. Given this, and the level of control customers enjoy over customer content, AWS is not required, and is unable in the circumstances, to provide such individuals with access to or the ability to correct their personal information. The AWS services provide the customer with controls to enable the customer to delete content, as described in the documentation available at http://aws.amazon.com/documentation

Privacy breaches

Given that customers maintain control of their content when using AWS, customers retain the responsibility to monitor their own environment for privacy breaches and to notify regulators and affected individuals as required under applicable law. Only the customer is able to manage this responsibility. A customer's AWS access keys can be used as an example to help explain why the customer, rather than AWS, is best placed to manage this responsibility. Customers control access keys and determine who is authorized to access their AWS account. AWS does not have visibility of access keys, or of who is and who is not authorized to log into an account. Therefore, the customer is responsible for monitoring use, misuse, distribution, or loss of access keys.
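As a practical illustration of this customer responsibility, the following minimal sketch (using the AWS SDK for Python, boto3) lists the access keys in an account and flags keys that are old or appear unused. It assumes the credentials running it have IAM read permissions; the 90-day threshold and the print-only handling are illustrative choices, not AWS recommendations. Customers might run something like this on a schedule alongside mechanisms such as IAM credential reports.

    # Hypothetical illustration: flag IAM access keys that are old or unused.
    # The 90-day threshold is an example value, not a regulatory requirement.
    from datetime import datetime, timedelta, timezone
    import boto3

    iam = boto3.client("iam")
    threshold = datetime.now(timezone.utc) - timedelta(days=90)

    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            for key in keys:
                last_used = iam.get_access_key_last_used(
                    AccessKeyId=key["AccessKeyId"]
                )["AccessKeyLastUsed"].get("LastUsedDate")
                if key["CreateDate"] < threshold or last_used is None or last_used < threshold:
                    # A real review process would rotate or deactivate the key
                    # and record the finding; here we simply report it.
                    print(user["UserName"], key["AccessKeyId"], key["Status"], last_used)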
The Privacy Act introduced a new Australian notifiable data breaches (NDB) scheme, which came into force on February 22, 2018. AWS offers two types of Australian Notifiable Data Breaches (ANDB) Addendum to customers who are subject to the Privacy Act and are using AWS to store and process personal information covered by the NDB scheme. The ANDB Addendum addresses customers' need for notification if a security event affects their data. AWS has made both types of ANDB Addendum available online as click-through agreements in AWS Artifact (the customer-facing audit and compliance portal that can be accessed from the AWS Management Console). In AWS Artifact, customers can review and activate the relevant ANDB Addendum for those AWS accounts they use to store and process personal information covered by the NDB scheme. The first type, the Account ANDB Addendum, applies only to the specific individual account that accepts it; the Account ANDB Addendum must be separately accepted for each AWS account that a customer requires to be covered. The second type, the Organizations ANDB Addendum, once accepted by a master account in AWS Organizations, applies to the master account and all member accounts in that AWS Organization. If a customer does not need or want to take advantage of the Organizations ANDB Addendum, they can still accept the Account ANDB Addendum for individual accounts. ANDB Addendum frequently asked questions are available online at https://aws.amazon.com/artifact/faq/

Considerations

This whitepaper does not discuss other Australian privacy laws, aside from the Privacy Act, that may also be relevant to customers, including state-based laws and industry-specific requirements. The relevant privacy and data protection laws and regulations applicable to individual customers will depend on several factors, including where a customer conducts business, the industry in which it operates, the type of content they wish to store, where or from whom the content originates, and where the content will be stored. Customers concerned about their Australian privacy regulatory obligations should first ensure they identify and understand the requirements applying to them, and seek appropriate advice.

Conclusion

For AWS, security is always our top priority. We deliver services to millions of active customers, including enterprises, educational institutions, and government agencies in over 190 countries. Our customers include financial services providers and healthcare providers, and we are trusted with some of their most sensitive information. AWS services are designed to give customers flexibility over how they configure and deploy their solutions, as well as control over their content, including where it is stored, how it is stored, and who has access to it. AWS customers can build their own secure applications and store content securely on AWS.

Further Reading

To help customers further understand how they can address their privacy and data protection requirements, customers are encouraged to read the risk, compliance, and security whitepapers, best practices, checklists, and guidance published on the AWS website. This material can be found at http://aws.amazon.com/compliance and http://aws.amazon.com/security

As of the date of this document, specific whitepapers about privacy and data protection considerations are also available for the following countries or regions: California, European Union, Germany, New Zealand, Hong Kong, Japan, Malaysia, Singapore, and the Philippines.

AWS Artifact

Customers can review and download reports and details about more than 2,500 security controls by using AWS Artifact, the automated compliance reporting portal available in the AWS Management Console.
The AWS Artifact portal provides on-demand access to AWS' security and compliance documents, including the ANDB Addendum, and certifications from accreditation bodies across geographies and compliance verticals.

AWS also offers training to help customers learn how to design, develop, and operate available, efficient, and secure applications on the AWS cloud and gain proficiency with AWS services and solutions. We offer free instructional videos, self-paced labs, and instructor-led classes. Further information on AWS training is available at: http://aws.amazon.com/training/

AWS certifications certify the technical skills and knowledge associated with the best practices for building secure and reliable cloud-based applications using AWS technology. Further information on AWS certifications is available at: http://aws.amazon.com/certification/

If you require further information, please contact AWS at: https://aws.amazon.com/contact-us/ or contact your local AWS account representative.

Document Revisions

July 2020: Fifth publication
May 2018: Fourth publication
March 2018: Third publication
February 2016: Second publication
March 2014: First publication

Notes

1. https://aws.amazon.com/compliance/soc-faqs/
2. http://d0.awsstatic.com/whitepapers/compliance/soc3_amazon_web_services.pdf
3. http://aws.amazon.com/compliance/iso-27001-faqs/
4. http://aws.amazon.com/compliance/iso-27017-faqs/
5. http://aws.amazon.com/compliance/iso-27018-faqs/
6. https://aws.amazon.com/compliance/iso-9001-faqs/
7. https://aws.amazon.com/compliance/pci-dss-level-1-faqs/
8. AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. AWS China (Beijing) is also an isolated AWS Region. Customers who wish to use the AWS China (Beijing) Region are required to sign up for a separate set of account credentials unique to the China (Beijing) Region.
9. For a real-time location map, please visit: https://aws.amazon.com/about-aws/global-infrastructure/
|
General
|
consultant
|
Best Practices
|
Using_AWS_in_the_Context_of_Common_Privacy__Data_Protection_Considerations
|
Using AWS in the Context of Common Privacy and Data Protection Considerations First Published September 2016 Updated September 28 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Considerations relevant to privacy and data protection 2 The AWS Shared Responsibility approach to managing cloud security 3 AWS Regions: Where will content be stored? 6 How can customers select their Region(s)? 7 Transfer of personal data cross border 8 Who can access customer content? 9 Customer control over content 9 AWS access to customer content 10 Government rights of access 10 AWS policy on granting government access 10 Common privacy and data protection considerations 11 Privacy breaches 17 Considerations 17 Conclusion 17 Contributors 18 Further reading 18 Document revisions 19 Abstract This document provides information to assist customers who want to use Amazon Web Services ( AWS ) to store or process content containing personal data in the context of common privacy and data protection considerations It help s customers understand: • The way AWS services operate including how customers can address security and encrypt their content • The geographic locations where customers can choose to store content and other relevant considerations • The respective roles the customer and AWS each play in managing and securing content stored on AWS Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 1 Introduction This whitepaper focuses on typical questions asked by AWS customers when they are considering privacy and data protection requirements relevant to their use of AWS services to store or process content containing personal data There are other relevant cons iderations for each customer to address ; for example a customer may need to comply with industry specific requirements the laws of other jurisdictions where that customer conducts business or contractual commitments a customer makes to a third party This white paper is provided solely for informational purposes It is not legal advice and should not be relied on as legal advice As each customer’s requirements differ AWS strongly encourages its customers to obtain appropriate advice on their implementa tion of privacy and data protection requirements and on applicable laws and other requirements relevant to their business The term “content ” in this white paper refers to software (including virtual machine images) data text audio video images and o ther content that a customer or any end user stores or processes using AWS For example a customer’s content includes objects that the customer stores using Amazon Simple Storage Service (Amazon S3) files stor ed on an Amazon Elastic Block Store (Amazon EBS) volume or the contents of an Amazon DynamoDB database table Such content may 
but will not necessarily include personal data relating to that customer its end users or third parties The terms of the AWS Customer Agreement or any other relevant agreement with AWS governing the use of AWS services apply to customer content Customer content does not include data that a customer provides to AWS in connection with the creation or administration of its AWS accounts such as a customer’s names phone numbers email addresses and billing information AWS refers to this as account information and it is governed by the AWS Privacy Notice AWS changes constantly and the AWS Privacy Notice may also change Check the website frequently to see recent changes Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 2 Considerations relevant to privac y and data protection Storage of content presents all organizations with a number of common practical matters to consider including: • Will the content be secure? • Where will content be stored? • Who will have access to content? • What laws and regulations apply to the content and what is needed to comply with these? These considerations are not new and are not cloud specific They are relevant to internally hosted and operated systems as well as traditional third party hosted services Each may involve storage of content on third party equipment or on third party premises with that content managed accessed or used by third party personnel When using AWS services each AWS customer maintains ownership and control of their content including con trol over: • What content they choose to store or process using AWS services • Which AWS services they use with their content • The Region(s) where their content is stored • The format structure and security of their content including whether it is masked anony mized or encrypted • Who has access to their AWS accounts and content and how those access rights are granted managed and revoked Because AWS customers retain ownership and control over their content within the AWS environment they also retain responsibili ties relating to the security of that content as part of the AWS “shared responsibility” model This shared responsibility model is fundamental to understanding the respective roles of the customer and AWS in the context of privacy and data protection requ irements that may apply to content that customers choose to store or process using AWS services Amazon Web Services Using AWS in the Context of Common Privacy and Dat a Protection Considerations 3 The AWS Shared Responsibility approach to managing cloud security Will customer content be secure? 
Moving IT infrastructure to AWS creates a shared responsibil ity model between the customer and AWS as both the customer and AWS have important roles in the operation and management of security AWS operates manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the AWS services operate The customer is responsible for management of the guest operating system (including updates and security patches to the guest operating system) and associated application software as well as the configuration of the AWS provided security group firewall and other security related features The customer generally connect s to the AWS environment through services the customer acquires from third parties (for example internet service provide rs) AWS does not provide these connections ; they are part of the customer’s area of responsibility Customers should consider the security of these connections and the security responsibilities of such third parties in relation to their systems The respe ctive roles of the customer and AWS in the shared responsibility model are shown in the following figure : The AWS Shared Responsibility Model Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 4 What does the shared responsibility model mean for the security of customer content? When evaluating the security of a cloud solution it is important for customers to understand and distinguish between: • Security measures that the cloud service provider (AWS) implements and operates – “security of the cloud” • Security measures that the customer implements and operates related to the security of customer content and applications that make use of AWS services – “security in the cloud” While AWS manages security of the cloud security in the cloud is the responsibility of the customer as customers retain contro l of what security they choose to implement to protect their own content applications systems and networks – no differently than they would for applications in an onsite data center Understanding security OF the cloud AWS is responsible for managing th e security of the underlying cloud environment The AWS Cloud infrastructure has been architected to be one of the most flexible and secure cloud computing environments available designed to provide optimum availability while providing complete customer s egregation It provides extremely scalable highly reliable services that enable customers to deploy applications and content quickly and securely at massive global scale if necessary AWS services are content agnostic in that they offer the same high l evel of security to all customers regardless of the type of content being stored or the geographical Region in which they store their content The AWS world class highly secure data centers utilize state oftheart electronic surveillance and multi factor access control systems Data centers are staffed 24 hours a day seven days a week by trained security guards and access is authorized strictly on a least privileged basis For a complete list of all the security measures built into the core AWS Cloud infrastructure and services see the Introduction to AWS Security whitepaper AWS is vigilant about its customers’ security and ha s implemented sophisticated technical and physical measures against unauthorized access Customers can validate the security controls in place within the AWS environment through AWS certifications and reports including the AWS 
System and Organization Control (SOC) 1 2 and 3 reports ISO 27001 27017 27018 and 9001 certifications and PCI DSS Attestation of Compliance Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 5 The AWS ISO 27018 certification demonstrates that AWS has a system of controls in place that specifically address the privacy protection of customer c ontent These reports and certifications are produced by independent third party auditors and attest to the design and operating effectiveness of AWS security controls AWS compliance certifications and reports can be requested at AWS Artifact More information on AWS compliance certifications reports and alignment with best practices and standards can be found on the AWS Compliance site Understanding s ecurity IN the cloud Customers retain ownership and control of their content when using AWS services Customers rather than AWS determine what content they store or process using AWS services Because it is the customer who decides what content to store or process using AWS services only the customer can determine what level of security is appropriate for the content they store and process using AWS Customers also have complete control over which services they use and whom they empower to access their content and services including what credentials are required Customers control how they configure their environments and secure their content including whether they encrypt their content (at rest and in transit) and what other security features and too ls they use and how they use them AWS does not change customer configuration settings as these settings are determined and controlled by the customer AWS customers have the complete freedom to design their security architecture to meet their compliance needs This is a key difference from traditional hosting solutions where the provider decides on the architecture AWS enables and empowers the customer to decide when and how security measures are implemented in the cloud in accordance with each custome r's business needs For example if a higher availability architecture is required to protect customer content the customer may add redundant systems backups locations network uplinks and so on to create a more resilient high availability architectur e If restricted access to customer content is required AWS enables the customer to implement access rights management controls both on a systems level and through encryption on a data level To assist customers in designing implementing and operating t heir own secure AWS environment AWS provides a wide selection of security tools and features customers can use Customers can also use their own security tools and controls including a wide variety of thirdparty security solutions Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 6 Customers can configure their AWS services to leverage a range of such security features tools and controls to protect their content including sophisticated identity and access management tools security capabilities encryption and network security Examples of steps customers can take to help secure their content include implementing: • Strong password policies assigning appropriate permissions to users and taking robust steps to protect their access keys • Appropriate firewalls and network segmentation encrypting co ntent and properly architecting systems to decrease the risk of data loss and unauthorized access Because customers rather than AWS control 
these important factors customers retain responsibility for their choices and for security of the content they store or process using AWS services or that they connect to their AWS infrastructure such as the guest operating system applications on their compute instances and content stored and processed in AWS storage databases or other services AWS provides an advanced set of access encryption and logging features to help customers manage their content effectively including AWS Key Management Service (AWS KMS) and AWS CloudTrail To assist customers in integrating AWS security controls into their existing control frameworks and help customers d esign and run security assessments of their organization’s use of AWS services AWS publishes a number of whitepapers relating to security governance risk and compliance; and a number of checklists and best practices Customers are also free to design and run security assessments according to their own preferences and can request permission to conduct scans of their cloud infrastructure as long as those scans are limited to the customer’ s compute instances and do not violate the AWS Acceptable Use Policy For more information on penetration testing see the Penetration Testing page AWS Regions: Where will content be stored? AWS data centers are built in clusters in various Regions Each of these data center clusters in a given country is referred to an “AWS Region” Customers have access to a number of AWS Regions around the world Customers can choose to use one Region all Regions or any combination of AWS Regions The following figure shows AWS Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 7 Region locations as of August 2021 For the most current information on AWS Regions see the Global Infrastructure page AWS Regions AWS cu stomers choose the AWS Region or Regions in which their content and servers are located This allows customers with geographic specific requirements to establish environments in a location or locations of their choice For example AWS customers in India can choose to deploy their AWS services exclusively in one AWS Region such as the Asia Pacific (Mumbai) Region and store their content onshore in India if this is their preferred location If the customer makes this choice AWS will not move their content from India without the customer’s consent except as legally required Customers always retain control of which AWS Region(s) are used to store and process content AWS stores and processes each customers’ content only in the AWS Region(s ) chosen by the customer and otherwise will not move customer content without the customer’s cons ent except as legally required How can customers select their Region(s)? 
When using the AWS Management Console or in placing a request through an AWS Application Programming Interface (API) the customer identifies the particular AWS Region(s) where it wants to use AWS services Amazon Web Services Using AWS in the Context of Common Privacy and D ata Protection Considerations 8 The following figure provides an example of the AWS Region selection menu presented to customers when uploading content to an AWS storage service or provisioning compute resources using the AWS Management Console Selecting AWS Regions in the AWS Management Console Customers can also prescribe the AWS Region to be used for their compute resources by taking advantage of the Amazon Virtual Private Cloud (VPC) capability Amazon VPC lets the customer provision a private isolated section of the AWS Cloud where the customer can launch AWS resources in a virtual network that the customer defines With Amazon VPC customers can define a virtual network topology that closely resembles a traditional network that might operate in their own data center Any compute and other resources launched by the customer into the VPC is located in the AWS Region designated by the customer For example by creating a VPC in the Asia Pacific ( Mumbai) Region and providing a link (either a VPN or Direct Connect ) back to the customer's data center all c ompute resources launched into that VPC would only reside in the Asia Pacific (Mumbai) Region This option can also be leveraged for other AWS Regions Transfer of personal data cross border In 2016 the European Commission approved and adopted the new Gen eral Data Protection Regulation (GDPR) The GDPR replaced the EU Data Protection Directive as well as all local laws relating to it All AWS services comply with the GDPR AWS Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 9 provides customers with services and resources to help them comply with GDPR requirements that may apply to their operations These include AWS adherence to the CISPE code of conduct granular data access controls monitoring and logging tools encryption key management audit capability adherence to IT security standards and AWS C 5 attestations For additional information see the AWS General Data Protection Regulation (GDPR) Center and the Navigating GDPR Compliance on AWS whitepaper When using AWS services customers may choose to transfer content containing personal data cross border and they need to consider the legal requirements that apply to such transfers AWS provides a Data Processing Addendum that includes the Standard Contractual Clauses 2010/87/EU (often referred to as “Model Clauses”) to AWS customers transfer ring content containing personal data (as defined in the GDPR) from the EU to a country outside of the European Economic Area With the AWS EU Data Processing Addendum and Model Clauses AWS customers — whether established in Europe or a global company oper ating in the European Economic Area —can continue to run their global operations using AWS in full compliance with the GDPR The AWS Data Processing Addendum is incorporated in the AWS Service Terms and applies automatically to the extent the GDPR applies t o the customer’s processing of personal data on AWS Who can access customer content? 
Customer control over content Customers using AWS maintain and do not release effective control over their content within the AWS environment They can: • Determine where t heir content will be located ; for example the type of storage they use on AWS and the geographic location (by AWS Region) of that storage • Control the format structure and security of their content including whether it is masked anonymized or encrypted AWS offers customers options to implement strong encryption for their customer content in transit or at rest and also provides customers with the option to manage their own encryption keys or use third party encryption mechanisms of their choice • Manage other access controls such as identity access management permissions and security credentials Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 10 This allows AWS customers to control the entire lifecycle of their content on AWS and manage their content in accordance with their own specific needs incl uding content classification access control retention and deletion AWS access to customer content AWS makes available to each customer the compute storage database networking or other services as described on our website Customers have a number of options to encrypt their content when using the services including using AWS encryption features (such as AWS KMS ) managing their own encryption keys or using a third party encryption mechanism of their own choice AWS does not access or use customer content without the customer’s consent except as legally required AWS never uses customer content or derives information from it for other purposes such as marketing or advertising Government rights of access Queries are often raised about the rights of domestic and foreign government agencies to access content held in cloud services Customers are often confused about issues of data sovereignty including whether and in what circumstances governments may have access to their content The local laws that apply in the jurisdiction where the content is located are an important consideration for some customers However customers also need to consider whether laws in other jurisdi ctions may apply to them Customers should seek advice from their advisors to understand the application of relevant laws to their business and operations AWS policy on granting government access AWS is vigilant about customers' security and does not disc lose or move data in response to a request from the US or other government unless legally required to do so to comply with a legally valid and binding order such as a subpoena or a court order or as is otherwise required by applicable law Non governme ntal or regulatory bodies typically must use recognized international processes such as Mutual Legal Assistance Treaties with the US government to obtain valid and binding orders Additionally AWS notifies customers where practicable before disclosin g their content so customers can seek protection from disclosure unless AWS is legally prohibited from doing so or there is clear indication of illegal conduct in connection with the use of AWS services For additional information see the Amazon Information Requests Portal online Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 11 Common privacy and data protection considerations Many countries have laws designed to protect the privacy of personal data Some countries have one comprehensive data protection law while others address 
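The sketch below is a minimal, hypothetical example (AWS SDK for Python, boto3) of how a customer might exercise two of the controls just listed: pinning content to a single Region of their choice and encrypting it at rest under a customer-managed AWS KMS key. The bucket name and object key are invented for illustration, the Region shown (Asia Pacific (Mumbai), ap-south-1) is simply one possible choice, and a production setup would typically also define key policies, default bucket encryption, and TLS for data in transit.

    # Hypothetical sketch: keep content in a chosen AWS Region and encrypt it
    # at rest with a customer-managed AWS KMS key. Names are example values.
    import boto3

    region = "ap-south-1"
    s3 = boto3.client("s3", region_name=region)
    kms = boto3.client("kms", region_name=region)

    # Customer-managed key; the customer controls its key policy and rotation.
    key_id = kms.create_key(Description="Example key for customer content")["KeyMetadata"]["KeyId"]

    # The bucket, and the objects placed in it, exist only in the chosen Region.
    s3.create_bucket(
        Bucket="example-customer-content-bucket",
        CreateBucketConfiguration={"LocationConstraint": region},
    )

    # Server-side encryption with the customer-managed key; reading the object
    # also requires kms:Decrypt permission on that key.
    s3.put_object(
        Bucket="example-customer-content-bucket",
        Key="records/example.json",
        Body=b"{}",
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId=key_id,
    )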
data protection in a more nuanced way through a variety of laws and regulations While legal and regulatory requirements differ — including due to jurisdictional requirements industry specific requi rements and content specific requirements — there are some common considerations that arise under several leading data protection laws These can be aligned to the typical lifecycle of personal data To help customers analyze and address their privacy and data protection requirements when using AWS to store and process content containing personal data this whitepaper discuss es various stages of this data lifecycle identify key considerations relevant to each stage and provide relevant information about h ow the AWS services operate Many data protection laws allocate responsibilities regarding how a party interacts with personal data and the level of access and control they have over that personal data One common approach is to distinguish between a data controller data processor and data subject The terminology used in different jurisdictions may vary and some laws make more subtle distinctions AWS appreciates that its services are used in many different contexts for different business purposes an d that there may be multiple parties involved in the data lifecycle of personal data included in customer content stored or processed using AWS For simplicity the guidance in the following table assumes that in the context of customer content stored or processed using AWS the customer: • Collects personal data from its end users or other individuals (data subjects) and determines the purpose for which the customer requires and will use the personal data • Has the capacity to control who can access update and use the personal data • Manages the relationship with the individual about whom the personal data relates (referred to in this section as a data subject) including by communicating with the data subject as required to comply with any re levant disclosure and consent requirements As such the customer performs a role similar to that of a data controller as it controls its content and makes decisions about treatment of that content including who is Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 12 authorized to process that content on i ts behalf By comparison AWS performs a role similar to that of a data processor because AWS uses customer content only to provide the AWS services selected by each customer to that customer and does not use customer content for other purposes without t he customer’s consent Note that the terms “data processor” and “data controller” have a very distinct meaning under EU law and this whitepaper is not intended to address specific EU requirements Where a customer processes personal data using the AWS s ervices on behalf of and according to the directions of a thirdparty (who may be the controller of the personal data or another third party with whom it has a business relationship) the customer responsibilities referenced in the following table will be shared and managed between the customer and that third party Table 1 —Data lifecycle stage summary examples and considerations Data lifecycle stage Summary and examples Considerations Collecting personal data It may be appropriate or necessary to inform individuals (data subjects) or seek their consent before collecting their personal data This may include notification about the purpose for which their information will be collected used or disclosed Customer : The customer 
determines and controls when how and why it collects personal data from individuals and decides whether it will include that personal data in customer content it stores or processes using the AWS services The customer may need to disclose the purposes for which it collects that data t o the relevant data subjects obtain the data from a permitted source and use the data only for a permitted purpose As between the customer and AWS the customer has a relationship with the individuals whose personal data the customer stores on AWS and therefore the customer is able to communicate directly with AWS about collection and treatment of their personal data Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 13 Data lifecycle stage Summary and examples Considerations Requirements may differ depending on who personal data is collected from (for example the requirements may differ if personal data is collected from a thirdparty source instead of directly from the individual ) Collection of personal data may only be permitted if it is for a valid or reasonable purpose The customer rather than AWS also know s the scope of any notifications given to or consents obtained by the customer from such individu als relating to the collection of their personal data AWS : AWS does not collect personal data from individuals whose personal data is included in content a customer stores or processes using AWS and AWS has no contact with those individuals Therefore AWS is unable in the se circumstances to communicate with the relevant individuals AWS uses customer content only to provide the AWS services selected by each customer to that customer and does not use customer content for any other purposes without the cu stomer’s consent Using and disclosing personal data It may be appropriate or necessary to use or disclose personal data only for the purpose for which it was collected Customer : The customer determines and controls why it collects personal data what it will be used for who it can be used by and who it is disclosed to The customer must ensure it only does so for permitted purposes The customer will know whether it uses the AWS services to store or process customer content containing personal d ata and therefore is best placed to inform individuals that it will use AWS as a service provider if required AWS : AWS uses customer content only to provide the AWS services selected by each customer to that customer and does not use customer content f or other purposes without the customer’s consent Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 14 Data lifecycle stage Summary and examples Considerations Offshoring personal data If transferring personal data offshore it may be necessary or appropriate to inform individuals (data subjects) of the countries in which the customer will store their personal da ta and/or seek consent to store their personal data in that location It may also be important to consider the comparable protections afforded by the privacy regime in the relevant country where personal data will reside Customer : The customer can choose the AWS Region or Regions in which their content will be located and can choose to deploy their AWS services exclusively in a single Region if preferred The customer should consider whether it should disclose to individuals the locations in which it stores or processes their personal data and obtain any required consents relating to such locations from the relevant 
individuals if necessary As between the customer and AWS the customer has a relationship with the individuals whose personal data the customer stores on AWS and therefore the customer is able to communicate directly with them about such matters AWS : AWS stores and processes each customers’ c ontent only in the AWS Region(s) and using the services chosen by the customer and otherwise will not move customer content without the customer’s consent except as legally required If a customer chooses to store content in more than one Region or c opy or move content between Regions that is solely the customer’s choice and the customer will continue to maintain effective control of its content wherever it is stored and processed General : AWS is ISO 27001 certified and offers robust security features to all customers regardless of the geographical Region in which they store their content Securing personal data It is important to take steps to protect the security of personal data Customer : Customers are responsible for security in the cloud including security of their content (and personal data included in their content) Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 15 Data lifecycle stage Summary and examples Considerations Examples of steps customers can take to help secure their content include implementing strong password policies assigning appropriate permissions to users and taking robust steps to protect their access keys as well as appropriate firewalls and network segmentation encrypting content and properly architecting systems to decrease the risk of data loss and unauthorized access AWS : AWS is responsible for managing the security of the underlying cloud environment For a complete list of all the security measures built into the core AWS Cloud infrastructure and services see the Introduction to AWS Security whitepaper Customers can validate the security controls in place within the AWS environment through AWS certifications and reports in cluding the AWS System and Organization Control (SOC) 1 2 and 3 reports ISO 27001 27017 and 27018 certifications and PCI DSS Attestation of Compliance Accessing and correcting personal data Individuals (data subjects) may have right to access their personal data including for the purposes of correcting it Customer : The customer retains control of content stored or processed using AWS including control over how that content is secured and who can access and amend that content In addition as between the customer and AWS the customer has a relationship with the individuals whose personal data is included in customer content stored or processed using AWS services The customer rather than AWS is therefore able to work with relevant individuals to provide them access to and the ability to co rrect personal data included in customer content Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 16 Data lifecycle stage Summary and examples Considerations AWS : AWS only uses customer content to provide the AWS services selected by each customer to that customer or as otherwise consented to by the customer AWS does not have a direct relationship with the individuals whose personal data is included in content a customer stores or processes using the AWS services Given this and the level of control customers enjoy over customer content AWS does not provide individuals with access to or the ability to correct their personal data Maintaining the quality of p ersonal 
data It may be important to ensure that personal data is accurate and that integrity of that personal data is maintained Customer : When a customer chooses to store or process content containing personal data using AWS the customer has control ov er the quality of that content and the customer retains access to and can correct it This means that the customer can keep the personal data included in customer content accurate complete not misleading and up todate AWS : The AWS SOC 1 Type 2 report includes controls that provide reasonable assurance that data integrity is maintained through all phases including transmission storage and processing Deleting or de identifying personal data Personal data typically should not be kept for longer than is reasonably required and otherwise should be retained in accordance with relevant data retention laws Customer : Only the customer knows why personal data included in customer content stored on AWS was collected and only the customer knows when it is no longer necessary to retain that personal data for legitimate purposes The customer should delete or anonymize the personal data when no longer needed AWS : The AWS services provide the customer with controls to enable the customer to delete content as described in the AWS Documentation Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 17 Privacy breaches Given that customers maintain control of their content when using AWS customers retain the responsibility to monitor their own environment for privacy breaches and to notify regulators and affected individuals as required under applicable law Only the customer can manage this responsibility For example customers control access keys and determine who is authorized to access th eir AWS account AWS does not have visibility of access keys or who is and who is not authorized to log into an account Therefore the customer is responsible for monitoring use misuse distribution or loss of access keys In some jurisdictions it is m andatory to notify individuals or a regulator of unauthorized access to or disclosure of their personal data There are circumstances in which notifying individuals will be the best approach to mitigate risk even though it is not mandatory under the appli cable law The customer determine s when it is appropriate or necessary for them to notify individuals and the notification process they will follow Considerations Customers should consider the specific requirements that apply to them including any indus tryspecific requirements The relevant privacy and data protection laws and regulations applicable to individual customers depend on several factors including where a customer conducts business the industry in which they operate the type of content the y want to store where or from whom the content originates and where the content will be stored Customers concerned about their privacy regulatory obligations should first ensure they identify and understand the requirements that apply to them and seek appropriate advice Conclusion For AWS security is always top priority AWS deliver s services to millions of active customers including enterprises educational institutions and government agencies in over 190 countries AWS customers include financial services providers and healthcare providers and AWS is trusted with some of their most sensitive information Amazon Web Services Using AWS in the Context of Common Privacy a nd Data Protection Considerations 18 AWS services are designed to give 
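To make the last row of the table above concrete, the following minimal sketch (AWS SDK for Python, boto3, with invented bucket and object names) shows two deletion controls a customer might use: deleting a specific Amazon S3 object and configuring a lifecycle rule that expires objects after a retention period the customer chooses. Other AWS services expose their own deletion mechanisms, and versioned buckets need additional rules for noncurrent versions, so this is illustrative rather than a complete data-disposal procedure.

    # Hypothetical sketch: remove content that is no longer needed. The bucket,
    # prefix, and 365-day retention period are example values; actual retention
    # depends on the customer's own legal and business requirements.
    import boto3

    s3 = boto3.client("s3")

    # Delete a specific object once it is no longer required.
    s3.delete_object(Bucket="example-customer-content-bucket", Key="records/example.json")

    # Or expire objects under a prefix automatically after a chosen period.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-customer-content-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-old-records",
                    "Filter": {"Prefix": "records/"},
                    "Status": "Enabled",
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )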
customers flexibility over how they configure and deploy their solutions and how they control their content in cluding where it is stored how it is stored and who has access to it AWS customers can build their own secure applications and store content securely on AWS Contributors Contributors to this document include : • Simon Hollander AWS Legal Senior Corporate Counsel • Jonathan Hatae AWS Legal Senior Corporate Counsel Further reading To help customers further understand how they can address their privacy and data protection requirements customers are encouraged to read the risk compli ance and security whitepapers best practices checklists and guidance published on the AWS website This material can be found at : • http://awsamazoncom/compliance • http://awsamazoncom/security As of the date of this writing specific whitepapers about privacy and data protection considerations are also available for the following countries or Regions: • California • European Union • Germany • Australia • Hong Kong • Japan • Malaysia • New Zealand • Philippines Amazon Web Services Using AWS in the Context of Common Privacy and Data Protection Considerations 19 • Singapore AWS also offers training to help customers learn how to design develop and operate available efficient and secure applications on the AWS Cloud and gain proficiency with AWS services and solutions AWS offers free instructional videos selfpaced labs and instructor led classes Further information on AWS training is available at: http://awsamazoncom/training/ AWS certifications certify the technical skills and knowledge associated with the best practices for building secure and reliable cloud based applica tions using AWS technology Further information on AWS certifications is available at: http://awsamazoncom/certification/ If you require further information contact AWS at: https://awsamazoncom/contact us/ or contact your local AWS account representative Document revisions Date Description September 28 2021 Refresh ed to reflect latest information about AWS services and infrastructure May 2018 Fourth Publication February 2018 Third Publication December 2016 Second Publication September 2016 First Publication
|
General
|
consultant
|
Best Practices
|
Using_AWS_in_the_Context_of_Hong_Kong_Privacy_Considerations
|
Using AWS in the Context of Hong Kong Privacy Considerations Published December 1 2017 Updated August 31 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates supp liers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Overview 1 Scope 1 Customer Content: Considerations relevant to privacy and data protection 2 AWS shared respons ibility approach to managing cloud security 3 Understanding security OF the cloud 4 Understanding security IN the cloud 5 AWS Regions: Where will content be stored? 7 How can customers select their Region(s)? 7 Transfer of personal data cross border 9 Who can access customer content? 9 Customer control over content 9 AWS access to customer content 10 Government rights of access 10 AWS po licy on granting government access 11 Privacy and Data Protection in Hong Kong: The PDPO 11 Privacy breaches 17 Other considerations 18 Closing remarks 18 Additional resources 19 Further reading 19 Document history 19 About this Guide This document provides information to assist customers who want to use AWS to store or process content containing personal data in the context of common privacy and data protection considerations and the Hong Kong Personal Data (Privacy) Ordinance (Chapte r 486 of the Laws of Hong Kong) (PDPO) It will help customers understand: • How AWS services operate including how customers can address security and encrypt their content • The geographic locations where customers can choose to store content and other relevant considerations • The respective roles the customer and AWS each play in mana ging and securing content stored on AWS services Amazon Web Services Using AWS in the Context of Hong Kong Privacy Considerations 1 Overview This document provides information to assist customers who want to use AWS to store or process content containing personal data in the context of common privacy and data protection considerations and the Hong Kong Personal Data (Privacy) Ordinance (Chapter 486 of the Laws of Hong Kong) (PDPO) It will help customers understand: • How AWS services operate including how customers can address security and encrypt their content • The geographic locations where customers can choose to store content and other relevant considerations • The respective roles the customer and AWS each play in managing and securing content stored on AWS services Scope This whitepaper focuses on typical questi ons asked by AWS customers when they are considering implications of the PDP O relevant to their use of AWS services to store or process content containing personal data There will also be other relevant considerations for each customer to address for exa mple a customer may need to comply with industry specific requirements the laws of other jurisdictions where that customer conducts business or contractual commitments a customer makes to a third party This paper is provided solely for 
informational pu rposes It is not legal advice and should not be relied on as legal advice As each customer’s requirements will differ AWS strongly encourages its customers to obtain appropriate advice on their implementation of privacy and data protection requirements and on applicable laws and other requirements relevant to their business When we refer to content in this paper we mean software (including virtual machine images) data text audio video images and other content that a customer or any end user st ores or processes using AWS services For example a customer’s content includes objects that the customer stores using Amazon Simple Storage Service files stored on an Amazon Elastic Block Store volume or the contents of an Amazon DynamoDB database tabl e Such content may but will not necessarily include personal data relating to that customer its end users or third parties Customers maintain ownership and control of their content and select which AWS services can process store and host their cont ent AWS does not access or use customer content without Amazon Web Services Using AWS in the Context of Hong Kong Privacy Consider ations 2 customer consent except as necessary to comply with a law or binding order of a governmental body The terms of the AWS Customer Agreement apply to customer content Customer content does not include data that a customer provides to us in connection with the creation or administration of its AWS accounts such as a customer’s names phone numbers email addresses and billing information —this is referred to as account information and it is governed by the AWS Privacy Notice Customer Content: Considerations relevant to privacy and data protection Storage of content presents all organizations with a number of common practical matters to consider including: • Will the content be secure? • Where will content be stored? • Who will have access to content? • What laws and regulations apply to the content and what is needed to comply with these? These considerations are not new and are not cloud specific They are relevant to internally hosted and operated systems as well as traditional thirdparty hosted services Each may involve storage of content on third party equipment or on third party premises with that content managed accessed or used by thirdparty personnel When using AWS services each AWS customer maintains ownership and control of their content including control over: • What content they choose to store or process using AWS services • Which AWS services they use with their content • The Region(s) where their content is stored • The format structure and security of their content including whether it is masked anonymized or encrypted • Who has access to their AWS accounts and content and how those access rights are granted managed and revoked Because AWS customers retain ownership and control over their content within the AWS environment they also retain responsibilities relating to the security of that content as part of the AWS Shared Responsibility Model This model is fundamental to understanding the respective roles of the customer and AWS in the context of privacy Amazon Web Services Using AWS in the Context of Hong Kong Privacy Considerations 3 and data protection requirements that may apply to content that customers choose to store or process using AWS services AWS shared responsibility approach to managing cloud security Will customer content be secure? 
Moving IT infrastructure to AWS creates a Shared Responsibility Model between the customer and AWS as both the customer and AWS have important roles in the operation and management of security AWS operates manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the AWS se rvices operate The customer is responsible for management of the guest operating system (including updates and security patches to the guest operating system) and associated application software as well as the configuration of other security features suc h as the AWS provided security group firewall The customer will generally connect to the AWS environment through services the customer acquires from third parties (for example internet service providers) AWS does not provide these connections and they are therefore part of the customer's area of responsibility Customers should consider the security of these connections and the security responsibilities of such thirdparties in relation to their systems The respective roles of the customer and AWS in the Shared Responsibility Model are shown in the following figure: Amazon Web Services Using AWS in the Context of Hong Kong Privacy Considerations 4 Shared Responsibility Model What does the Shared Responsibility Model mean for the security of customer content? When evaluating the security of a cloud solution it is important for cust omers to understand and distinguish between: • Security measures that the cloud service provider (AWS) implements and operates – “security of the cloud” • Security measures that the customer implements and operates related to the security of customer content and applications that make use of AWS services – “security in the cloud” While AWS manages security of the cloud security in the cloud is the responsibility of the customer as customers retain control of what security they choose to implement to protect their own content applications systems and networks – no differently than they would for applications in an on site data cente r Understanding security OF the cloud AWS is responsible for managing the security of the underlying cloud environment The AWS cloud infrastructure has been architected to be one of the most flexible and secure cloud computing environments available designed to provide optimum availability while providing complete customer segregation It provides extremely scalable highly reli able Amazon Web Services Using AWS in the Context of Hong Kong Privacy Considerations 5 services that enable customers to deploy applications and content quickly and securely at massive global scale if necessary AWS services are content agnostic in that they offer the same high level of security to all customers regardless of the typ e of content being stored or the geographical region in which they store their content AWS’s world class highly secure data cent ers utili ze state ofthe art electronic surveillance and multi factor access control systems Data centers are staffed 24x7 b y trained security guards and access is authori zed strictly on a least privileged basis For a complete list of all the security measures built into the core AWS Cloud infrastructure and services see Best Practices for Security Identity & Compliance We are vigilant about our customers' security and have implemented sophisticated technical and physical measures against unauthori zed access Customers can validate the security controls in place within the AWS environment through 
AWS certifications and reports including the AWS System & Organization Control (SOC) 1 2 and 3 reports ISO 27001 27017 27018 and 9001 certifications and PCI DSS compliance reports Our ISO 27018 certification demonstrates that AWS has a system of controls in place that specifically address the privacy protection of customer content These reports and certifications are produced by independent third party auditors and attest to t he design and operating effectiveness of AWS security controls AWS compliance certifications and reports can be requested at https://pagesawscloudcom/compliance contact ushtml More information on AWS compliance certifications reports and alignment with best practices and standards can be found at AWS Compliance Understanding security IN the cloud Customers retain ownership and control of their content when using AWS services Customers rather than AWS determine what content they store or process using AWS services Because it is the customer who decides what content to store or process using AWS services only the customer can determine what level of security is appropriate for the content they store and process using AWS Customers also have complete control over which services they use and whom they empower to access their content and services including what credentials will be required Customers control how they configure their environments and secure their content including whether they encrypt their content (at rest and in transit) and what other security features and tools they use and how they use them AWS does not c hange customer configuration settings as these settings are determined and controlled by the customer AWS customers have the complete freedom to design their security Amazon Web Services Using AWS in the Context of Hong Kong Privacy Considerations 6 architecture to meet their compliance needs This is a key difference from traditional hosting solutions where the provider decides on the architecture AWS enables and empowers the customer to decide when and how security measures will be implemented in the cloud in accordance with each customer's business needs For example if a higher a vailability architecture is required to protect customer content the customer may add redundant systems backups locations network uplinks etc to create a more resilient high availability architecture If restricted access to customer content is requ ired AWS enables the customer to implement access rights management controls both on a systems level and through encryption on a data level To assist customers in designing implementing and operating their own secure AWS environment AWS provides a wide selection of security tools and features customers can use Customers can also use their own security tools and controls including a wide variety of third party security solutions Customers can configure their AWS services to leverage a range of such se curity features tools and controls to protect their content including sophisticated identity and access management tools security capabilities encryption and network security Examples of steps customers can take to help secure their content include i mplementing: • Strong password policies enabling Multi Factor Authentication (MFA) assigning appropriate permissions to users and taking robust steps to protect their access keys • Appropriate firewalls and network segmentation encrypting content and properly architecting systems to decrease the risk of data loss and unauthorized access Because customers rather than AWS control these 
important factors, customers retain responsibility for their choices and for security of the content they store or process using AWS services or that they connect to their AWS infrastructure, such as the guest operating system, applications on their compute instances, and content stored and processed in AWS storage, databases or other services. AWS provides an advanced set of access, encryption and logging features to help customers manage their content effectively, including AWS Key Management Service and AWS CloudTrail. To assist customers in integrating AWS security controls into their existing control frameworks and to help customers design and execute security assessments of their organization's use of AWS services, AWS publishes a number of whitepapers relating to security, governance, risk and compliance, and a number of checklists and best practices. Customers are also free to design and execute security assessments according to their own preferences and can request permission to conduct scans of their cloud infrastructure, as long as those scans are limited to the customer's compute instances and do not violate the AWS Acceptable Use Policy. AWS Regions: Where will content be stored? AWS data centers are built in clusters in various global regions. We refer to each of our data center clusters in a given country as an "AWS Region". Customers have access to a number of AWS Regions around the world [1]. Customers can choose to use one Region, all Regions or any combination of AWS Regions. For a list of AWS Regions and a real-time location map, see Global Infrastructure. AWS customers choose the AWS Region or Regions in which their content and servers will be located. This allows customers with geographic-specific requirements to establish environments in a location or locations of their choice. For example, AWS customers in Singapore can choose to deploy their AWS services exclusively in one AWS Region, such as the Asia Pacific (Singapore) Region, and store their content onshore in Singapore if this is their preferred location. If the customer makes this choice, AWS will not move their content from Singapore without the customer's consent, except as legally required. Customers always retain control of which AWS Region(s) are used to store and process content. AWS only stores and processes each customer's content in the AWS Region(s) and using the services chosen by the customer, and otherwise will not move customer content without the customer's consent, except as legally required. How can customers select their Region(s)?
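The paragraphs that follow describe how a customer identifies the Region in the AWS Management Console or in an API request. As a purely illustrative sketch of the same idea (the Python SDK boto3, the bucket name, and the choice of the Asia Pacific (Singapore) Region are assumptions for this example, not details taken from the whitepaper), a customer might pin an Amazon S3 bucket, and therefore the objects stored in it, to a single Region like this:

```python
import boto3

# Hypothetical example: create an S3 bucket tied to the Asia Pacific
# (Singapore) Region so that objects written to it are stored there.
REGION = "ap-southeast-1"          # Region chosen by the customer
BUCKET = "example-sg-only-bucket"  # placeholder bucket name

s3 = boto3.client("s3", region_name=REGION)

# The LocationConstraint ties the bucket to the selected Region.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Confirm where the bucket (and therefore its content) resides.
print(s3.get_bucket_location(Bucket=BUCKET)["LocationConstraint"])
```

Because the Region is stated explicitly both in the client and in the bucket configuration, content written to this bucket stays in the selected Region unless the customer later chooses to copy or move it.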
When using the AWS Management Console or in placing a request through an AWS Application Programming Interface (API), the customer identifies the particular AWS Region(s) where it wishes to use AWS services. [1] AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. AWS China (Beijing) is also an isolated AWS Region; customers who wish to use the AWS China (Beijing) Region are required to sign up for a separate set of account credentials unique to the China (Beijing) Region. The following figure provides an example of the AWS Region selection menu presented to customers when uploading content to an AWS storage service or provisioning compute resources using the AWS Management Console. [Figure: Selecting AWS Global Regions in the AWS Management Console] Customers can prescribe the AWS Region to be used for their AWS resources. Amazon Virtual Private Cloud (Amazon VPC) lets the customer provision a private, isolated section of the AWS Cloud where the customer can launch AWS resources in a virtual network that the customer defines. With Amazon VPC, customers can define a virtual network topology that closely resembles a traditional network that might operate in their own data center. Any compute and other resources launched by the customer into the VPC will be located in the AWS Region designated by the customer. For example, by creating a VPC in the Asia Pacific (Singapore) Region and providing a link (either a VPN or AWS Direct Connect) back to the customer's data center, all compute resources launched into that VPC would only reside in the Asia Pacific (Singapore) Region. This option can also be leveraged for other AWS Regions. Transfer of personal data cross-border: At the time of writing, the provisions of the PDPO restricting the transfer of personal data outside of Hong Kong (Section 33) are not yet in operation. The Office of the Privacy Commissioner for Personal Data, Hong Kong ("PCPD") has issued a cloud computing information leaflet to advise organizations on the factors they should take into account when considering engaging in cloud computing and to explain the relevance of the PDPO to cloud computing. The leaflet recommends that data users should know the locations/jurisdictions where the personal data will be stored and should ensure that such data is treated with a similar level of protection as it would receive if it resided in Hong Kong. Further, it recommends that data subjects be made aware of the trans-border arrangement with regard to how their personal data is protected. Who can access customer content?
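The next subsection notes that customers control the security of their content, including whether it is encrypted, and that they can manage their own keys through AWS Key Management Service. As a hedged illustration only (the bucket name, Region, and KMS key ARN below are placeholders, not values from this paper, and the boto3 SDK is an assumed tool), default encryption at rest with a customer-managed KMS key might be configured as follows:

```python
import boto3

# Hypothetical example: enforce server-side encryption on an existing bucket
# using a customer-managed AWS KMS key chosen by the customer.
BUCKET = "example-sg-only-bucket"  # placeholder bucket name
KMS_KEY_ARN = "arn:aws:kms:ap-southeast-1:111122223333:key/EXAMPLE-KEY-ID"  # placeholder

s3 = boto3.client("s3", region_name="ap-southeast-1")

# With default bucket encryption, every new object is encrypted at rest with
# the customer-managed KMS key unless an individual request overrides it.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ARN,
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```

With a customer-managed key, the customer also controls the key policy that determines which principals may use the key to decrypt the content.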
Customer control over content Customers using AWS maintain and do not release effective control over their content within the AWS environment They can : • Determine where their content will be located for example the type of storage they use on AWS and the geographic location (by AWS Region) of that storage • Control the format structure and security of their content including whether it is masked anonym ized or encrypted AWS offers customers options to implement strong encryption for their customer content in transit or at rest and also provides customers with the option to manage their own encryption keys or use thirdparty encryption mechanisms of the ir choice • Manage other access controls such as identity access management permissions and security credentials This allows AWS customers to control the entire life cycle of their content on AWS and manage their content in accordance with their own speci fic needs including content classification access control retention and deletion Amazon Web Services Using AWS in the Context of Hong Kong Privacy Considerations 10 AWS access to customer content AWS makes available to each customer the compute storage database networking or other services as described on our website Customers have a number of options to encrypt their content when using the services including using AWS encryption features (such as AWS Key Management Service) managing their own encryption keys or using a third party encryption mechanism of their own choice AW S does not access or use customer content without the customer’s consent except as legally required AWS never uses customer content or derives information from it for other purposes such as marketing or advertising Government rights of access Queries ar e often raised about the rights of domestic and foreign government agencies to access content held in cloud services Customers are often confused about issues of data sovereignty including whether and in what circumstances governments may have access to their content The local laws that apply in the jurisdiction where the content is located are an important consideration for some customers However customers also need to consider whether laws in other jurisdictions may apply to them Customers should se ek advice to understand the application of relevant laws to their business and operations When concerns or questions are raised about the rights of domestic or foreign governments to seek access to content stored in the cloud it is important to understan d that relevant government bodies may have rights to issue requests for such content under laws that already apply to the customer For example a company doing business in Country X could be subject to a legal request for information even if the content i s stored in Country Y Typically a government agency seeking access to the data of an entity will address any request for information directly to that entity rather than to the cloud provider Most countries have legislation that enables law enforcement a nd government security bodies to seek access to information In fact most countries have processes (including Mutual Legal Assistance Treaties) to enable the transfer of information to other countries in response to appropriate legal requests for informat ion (eg relating to criminal acts) However it is important to remember that each relevant law will contain criteria that must be satisfied in order for the relevant law enforcement body to make a valid request For example the government agency seekin g access may need to 
show Amazon Web Services Using AWS in the Context of Hong Kong Privacy Considerations 11 it has a valid reason for requiring a party to provide access to content and may need to obtain a court order or warrant Many countries have data access laws which purport to apply extraterritorially An example of a US law wi th extra territorial reach that is often mentioned in the context of cloud services is the US Patriot Act The Patriot Act is similar to laws in other developed nations that enable governments to obtain information with respect to investigations relating to international terrorism and other foreign intelligence issues Any request for documents under the Patriot Act requires a court order demonstrating that the request complies with the law The Patriot Act generally applies to all companies with an opera tion in the US irrespective of where they are incorporated and/or operating globally Companies headquartered or operating outside the United States which also do business in the United States may find they are subject to the Patriot Act by reason of their own business operations AWS policy on granting government access AWS is vigilant about customers' security and does not disclose or move data in response to a request from the US or other government unless legally required to do so in order to comply with a legally valid and binding order such as a subpoena or a court order or as is otherwise required by applicable law Non US governmental or regulatory bodies typically must use recognized international processes such as Mutual Legal Assistance Treaties with the US government to obtain valid and binding orders Additionally our practice is to notify customers where practicable before disclosing their content so they can seek protection from disclosure unless we are legally prohibited from doing so or there is clear indication of illegal conduct in connection w ith the use of AWS services For more information see the Amazon Information Requests Portal online Privacy and Data Protection in Hong Kong: The PDPO The main requirements fo r handling personal data are set out in the Data Protection Principles (DPP) of the Personal Data (Privacy) Ordinance ( PDPO ) The DPPs impose requirements for collecting managing using disclosing and otherwise handling personal data collected from individuals in Hong Kong The PDPO distinguishes between “data users” and “data processors ” “Data users” control the collection holding processing or use of personal data “Data processors” Amazon Web Services Using AWS in the Context of Hong Kong Privacy Considerations 12 process personal data on behalf of others and do not process data for their own purposes AWS appreciates that its services are used in many different contexts for different business purposes and that there may be multiple parties involved in the data lifecycle of personal information included in customer content stored or processed using AWS services For simplicity the guidance included in the table below assumes that in the context of the customer content stored on the AWS services the customer : • Acquires pe rsonal information from their end users and determines the purpose for which they require and will use the personal data • Has the capacity to control who can access update and use the personal data • Manages the relationship with the individual about whom the personal information relates including by communication with the individual as required to comply with any relevant disclosure and consent requirements Customers who work with or 
rely on third parties to discharge these responsibilities are solely res ponsible for managing their relationships with third parties Customers not AWS are further responsible for determining how these relationships may be subject to the PDPO We summarize certain DPP requirements that are particularly important for a custo mer to consider if using AWS to store personal data in the table below We also discuss aspects of the AWS services relevant to these requirements Amazon Web Services Using AWS in t he Context of Hong Kong Privacy Considerations 13 Data Protection Principle Summary of Data Protection Obligations Considerations Purpose and manner of collection of personal data Personal data must be collected in a lawful and fair way for a purpose directly related to a function or activity of the data user Data subjects must be notified of the purpose and the classes of persons to whom the data may b e transferred Data collected should be necessary but not excessive Customer: The customer determines and controls when how and why it collects personal data from individuals and decides whether it will include that personal data in customer content it stores or processes using AWS services The customer must ensure that personal data is collected in a lawful and fair way for a purpose directly related to a function or activity of the data user As between the customer and AWS the customer has a relationship with the individuals whose personal data the customer stores on AWS and therefore the customer is able to communicate directly with them about collection and treatment of their personal data The customer rather than AWS will also know the scope of any notifications given to or consents obtained by the customer from such individuals relating to the collection of their personal data AWS: AWS does not collect personal data from individuals whose personal data is included in content a custom er stores or processes using AWS and AWS has no contact with those individuals Therefore AWS is not required and is unable in the circumstances to communicate with the relevant individuals AWS will not know the nature of the customer content used by th e customer with the AWS services AWS only uses customer content to provide the AWS services selected by each customer to that customer and does not use customer content for any other purposes except as legally required Amazon Web Services Using AWS in the Context of Hong Kong Privacy Considerations 14 Data Protection Principle Summary of Data Protection Obligations Considerations Accuracy and duration of retentio n of personal data Data users must take all practicable steps to ensure that personal data is accurate with regard to the purpose for which the personal data is being or will be used If the data user has reasonable grounds to believe personal data is inaccurate with regard to the purpose for which it is being or will be used the personal data should not be used for that purpose unless either those grounds cease to apply or the data is erased Personal data should not be kept for longer than is necess ary for the fulfillment of the purpose for which the data is being or will be used Customer : When a customer chooses to store or process content containing personal data using AWS the customer has control over the quality of that content and the customer retains access to and can correct it This means that the customer must take all required steps to ensure that personal data included in customer content is accurate complete not misleading and kept up 
todate Only the customer knows why personal data included in customer content stored on AWS was collected and how it will use the personal data and only the customer knows when it is no longer necessary to retain that personal data for legitimate purposes The customer should delete or anonymize the pe rsonal data when no longer needed AWS : The AWS SOC 1 & 2 Type 2 report s include controls that provide reasonable assurance that data integrity is maintained through all phases including transmission storage and processing The AWS services provide the cu stomer with controls to enable the customer to delete content as described in AWS Documentation Use of personal data Personal data must be used for the purpose for which the data is collected or for a directly related purpose unless voluntary and explicit consent with a new purpose is obtained from the data subject Customer: The customer determines and controls why it collects personal data what it will be used for who it can be used by and who it is disclosed to The customer must ensure it only does so for permitted purposes AWS: AWS only uses customer content to provide the AWS services selected by each customer to that customer and does not use customer content for other purposes except as legally required Amazon Web Services Using AWS in the Context of Ho ng Kong Privacy Considerations 15 Data Protection Principle Summary of Data Protection Obligations Considerations Security of personal data Data users should take all practicable steps to ensure that personal data is protected against unauthorized or accidental access processing erasure loss or use Where data users engage data processo rs to process personal data on their behalf the data user must adopt contractual or other means to prevent unauthorized or accidental access processing erasure loss or use of the data transferred to the data processor Customer: Customers are responsi ble for their content and for security in the cloud including security of their content (and personal data included in their content) AWS: AWS is responsible for managing the security of the underlying cloud environment For a complete list of all the security measures built into the core AWS cloud infrastructure and services see Best Practices for Security Identity & Compliance Customers can validate the security controls in place within the AWS environment through AWS certifications and reports including the AWS System & Organization Control (SOC) 1 2 and 3 reports ISO 27001 27017 and 27018 and PCI DSS compliance reports Information to be generally available Data users should take all practicable steps to ensure that the public can ascertain its personal data policies and practices be informed of the kind of personal data the data user holds and the main purposes for whic h this data is to be used Customer: The customer is responsible for maintaining its own privacy policy that complies with the PDPO and ensuring that these matters can be ascertained by the public AWS: AWS does not know when a customer chooses to upload t o AWS content that contains personal data ; does not collect personal data from individuals whose personal data is included in content a customer stores or processes using AWS ; and has no contact with those individuals AWS only uses customer content to pr ovide the AWS services selected by each customer to that customer and does not use customer content for other purposes except as legally required Amazon Web Services Using AWS in the Context of Hong Kong Privacy Considerations 16 Data 
Protection Principle Summary of Data Protection Obligations Considerations Access to personal data Data subjects must be given access to their personal data and be allowed to make c orrections if it is inaccurate Customer: The customer retains control of content stored or processed using AWS including control over how that content is secured and who can access and amend that content In addition as between the customer and AWS the customer has a relationship with the individuals whose personal data is included in customer content stored or processed using AWS services The customer rather than AWS is therefore able to work with relevant individuals to provide them access to and th e ability to correct personal data included in customer content AWS: AWS only uses customer content to provide the AWS services selected by each customer to that customer and AWS has no contact with the individuals whose personal data is included in content a customer stores or processes using the AWS services Given this and the level of control customers enjoy over customer content AWS is not required and is unable in the circumstances to provide such individuals with access to or the ability to c orrect their personal data Amazon Web Services Using AWS in the Context of Hong Kong Privacy Considerations 17 Data Protection Principle Summary of Data Protection Obligations Considerations Offshoring personal data If transferring personal data offshore it may be appropriate to inform individuals (data subjects) of the countries in which the customer will store their personal data and/or seek consent to store their personal data in that location It may also be important to consider the comparable protections afforded by the privacy regime in the relevant country where personal data will reside The cross border data transfer restriction in Hong Kong (Sec tion 33 of the PDPO) was passed into law in 1995 at the time the PDPO was first introduced However as of August 2021 the section has not been brought into operation Customer: The customer can choose the AWS Region or Regions in which to align to their requirements ; where their content will be located ; and can choose to deploy their AWS services exclusively in a single Region if preferred AWS services are structured so that a customer maintains effective control of customer content regardless of what Re gion they use for their content The customer should consider whether it should disclose to individuals the locations in which it stores or processes their personal data and obtain any required consents relating to such locations from the relevant individu als if necessary As between the customer and AWS the customer has a relationship with the individuals whose personal data the customer stores on AWS and therefore the customer is able to communicate directly with them about such matters AWS: AWS only stores and processes each customer ’s content in the AWS Region(s) and using the services chosen by the customer and otherwise will not move customer content without the customer’s consent except as legally required If a customer chooses to store content in more than one Region or copy or move content between regions that is solely the customer's choice and the customer will continue to maintain effective control of its content wherever it is stored and processed AWS is ISO 27001 certified and offers robust security features to all customers regardless of the geographical Region in which they store their content Privacy breaches Given that customers maintain control of 
their content when using AWS customers retain the responsibility to monitor their own environment for privacy breaches and to notify regulators and affected individuals as required under applicable law Only the customer is able to manage this responsibility A customer’s AWS access keys can be used as an example to help explain why the customer rather than AWS is best placed to manage this responsibility Amazon Web Services Using AWS in the Context of Hong Kong Privacy Considerations 18 Customers control access keys and determine who is authorized to access their AWS account AWS does not h ave visibility of access keys or of who is and who is not authorized to log into an account Therefore the customer is responsible for monitoring use misuse distribution or loss of access keys In some jurisdictions it is mandatory to notify individua ls or a regulator of unauthorized access to or disclosure of their personal data and there may be circumstances in which notifying individuals will be the best approach in order to mitigate risk even though it is not mandatory under the applicable law It is for the customer to determine when it is appropriate or necessary for them to notify individuals and the notification process they will follow Other considerations This whitepaper does not discuss other Hong Kong laws aside from the PDPO Customers should consider the specific requirements that apply to them including any industry specific requirements The relevant privacy and data protection laws and regulations applicable to individual customers will depend on several factors including where a customer conducts business the industry in which they operate the type of content they wish to store where or from whom the content originates and where the content will be stored Customers concerned about their privacy regulatory obligations should first ensure they identify and understand the requirements applying to them and seek appropriate advice Closing remarks At AWS security is always our top priority We deliver services to millions of active customers each month including enterpri ses educational institutions and government agencies in over 190 countries Our customers include financial services providers and healthcare providers and we are trusted with some of their most sensitive information AWS services are designed to give cus tomers flexibility over how they configure and deploy their solutions as well as control over their content including where it is stored how it is stored and who has access to it AWS customers can build their own secure applications and store content s ecurely on AWS Amazon Web Services Using AWS in the Context of Hong Kong Privacy Considerations 19 Additional resources To help customers further understand how they can address their privacy and data protection requirements customers are encouraged to read the risk compliance and security whitepapers best practices checklists and g uidance published on the AWS website This material can be found at https://awsamazoncom/compliance and https://awsamazoncom/security Further reading AWS also offers training to help customers learn how to design develop and operate available efficient and secure applications on the AWS cloud and gain proficiency with AWS services and solutions We offer free instructional videos selfpaced labs and instructor led classes Further information on AWS training is available at https://awsamazoncom/training/ AWS certifications certify the technical skills and knowledge associated with the best practices for 
building secure and reliable cloud-based applications using AWS technology. For more information on AWS certifications, see https://aws.amazon.com/certification/. If you require further information, contact AWS or contact your local AWS account representative. Document history: August 31, 2021, Reviewed for technical accuracy; May 1, 2018, Second publication; December 1, 2017, First publication.
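As an illustrative closing sketch related to the access-key monitoring responsibility described under Privacy breaches above (assuming the boto3 SDK and sufficient IAM read permissions; the reporting logic is a placeholder, not guidance from this paper), a customer might periodically review who holds access keys and when each key was last used:

```python
import boto3

# Hypothetical sketch: enumerate IAM users' access keys and when each key was
# last used, to support the customer's own monitoring of key use and misuse.
iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
            info = last_used["AccessKeyLastUsed"]
            # LastUsedDate is absent for keys that have never been used.
            print(
                user["UserName"],
                key["AccessKeyId"],
                key["Status"],
                info.get("LastUsedDate", "never used"),
            )
```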
|
General
|
consultant
|
Best Practices
|
Using_AWS_in_the_Context_of_Japan_Privacy_Considerations
|
Using AWS in the Context of Japan Privacy Considerations First Published May 2018 Updated March 2022 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2022 Amazon Web Services Inc or its affiliates All rights reserved Contents Considerations relevant to privacy and data protection 2 The AWS Shared Responsibility approach to managing cloud security 3 Understanding security OF the cloud 4 Understanding security IN the cloud 5 AWS Regions: Where will content be stored? 7 How can customers select their Region(s)? 8 Transfer of personal data crossborder 10 Who can access customer content? 11 Customer control over content 11 AWS access to customer content 11 Government rights of access 11 AWS policy on granting government access 12 Privacy and data protection in Japan: The Act on the Protection of Personal Information 13 Privacy breaches 21 Consideration 22 Conclusion 22 Further reading 23 Document Revisions 24 Notes 24 Abstract This document provides information to assist customers who want to use Amazon Web Services (AWS) to store or process content containing personal information in the context of key privacy and data protection considerations and the Act on the Protection of Personal Information (“APPI”) It helps customers understand: • The way AWS services operate including how customers can address security and encrypt their content • The geographic locations where customers can choose to store content and other relevant considerations • The respective roles the customer and AWS each play in managing and securing content stored on AWS Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 1 Introduction This whitepaper focuses on typical questions asked by AWS customers when they consider the implications of the APPI on their use of AWS services to store or process content containing personal information There are other relevant considerations for each customer to address; for example a customer may need to comply with industryspecific requirements the laws of other jurisdictions where that customer conducts business or contractual commitments a customer makes to a third party This whitepaper is provided solely for informational purposes It is not legal advice and should not be relied on as legal advice As each customer’s requirements differ AWS strongly encourages its customers to obtain appropriate advice on their implementation of privacy and data protection requirements and on applicable laws and other requirements relevant to their business The term “content” in this whitepaper refers to software (including virtual machine images) data text audio video images and other content that a customer or any end user stores or processes using AWS For example a customer’s content includes objects that the customer stores using Amazon Simple Storage Service (Amazon S3) files stored on an Amazon Elastic Block Store (Amazon EBS) volume 
or the contents of an Amazon DynamoDB database table Such content may but will not necessarily include personal information relating to that customer its end users or third parties The terms of the AWS Customer Agreement or any other relevant agreement with AWS governing the use of AWS services apply to customer content Customer content does not include information that a customer provides to AWS in connection with the creation or administration of its AWS accounts such as a customer’s names phone numbers email addresses and billing information AWS refers to this as account information and it is governed by the AWS Privacy Notice AWS changes constantly and the AWS Privacy Notice may also change Check our website frequently to see recent changes Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 2 Considerations relevant to privacy and data protection Storage of content presents all organizations with a number of common practical matters to consider including: • Will the content be secure? • Where will content be stored? • Who will have access to content? • What laws and regulations apply to the content and what is needed to comply with these? These considerations are not new and are not cloudspecific They are relevant to internally hosted and operated systems as well as traditional third party hosted services Each may involve storage of content on third party equipment or on third party premises with that content managed accessed or used by third party personnel When using AWS services each AWS customer maintains ownership and control of their content including control over: • What content they choose to store or process using AWS services • Which AWS services they use with their content • The Region(s) where their content is stored • The format structure and security of their content including whether it is masked anonymized or encrypted • Who has access to their AWS accounts and content and how those access rights are granted managed and revoked Because AWS customers retain ownership and control over their content within the AWS environment they also retain responsibilities relating to the security of that content as part of the AWS “shared responsibility” model This shared responsibility model is fundamental to understanding the respective roles of the customer and AWS in the context of privacy and data protection requirements that may apply to content that customers choose to store or process using AWS services Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 3 The AWS Shared Responsibility approach to managing cloud security Will customer content be secure? 
Moving IT infrastructure to AWS creates a shared responsibility model between the customer and AWS as both the customer and AWS have important roles in the operation and management of security AWS operates manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the AWS services operate The customer is responsible for management of the guest operating system (including updates and security patches to the guest operating system) and associated application software as well as the configuration of the AWS provided security group firewall and other securityrelated features The customer generally connects to the AWS environment through services the customer acquires from third parties (for example internet service providers) AWS does not provide these connections; they are part of the customer’s area of responsibility Customers should consider the security of these connections and the security responsibilities of such third parties in relation to their systems The respective roles of the customer and AWS in the shared responsibility model are shown in Figure 1: Figure 1 –The AWS Shared Responsibility Model Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 4 What does the shared responsibility model mean for the security of customer content? When evaluating the security of a cloud solution it is important for customers to understand and distinguish between: • Security measures that the cloud service provider (AWS) implements and operates – “security of the cloud” • Security measures that the customer implements and operates related to the security of customer content and applications that make use of AWS services – “security in the cloud” While AWS manages security of the cloud security in the cloud is the responsibility of the customer as customers retain control of what security they choose to implement to protect their own content applications systems and networks – no differently than they would for applications in an onsite data center Understanding security OF the cloud AWS is responsible for managing the security of the underlying cloud environment The AWS cloud infrastructure has been architected to be one of the most flexible and secure cloud computing environments available designed to provide optimum availability while providing complete customer segregation It provides extremely scalable highly reliable services that enable customers to deploy applications and content quickly and securely at massive global scale if necessary AWS services are content agnostic in that they offer the same high level of security to all customers regardless of the type of content being stored or the geographical region in which they store their content AWS’s worldclass highly secure data centers utilize state ofthe art electronic surveillance and multifactor access control systems Data centers are staffed 24 hours a day seven days a week by trained security guards and access is authorized strictly on a least privileged basis For a complete list of all the security measures built into the core AWS cloud infrastructure and services see the Introduction to AWS Security whitepaper AWS is vigilant about its customers’ security and has implemented sophisticated technical and physical measures against unauthorized access Customers can validate the security controls in place within the AWS environment through AWS certifications and reports including the AWS System & Organization Control (SOC) 1 2 
and 3 reports ISO 27001 27017 27018 and 9001 certifications an d PCI DSS Attestation of Compliance The AWS ISO 27018 certification demonstrates that AWS has a system of controls in place that specifically address the privacy protection of customer content These reports and certifications are produced by independent third party auditors and attest to the design and operating effectiveness of AWS security controls AWS compliance certifications and reports can be requested at AWS Artifact More information on AWS compliance certifications reports and alignment with best practices and standards can be found on the Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 5 AWS Compliance site Understanding security IN the cloud Customers retain ownership and control of their content when using AWS services Customers rather than AWS determine what content they store or process using AWS services Because it is the customer who decides what content to store or process using AWS services only the customer can determine what level of security is appropriate for the content they store and process using AWS Customers also have complete control over which services they use and whom they empower to access their content and services including what credentials are required Customers control how they configure their environments and secure their content including whether they encrypt their content (at rest and in transit) and what other security features and tools they use and how they use them AWS does not change customer configuration settings as these settings are determined and controlled by the customer AWS customers have the complete freedom to design their security architecture to meet their compliance needs This is a key difference from traditional hosting solutions where the provider decides on the architecture AWS enables and empowers the customer to decide when and how security measures are implemented in the cloud in accordance with each customer's business needs For example if a higher availability architecture is required to protect customer content the customer may add redundant systems backups locations network uplinks and so on to create a more resilient high availability architecture If restricted access to customer content is required AWS enables the customer to implement access rights management controls both on a systems level and through encryption on a data level To assist customers in designing implementing and operating their own secure AWS environment AWS provides a wide selection of security tools and features customers can use Customers can also use their own security tools and controls including a wide variety of third party security solutions Customers can configure their AWS services to leverage a range of such security features tools and controls to protect their content including sophisticated identity and access management tools security capabilities encryption and network security Examples of steps customers can take to help secure their content include implementing: • Strong password policies enabling MultiFactor Authentication (MFA) assigning appropriate permissions to users and taking robust steps to protect their access keys • Appropriate firewalls and network segmentation encrypting content and properly architecting systems to decrease the risk of data loss and unauthorized access Because customers rather than AWS control these important factors customers retain responsibility for their choices and for security of the content they store or process 
using AWS services or that they connect to their AWS infrastructure, such as the guest operating system, applications on their compute instances, and content stored and processed in AWS storage, databases or other services. AWS provides an advanced set of access, encryption and logging features to help customers manage their content effectively, including AWS Key Management Service (AWS KMS) and AWS CloudTrail. To assist customers in integrating AWS security controls into their existing control frameworks and to help customers design and run security assessments of their organization's use of AWS services, AWS publishes a number of whitepapers relating to security, governance, risk and compliance, and a number of checklists and best practices. Subject to AWS policies regarding testing (see the Penetration Testing page), customers are also free to design and run security assessments according to their own preferences and can request permission to conduct scans of their cloud infrastructure, as long as those scans are limited to the customer's compute instances and do not violate the AWS Acceptable Use Policy. For more information on penetration testing, see the Penetration Testing page. AWS Regions: Where will content be stored? AWS data centers are built in clusters in various Regions. Each of these data center clusters in a given country is referred to as an "AWS Region". Customers have access to a number of AWS Regions around the world, including an Asia Pacific (Tokyo) Region and an Asia Pacific (Osaka) Region. Customers can choose to use one Region, all Regions or any combination of AWS Regions. Figure 2 shows AWS Region locations as of December 2021; for the most current information on AWS Regions, see the Global Infrastructure page. [Figure 2: AWS Regions] AWS customers choose the AWS Region or Regions in which their content and servers are located. This allows customers with geographic-specific requirements to establish environments in a location or locations of their choice. For example, AWS customers in Japan can choose to deploy their AWS services exclusively in one AWS Region, such as the Asia Pacific (Tokyo) Region, and store their content onshore in Japan if this is their preferred location. Customers can use AWS services with the confidence that their data stays in the AWS Region that they select. A small number of AWS services involve the transfer of customer data, for example to develop and improve those services (where customers can opt out of the transfer) or because transfer is an essential part of the service (such as a content delivery service). How can customers select their Region(s)?
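The following paragraphs describe selecting a Region in the console or through an API call and launching resources into an Amazon VPC. As a minimal, hypothetical sketch (the boto3 SDK, CIDR ranges, tag values, and Availability Zone name are assumptions for illustration only), a customer could provision a VPC in the Asia Pacific (Tokyo) Region so that resources launched into it reside in ap-northeast-1:

```python
import boto3

# Hypothetical sketch: provision an isolated VPC in the Asia Pacific (Tokyo)
# Region; compute resources launched into it remain in ap-northeast-1.
ec2 = boto3.client("ec2", region_name="ap-northeast-1")

vpc = ec2.create_vpc(
    CidrBlock="10.0.0.0/16",  # placeholder address range
    TagSpecifications=[
        {
            "ResourceType": "vpc",
            "Tags": [{"Key": "Name", "Value": "tokyo-only-vpc"}],
        }
    ],
)
print(vpc["Vpc"]["VpcId"])

subnet = ec2.create_subnet(
    VpcId=vpc["Vpc"]["VpcId"],
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="ap-northeast-1a",  # an Availability Zone in Tokyo; names vary by account
)
print(subnet["Subnet"]["SubnetId"])
```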
When using the AWS management console or in placing a request through an AWS Application Programming Interface (API) the customer identifies the particular AWS Region(s) where it wants to use AWS services Figure 3 provides an example of the AWS Region selection menu presented to customers when uploading content to an AWS storage service or provisioning compute resources using the AWS Management Console Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 9 Figure 3 Selecting AWS Regions in the AWS Management Console Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 10 Customers can also prescribe the AWS Region to be used for their compute resources by taking advantage of the Amazon Virtual Private Cloud (VPC) capability Amazon VPC lets the customer provision a private isolated section of the AWS Cloud where the customer can launch AWS resources in a virtual network that the customer defines With Amazon VPC customers can define a virtual network topology that closely resembles a traditional network that might operate in their own data center Any compute and other resources launched by the customer into the VPC is located in the AWS Region designated by the customer For example by creating a VPC in the Asia Pacific (Tokyo) Region and providing a link (either a VPN or Direct Connect ) back to the customer's data center all compute resources launched into that VPC would only reside in the Asia Pacific (Tokyo) Region This option can also be leveraged for other AWS Regions Transfer of personal data crossborder In 2016 the European Commission approved and adopted the new General Data Protection Regulation (GDPR) The GDPR replaced the EU Data Protection Directive as well as all local laws relating to it All AWS services comply with the GDPR AWS provides customers with services and resources to help them comply with GDPR requirements that may apply to their operations These include AWS’ adherence to the CISPE code of conduct granular data access controls monitoring and logging tools encryption key management audit capability adherence to IT security standards and AWS C5 attestations For additional information please see the AWS General Data Protection Regulation (GDPR) Center and the Navigating GDPR Compliance on AWS whitepaper When using AWS services customers may choose to transfer content containing personal information crossborder and they need to consider the legal requirements that apply to such transfers AWS provides a Data Processing Addendum that includes the Standard Contractual Clauses 2010/87/EU (often referred to as “Model Clauses”) to AWS customers transferring content containing personal data (as defined in the GDPR) from the EU to a country outside of the European Economic Area With the AWS EU Data Processing Addendum and Model Clauses AWS customers— whether established in Europe or a global company operating in the European Economic Area —can continue to run their global operations using AWS in full compliance with the GDPR The AWS Data Processing Addendum is incorporated in the AWS Service Terms and applies automatically to the extent the GDPR applies to the customer’s processing of personal data on AWS Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 11 Who can access customer content? 
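The subsections that follow describe the controls customers manage themselves, such as credentials, permissions and Multi-Factor Authentication. As a hedged sketch (assuming the boto3 SDK; the password policy thresholds below are placeholder choices, not recommendations from this paper), a customer might enforce an account password policy and flag users who have no MFA device:

```python
import boto3

# Hypothetical sketch: apply a strong account password policy and list IAM
# users without an MFA device, two customer-side access controls mentioned
# in this whitepaper.
iam = boto3.client("iam")

iam.update_account_password_policy(
    MinimumPasswordLength=14,       # placeholder thresholds
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,
    PasswordReusePrevention=24,
)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        mfa = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
        if not mfa:
            print("No MFA device:", user["UserName"])
```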
Customer control over content

AWS is vigilant about customers' privacy, and AWS provides the most flexible and secure cloud computing environment available today. With AWS, customers own their data, customers control the data location, and customers control who has access to it. AWS is transparent about how AWS services process the personal information customers upload to their AWS account (customer data), and AWS provides capabilities that allow customers to encrypt, delete, and monitor the processing of their data. Customers can:

• Determine where their content will be located; for example, the type of storage they use on AWS and the geographic location (by AWS Region) of that storage.

• Control the format, structure, and security of their content, including whether it is masked, anonymized, or encrypted. AWS offers customers options to implement strong encryption for their customer content in transit or at rest, and also provides customers with the option to manage their own encryption keys or use third-party encryption mechanisms of their choice.

• Manage other access controls, such as identity and access management, permissions, and security credentials.

This allows AWS customers to control the entire lifecycle of their content on AWS and manage their content in accordance with their own specific needs, including content classification, access control, retention, and deletion.

AWS access to customer content

AWS makes available to each customer the compute, storage, database, networking, or other services as described on our website. Customers have a number of options to encrypt their content when using the services, including using AWS encryption features (such as AWS KMS), managing their own encryption keys, or using a third-party encryption mechanism of their own choice. AWS prohibits, and AWS systems are designed to prevent, remote access by AWS personnel to customer data for any purpose, including service maintenance, unless such access is requested by the customer, is required to prevent fraud and abuse, or is necessary to comply with law.

Government rights of access

Queries are often raised about the rights of domestic and foreign government agencies to access content held in cloud services. Customers are often confused about issues of data sovereignty, including whether and in what circumstances governments may have access to their content. The local laws that apply in the jurisdiction where the content is located are an important consideration for some customers. However, customers also need to consider whether laws in other jurisdictions may apply to them. Customers should seek advice from their advisors to understand the application of relevant laws to their business and operations.

AWS policy on granting government access

AWS is vigilant about customers' security and does not disclose or move data in response to a request from the US or other government unless legally required to do so to comply with a legally valid and binding order, such as a subpoena or a court order, or as is otherwise required by applicable law. Non-US governmental or regulatory bodies typically must use recognized international processes, such as Mutual Legal Assistance Treaties with the US government, to obtain valid and binding orders. Additionally, AWS notifies customers where practicable before disclosing their content so customers can seek protection from disclosure, unless AWS is legally prohibited from doing so or there is clear indication of illegal conduct in connection with the use of AWS services. For
additional information see the Amazon Information Requests Portal online Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 13 Privacy and data protection in Japan: The Act on the Protection of Personal Information In Japan the primary legislation dealing with data protection is the Act on the Protection of Personal Information (APPI) and its related regulations The APPI1 was most recently amended effective as of 20 December 2020 and a further amendment will come into effect on April 1 2022 In addition multiple guidelines have been issued to date by various government ministries for their respective industries as well as by the Personal Information Protection Commission (PPC) a government data protection authority APPI applies to business operators that provides goods or services in Japan and handle personal information of Japanese residents Unlike many other countries the APPI does not strictly distinguish between a data controller who has control over personal information and the purposes for which it can be used and a data processor who processes information at the direction of and on behalf of a data controller The APPI applies to all business operators (individuals and entities) that handle personal information database The APPI also distinguishes between personal information and personal data Under the APPI personal data is personal information that is organized into database Obligations on business operators vary depending on whether the business operators collect use or provide personal information or personal data AWS appreciates that its services are used in many different contexts for different business purposes and that there may be multiple parties involved in the data lifecycle of personal information included in customer content stored or processed using AWS services For simplicity the guidance included in the table below assumes that in the context of the customer content stored on the AWS services the customer: • Collects personal information from their end users and determines the purpose for which they require and will use the personal information • Has the capacity to control who can access update and use the personal information • Manages the relationship with the individual about whom the personal information relates including by communication with the individual as required to comply with any relevant notification and consent requirements Customers may in fact work with or rely on third parties to discharge these responsibilities but the customer rather than AWS would manage its relationships with third parties We summarize the data protection principles of the APPI in the table below We also discuss aspects of the AWS services relevant to these requirements Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 14 Data protection principle Summary of data protection obligations Considerations Collection notification and purpose of use Business operators are prohibited from using deceptive or other improper means to collect personal information Business operators must obtain the data subject’s consent when collecting sensitive information2 When collecting personal information business operators must promptly either notify the data subject or publicly announce the purpose of use of such personal information The purpose of use must be specified in as much detail as possible and any changes must be reasonable Entities must not use the personal information beyond the scope necessary to achieve the purpose of use unless they 
have obtained the prior consent of the data subject or are allowed to under an exemption in the APPI or other Customer : The customer determines and controls when how and why it collects personal information from individuals and decides whether it will include that personal information in customer content it stores or processes using the AWS services The customer may also need to ensure it notifies or publicly announces the purposes for which it collects that data to the relevant data subjects collects the data from a permitted source and it only uses the data for a permitted purpose As between the customer and AWS the customer has a relationship with the individuals whose personal information the customer stores on AWS and therefore the customer is able to communicate directly with AWS about acquisition and treatment of their personal information The customer rather than AWS also knows the scope of any notifications given to or consents obtained by the customer from such individuals relating to the collection of their personal information AWS: AWS does not collect personal information from individuals whose personal information is included in content a customer stores or processes using AWS and AWS has no contact with those individuals Therefore AWS is unable in these circumstances to communicate with the relevant individuals Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 15 Data protection principle Summary of data protection obligations Considerations applicable law AWS uses customer content only to provide the AWS services selected by each customer to that customer and does not use customer content for any other purposes without the customer’s consent Maintaining the accuracy of personal data Business operators must strive to ensure personal data (personal information constituting part of a Customer : When a customer chooses to store personal information using AWS the customer has control over the quality of that personal information and the customer retains access to Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 16 Data protection principle Summary of data protection obligations Considerations personal information database) is always accurate and up to date and can correct it This means that the customer must take all required steps to ensure that the personal information is accurate complete not misleading and kept uptodate AWS: AWS’s System & Organization Control (SOC) 1 Type 2 report includes controls that provide reasonable assurance that data integrity is maintained through all phases including transmission storage and processing Securing personal data Business operators must take necessary and appropriate security measures for personal data Customer : Customers are responsible for security in the cloud including security of their content (and personal information included in their content) As such customers are required to take appropriate security measures for personal information stored in their customer content Examples of steps customers can take to help secure their content include implementing strong password policies assigning appropriate permissions to users and taking robust steps to protect their access keys as well as appropriate firewalls and network segmentation encrypting content and properly architecting systems to decrease the risk of data loss and unauthorized access AWS: AWS is responsible for managing the security of the underlying cloud environment For a complete list of all the security measures 
built into the core AWS cloud infrastructure and services see the Introduction to AWS Security Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 17 Data protection principle Summary of data protection obligations Considerations whitepaper Customers can validate the security controls in place within the AWS environment through AWS certifications and reports including the AWS System & Organization Control (SOC) 1 2 and 3 reports ISO 27001 27017 and 27018 certifications and PCI DSS Attestation of Compliance Transferring personal information to third parties Business operators generally must obtain consent from the data subjects to transfer their personal data to third parties unless they fall under certain exemptions Customer : The customer should consider whether it is required to obtain any consents from the relevant individuals relating to the transfer of personal information to a third party As between the customer and AWS the customer has a relationship with the individuals whose personal information is stored by the customer on AWS and therefore the customer is able to communicate directly with them about such matters AWS: AWS does not collect personal information from content that a customer stores or processes using AWS and AWS has no contact with individuals whose personal information is stored by the customer on AWS Therefore AWS is not required and is unable in the circumstances to communicate with the relevant individuals to seek any required consents for transfer Storing personal information on a cloud service provider According to section 753 of 2017 Q&As (updated in 2021) issued by the PPC provision of personal data by an entity to a cloud service is not Customer : The customer determines and controls when how and why it collects personal information from individuals and decides whether it will include that personal information in customer content it stores or processes usi ng the AWS services Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 18 Data protection principle Summary of data protection obligations Considerations considered a (i) transfer requiring data subject consent or (ii) entrusting of personal data requiring monitoring unless the cloud service provider handles the personal data stored on its server The customer should consider whether it is required to take any measures under applicable privacy law in connection with the storing of personal information on a cloud service provider According to section 753 of 2017 Q&As (updated in 2021) issued by PPC storing or processing of content using AWS will not be considered a transfer or entrusting of personal data to AWS unless the customer and AWS agree that AWS will handle personal data stored in such content AWS: AWS does not collect personal information from content that a customer stores or processes using AWS and AWS has no contact with individuals whose personal information is stored by the customer on AWS Therefore AWS does not handle personal information stored on its server unless the customer and AWS agree to do so Restrictions on international transfer of personal data Business operators may only transfer personal data to a foreign country when such country has a legal system that is deemed equivalent to the Japanese system for protection of personal information or where the data is transferred to an overseas third party that undertakes adequate Customer : The customer can choose the AWS Region or Regions in which their content will be located and can 
choose to deploy their AWS services exclusively in the Asia Pacific (Tokyo) or Asia Pacific (Osaka) Region if preferred The customer should consider whether it should disclose to individuals the locations in which it stores or processes their personal information and obtain any required consents relating to such locations from the relevant individuals if necessary The Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 19 Data protection principle Summary of data protection obligations Considerations precautionary measures for the protection of personal data Otherwise business operators must obtain the data subject’s consent to perform international data transfers The amendment of APPI that will take effect in April 2022 requires business operators to provide (i) the name of the overseas third party’s country (ii) a summary of foreign data privacy regulations and (iii) precautionary measures taken by the overseas third party before obtaining the consent of the data subject Note that this rule only applies where there is a transfer to an overseas recipient The PPC has suggested in section 124 of its 2017 Q&As (updated in 2021) that storing a personal data on a server in Japan operated by a foreign cloud service provider does not constitute an customer is responsible for ensuring compliance with applicable laws including privacy laws wherever their content is located As between the customer and AWS the customer has a relationship with the individuals whose personal information the customer stores on AWS and therefore the customer is able to communicate directly with them about such matters AWS: AWS enables customers to use AWS services with the confidence that their customer data stays in the AWS Region customers select A small number of AWS services such as a content delivery service AWS is ISO/IEC 27001 certified and offers robust security features to all customers regardless of the geographical Region in which they store their content Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 20 Data protection principle Summary of data protection obligations Considerations international data transfer unless the foreign cloud service provider handles the personal data stored on its server Record keeping and confirmation of transfers of personal data to third parties Business operators must confirm and record certain information prescribed by PPC relating to inbound and outbound transfers of personal data involving third parties Customer : Customers are responsible for confirming and recording certain information prescribed by PPC relating to personal information that is received from or provided to third parties in order to ensure the traceability of such transfers of personal information AWS: AWS cannot confirm or record information relating to transfers of personal information as AWS does not know what personal information (if any) is uploaded by the customer or if the customer transfers a personal information to a third party Disclosure relating to retained personal data Business operators handling retained personal data must make appropriate disclosures regarding how they handle retained personal information normally in a privacy notice For example the following information available to data subjects for the purposes of handling complaints: (i) the business operator’s name; (ii) the purpose of use of retained personal information; (iii) the Customer : The customer is responsible for meeting these disclosure requirements to individuals whose 
personal information the customer is storing on AWS AWS: AWS does not know when a customer chooses to upload content to AWS that may contain personal information AWS also does not acquire personal information from individuals whose personal information is stored in AWS by AWS customers AWS is unable in these circumstances to provide any required information to the relevant individuals Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 21 Data protection principle Summary of data protection obligations Considerations procedures a data subject may use to request the entity to disclose correct and discontinue using the personal information it possesses; and (iv) the business operator’s contact information Disclosure correction and deletion Business operators must disclose retained personal data to data subjects upon their request Business operators must correct incorrect retained personal data if a data subject makes such a demand for correction Business operators may be required to discontinue use of retained personal data if they are found to have violated the purpose of use Customers: When a customer chooses to store content containing retained personal information using AWS the customer has control over the content and retains access and can correct or discontinue use of such retained personal information This means that the customer must take all required steps to ensure that the personal information included in customer content is accurate complete not misleading and kept up to date AWS: AWS does not know what type of content the customer chooses to store in AWS and the customer retains control over how their content is stored used and protected from disclosure The AWS Services provide the customer with controls to enable the Customer to delete content as described in the AWS Documentation Privacy breaches Given that customers maintain control of their content when using AWS customers retain the responsibility to monitor their own environment for privacy breaches and to notify Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 22 regulators and affected individuals as required under applicable law Only the customer can to manage this responsibility For example customers control access keys and determine who is authorized to access their AWS account AWS does not have visibility of access keys or who is and who is not authorized to log into an account Therefore the customer is responsible for monitoring use misuse distribution or loss of access keys The amendment of APPI that will take effect in April 2022 requires business operators to notify individuals in the event of certain material unauthorized access or disclosure of personal information In some jurisdictions it is mandatory to notify individuals or a regulator of unauthorized access to or disclosure of their personal information There are circumstances in which notifying individuals will be the best approach to mitigate risk even if not mandatory The customer determines when it is appropriate or necessary for them to notify individuals and the notification process they will follow Consideration This white paper does not discuss other Japanese privacy laws aside from the APPI that may also be relevant to customers including prefectural ordinances and industry specific requirements The relevant privacy and data protection laws and regulations applicable to individual customers depend on several factors including where a customer conducts business the industry in which they operate 
the type of content they wish to store where or from whom the content originates and where the content will be stored Customers concerned about their Japanese privacy regulatory obligations should first ensure they identify and understand the requirements that apply to them and seek appropriate advice Conclusion For AWS security is always top priority AWS delivers services to millions of active customers including enterprises educational institutions and government agencies in over 190 countries AWS customers include financial services providers and healthcare providers and AWS is trusted with some of their most sensitive information AWS services are designed to give customers flexibility over how they configure and deploy their solutions and how they control their content including where it is stored how it is stored and who has access to it AWS customers can build their own secure applications and store content securely on AWS Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 23 Further reading To help customers further understand how they can address their privacy and data protection requirements customers are encouraged to read the risk compliance and security whitepapers best practices checklists and guidance published on the AWS website This material can be found at: http://awsamazoncom/compliance http://awsamazoncom/security As of the date of this writing specific whitepapers about privacy and data protection considerations are also available for the following countries or Regions: Common Consideration California European Union Germany Australia Hong Kong Malaysia New Zealand Philippines Singapore South Africa AWS also offers training to help customers learn how to design develop and operate available efficient and secure applications on the AWS Cloud and gain proficiency with AWS services and solutions AWS offers free instructional videos selfpaced labs and instructorled classes Further information on AWS training is available at: http://awsamazoncom/training/ AWS certifications certify the technical skills and knowledge associated with the best practices for building secure and reliable cloudbased applications using AWS technology Further information on AWS certifications is available at: http://awsamazoncom/certification/ If you require further information please contact AWS at: https://awsamazoncom/contact us/ or contact your local AWS account representative Amazon Web Services Using AWS in the Context of Japan Privacy Considerations 24 Document Revisions Date Description December 2017 First publication May 2018 Second publication February 2022 Third publication Notes 1 The original text is available at: elawsegovgojp English translation of 2015 version of APPI is available at: Japanese law translation project 2 Under APPI sensitive information is personal information containing descriptions requiring special consideration in handling so as to avoid any unfair discrimination prejudice or other disadvantage to an individual based on a person's race belief social status medical history criminal records or the fact that a person has suffered damage through a criminal offense etc
|
General
|
consultant
|
Best Practices
|
Using_AWS_in_the_Context_of_Malaysian_Privacy_Considerations
|
Using AWS in the Context of Malaysian Privacy Considerations Published April 2014 Updated December 22 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assura nces from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Overview 1 Scope 1 Customer Content: Considerations relevant to privacy and data protection 2 AWS shared responsibility approach to managing cloud security 2 Understanding security OF the cloud 4 Understanding security IN the cloud 4 AWS Regions: Where will content be stored? 5 Selecting AWS Global Regions in the AWS Management Console 6 Transfer of personal data cross border 7 Who can access customer content? 8 Customer control over content 8 AWS access to customer content 8 Government rights of access 8 AWS policy on granting government access 9 Privacy and Data Protection in Malaysia: The PDPA 10 Privacy breaches 14 Other considerat ions 15 Closing remarks 15 Additional resources 15 Further reading 15 Document history 16 Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 1 Overview This doc ument provides information to assist customers who want to use AWS to store or process content containing personal data in the context of key Malaysia privacy considerations and the Personal Data Protection Act 2010 (“ PDPA ”) It will help customers understand: • How AWS services operate including how customers can address security and encrypt their content • The geographic locations where customers can choose to store content and other relevant considerations • The respective roles the customer and AWS each play in managing and securing content stored on AWS services Scope This whitepaper focuses on typical questions asked by AWS customers when they are considering the implications of the PDPA on their use of AWS services to store or process content cont aining personal data There will also be other relevant considerations for each customer to address for example a customer may need to comply with industry specific requirements the laws of other jurisdictions where that customer conducts business or c ontractual commitments a customer makes to a third party This paper is provided solely for informational purposes It is not legal advice and should not be relied on as legal advice As each customer’s requirements will differ AWS strongly encourages cu stomers to obtain appropriate advice on their implementation of privacy and data protection requirements and on applicable laws and other requirements relevant to their business When we refer to content in this paper we mean software (including virtual machine images) data text audio video images and other content that a customer or any end user stores or processes using AWS services For example a customer’s can content include objects that the customer stores using Amazon Simple Storage Servic e (Amazon S3) files stored on an Amazon Elastic Block Store 
(Amazon EBS) volume or the contents of an Amazon DynamoDB database table Such content may but will not necessarily include personal data relating to that customer its end users or third par ties The terms of the AWS Customer Agreement or any other relevant agreement with us governing the use of AWS services apply to customer content Customer content does not include data that a customer provides to us in connection with the creation or ad ministration of its AWS accounts such as a customer’s names phone numbers email addresses and billing information —we refer to this as account information and it is governed by the AWS Privacy Notice Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 2 Customer Content: Considerations relevant to privacy and data protection Storage of content presents all organizations with a number of common practical matters to consider including: • Will the content be secure? • Where will content be stored? • Who will have access to content? • What laws and regulations apply to the content and what is needed to comply with these ? These considerations are not new and are not cloud specific They are relevant to internally hosted and operated systems as well as traditional third party hosted services Each may involve storage of content on third party equipment or on third party premises with that content managed accessed or used by third party personnel When using AWS services each AWS customer maintains ownership and control of their content including control over: • What content they choose to store or process using AWS services • Which AWS services they use with their content • The Region(s) where their content is stored • The format structure and security of their content including whether it is masked anonymized or encrypted • Who has access to their AWS accounts and content and how those access rights are granted managed and revoked Because AWS customers retain ownership and control over their content within the AWS environment they also retain responsibilities rel ating to the security of that content as part of the AWS Shared Responsibility Model This model is fundamental to understanding the respective roles of the customer and AWS in the context of privacy and data protection requirements that may apply to content that customers choose to store or process using AWS services AWS shared responsibility approach to managing cloud security Will customer content be secure? 
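The discussion below explains how responsibility for security is shared between AWS and the customer, and notes that configuration of the AWS-provided security group firewall sits on the customer's side of that split. As a small, hedged illustration of such a customer-side control, the following sketch uses the AWS Tools for PowerShell with a placeholder VPC ID and address range; it is an example only, not a recommended security baseline.

# Hypothetical sketch: a customer-configured security group that only
# admits RDP (TCP 3389) from a single corporate address range.
# The VPC ID and CIDR below are placeholders.
$sgId = New-EC2SecurityGroup -GroupName "admin-rdp-only" `
                             -Description "RDP from corporate range only" `
                             -VpcId "vpc-0123456789abcdef0"

$rdpRule = @{ IpProtocol = "tcp"; FromPort = 3389; ToPort = 3389; IpRanges = "203.0.113.0/24" }
Grant-EC2SecurityGroupIngress -GroupId $sgId -IpPermission @( $rdpRule )

# No other inbound rule is added, so all other inbound traffic is denied by default.

Controls like this, together with patching the guest operating system and managing access credentials, are examples of the "security in the cloud" responsibilities that remain with the customer.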
Moving IT infrastructure to AWS creates a shared responsibility model between the customer and AWS as both the customer and AWS have important roles in the operation and management of security AWS operates manages and controls the components from the host operating system and virtualization layer Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 3 down to the physical security of the facilitie s in which the AWS services operate The customer is responsible for management of the guest operating system (including updates and security patches to the guest operating system) and associated application software as well as the configuration of the AWSprovided security group firewall and other security related features The customer will generally connect to the AWS environment through services the customer acquires from third parties (for example internet service providers) AWS does not provide the se connections and they are therefore part of the customer's area of responsibility Customers should consider the security of these connections and the security responsibilities of such third parties in relation to their systems The respective roles of the customer and AWS in the shared responsibility model are shown below: Figure 1: Shared Responsibility Model What does the shared responsibility model mean for the security of customer content? When eva luating the security of a cloud solution it is important for customers to understand and distinguish between: • Security measures that the cloud service provider (AWS) implements and operates – “security of the cloud” • Security measures that the customer implements and operates related to the security of customer content and applications that make use of AWS services – “security in the cloud” While AWS manages security of the cloud security in the cloud is the responsibility of the customer as customers retain control of what security they choose to implement to protect their own content Amazon Web Services Using AWS in t he Context of Malaysian Privacy Considerations 4 applications systems and networks – no differently than they would for applications in an on site data center Understanding security OF the cloud AWS is responsible f or managing the security of the underlying cloud environment The AWS cloud infrastructure has been architected to be one of the most flexible and secure cloud computing environments available designed to provide optimum availability while providing compl ete customer segregation It provides extremely scalable highly reliable services that enable customers to deploy applications and content quickly and securely at massive global scale if necessary AWS services are content agnostic in that they offer th e same high level of security to all customers regardless of the type of content being stored or the geographical region in which they store their content AWS’s world class highly secure data centers utilize state ofthe art electronic surveillance and multi factor access control systems Data centers are staffed 24x7 by trained security guards and access is authorized strictly on a least privileged basis For a complete list of all the security measures built into the core AWS Cloud infrastructure an d services see Best Practices for Security Identity & Compliance We are vigilant about our customers' security and have implemented sophisticated technical and physical me asures against unauthorized access Customers can validate the security controls in place within the AWS environment through AWS 
certifications and reports These include the AWS System & Organiza tion Control (SOC) 1 2 and 3 reports ISO 27001 27017 27018 and 9001 certifications and PCI DSS compliance reports Our ISO 27018 certification demonstrates that AWS has a system of controls in place that specifically address the privacy protection of customer content These reports and certifications are produced by independent third party auditors and attest to the design and operating effectiveness of AWS security controls AWS compliance certifications and reports can be requested at https://pagesawscloudcom/compliance contact ushtml More information on AWS compliance certifications reports and alignment with best practices and standards can be found at AWS Comp liance Understanding security IN the cloud Customers retain ownership and control of their content when using AWS services Customers rather than AWS determine what content they store or process using AWS services Because it is the customer who decides what content to store or process using AWS services only the customer can determine what level of security is appropriate for the content they store and process using AWS Customers also have complete control over which services they use and whom they em power to access their content and services including what credentials will be required Customers control how they configure their environments and secure their content including whether they encrypt their content (at rest and in transit) and what other security features and tools they use and how they use them AWS does not change customer configuration settings as these settings are determined and controlled by the customer AWS customers have the complete freedom to design their security architecture to meet their compliance needs This is a key difference from traditional hosting Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 5 solutions where the provider decides on the architecture AWS enables and empowers the customer to decide when and how security measures will be implemented in the cloud in accordance with each customer's business needs For example if a higher availability architecture is required to protect customer content the customer may add redundant systems backups locations network uplinks etc to create a more resilient high availability architecture If restricted access to customer content is required AWS enables the customer to implement access rights management controls both on a systems level and through encryption on a data level To assist customers in designing imple menting and operating their own secure AWS environment AWS provides a wide selection of security tools and features customers can use Customers can also use their own security tools and controls including a wide variety of third party security solutions Customers can configure their AWS services to leverage a range of such security features tools and controls to protect their content including sophisticated identity and access management tools security capabilities encryption and network security Examples of steps customers can take to help secure their content include implementing: • Strong password policies enabling Multi Factor Authentication (MFA) assigning appropriate permissions to users and taking robust steps to protect their access keys • Appropriate firewalls and network segmentation encrypting content and properly architecting systems to decrease the risk of data loss and unauthorized access Because customers rather than AWS control these 
important factors customers retain responsibil ity for their choices and for security of the content they store or process using AWS services or that they connect to their AWS infrastructure such as the guest operating system applications on their compute instances and content stored and processed in AWS storage databases or other services AWS provides an advanced set of access encryption and logging features to help customers manage their content effectively including AWS Key Management Service and AWS CloudTrail To assist customers in inte grating AWS security controls into their existing control frameworks and help customers design and execute security assessments of their organization’s use of AWS services AWS publishes a number of whitepapers relating to security governance risk and compliance; and a number of checklists and best practices Customers are also free to design and execute security assessments according to their own preferences and can request permission to conduct scans of their cloud infrastructure as long as those scans are limited to the customer’s compute instances and do not violate the AWS Acceptable Use Policy AWS Regions: Where will content be stored? AWS data centers are built in clusters in various global regions We refer to each of our data center clusters in a given country as an “AWS Region ” Amazon Web Services Using AWS in the Context of Ma laysian Privacy Considerations 6 Customers have access to a number of AWS Regions around the world 1 Customers can choose to use one Region all Regions or a ny combination of AWS Regions For a list of AWS Regions and a real time location map see Global Infrastructure AWS customers choose the AWS Region or Regions in which their content and servers will be located This allows customers with geographic specific requirements to establish environments in a location or locations of their choice For example AWS customers in Malaysia can choose to deploy their AWS services exclusively in one AWS Region such as the Asia Pacific (Singapore) Region and store their content in Singapore if this is their preferred location If the customer makes this choice AWS will not move their content from Singapore without the customer’s consent except as l egally required Customers always retain control of which AWS Region(s) are used to store and process content AWS only stores and processes each customer ’s content in the AWS Region(s) and using the services chosen by the customer and otherwise will no t move customer content without the customer’s consent except as legally required Selecting AWS Global Regions in the AWS Management Console The AWS Management Console gives customers secure login using their AWS or IAM account credentials When using th e AWS management console or in placing a request through an AWS Application Programming Interface (API) the customer identifies the particular AWS Region(s) where it wishes to use AWS services 1 AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 7 The figure below provides an example of the AWS Region sel ection menu presented to customers when uploading content to an AWS storage service or provisioning compute resources using the AWS management console Any compute and other resources launched by the customer will be located in the AWS Region designated by the customer For example 
when a customer chooses the Asia Pacific (Singapore) Region, compute resources such as Amazon EC2 instances or AWS Lambda functions launched in that environment reside only in the Asia Pacific (Singapore) Region. This option can also be leveraged for other AWS Regions.

Transfer of personal data cross-border

In 2016 the European Commission approved and adopted the new General Data Protection Regulation (GDPR). The GDPR replaced the EU Data Protection Directive as well as all local laws relating to it. All AWS services comply with the GDPR. AWS provides customers with services and resources to help them comply with GDPR requirements that may apply to their operations. These include AWS's adherence to the CISPE code of conduct, granular data access controls, monitoring and logging tools, encryption, key management, audit capability, adherence to IT security standards, and AWS's C5 attestations. For additional information, see the AWS General Data Protection Regulation (GDPR) Center and the Navigating GDPR Compliance on AWS whitepaper.

When using AWS services, customers may choose to transfer content containing personal data cross-border, and they will need to consider the legal requirements that apply to such transfers. AWS provides a Data Processing Addendum that includes the Standard Contractual Clauses 2010/87/EU (often referred to as "Model Clauses") to AWS customers transferring content containing personal data (as defined in the GDPR) from the EU to a country outside of the European Economic Area (EEA), such as Singapore. With our EU Data Processing Addendum and Model Clauses, AWS customers, whether established in Europe or global companies operating in the European Economic Area, can continue to run their global operations using AWS in full compliance with the GDPR. The AWS Data Processing Addendum is incorporated in the AWS Service Terms and applies automatically to the extent the GDPR applies to the customer's processing of personal data on AWS.

Who can access customer content?
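The next section describes the identity and access management controls customers use to decide who can reach their content, including AWS Identity and Access Management (IAM). The sketch below is a hedged, hypothetical example of least-privilege access using the AWS Tools for PowerShell; the bucket, policy, and group names are placeholders, and cmdlet names (such as Register-IAMGroupPolicy) should be verified against the installed module version.

# Hypothetical sketch: a least-privilege IAM policy that allows read-only
# access to a single S3 bucket, attached to one group of users.
$policyDocument = @'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-corp-sg-content",
        "arn:aws:s3:::example-corp-sg-content/*"
      ]
    }
  ]
}
'@

$policy = New-IAMPolicy -PolicyName "ReadOnlyCustomerContent" -PolicyDocument $policyDocument
New-IAMGroup -GroupName "content-readers"

# Register-IAMGroupPolicy wraps the AttachGroupPolicy API call.
Register-IAMGroupPolicy -GroupName "content-readers" -PolicyArn $policy.Arn

Only members of the hypothetical content-readers group can read objects in that one bucket; all other access must be granted separately, which reflects the customer's control over who can reach its content.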
Customer control over content Customers using AWS maintain and do not release effective c ontrol over their content within the AWS environment They can: • Determine where their content will be located for example the type of storage they use on AWS and the geographic location (by AWS Region) of that storage • Control the format structure and sec urity of their content including whether it is masked anonymized or encrypted AWS offers customers options to implement strong encryption for their customer content in transit or at rest and also provides customers with the option to manage their own e ncryption keys or use third party encryption mechanisms of their choice • Manage identity and access management controls to their content such as by using AWS Identity and Access Management (IAM) and by setting appropriate permissions and security credenti als to access their AWS environment and content This allows AWS customers to control the entire life cycle of their content on AWS and manage their content in accordance with their own specific needs including content classification access control retention and deletion AWS access to customer content AWS makes available to each customer the compute storage database networking or other services as described on our website Customers have a number of options to encrypt their content when using the services including using AWS encryption features (such as AWS Key Management Service) managing their own encryption keys or using a third party encryption mechanism of their own choice AWS does not access or use customer content without the customer’s consent except as legally required AWS never uses customer content or derives information from it for other purposes such as marketing or advertising Government rights of access Queries are often raised about the rights of domestic and foreign governmen t agencies to access content held in cloud services Customers are often confused about issues of data sovereignty including whether and in what circumstances governments may have access to their content Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 9 The local laws that apply in the jurisdiction wher e the content is located are an important consideration for some customers However customers also need to consider whether laws in other jurisdictions may apply to them Customers should seek advice to understand the application of relevant laws to their business and operations When concerns or questions are raised about the rights of domestic or foreign governments to seek access to content stored in the cloud it is important to understand that relevant government bodies may have rights to issue reques ts for such content under laws that already apply to the customer For example a company doing business in Country X could be subject to a legal request for information even if the content is stored in Country Y Typically a government agency seeking acc ess to the data of an entity will address any request for information directly to that entity rather than to the cloud provider Most countries have legislation that enables law enforcement and government security bodies to seek access to information In f act most countries have processes (including Mutual Legal Assistance Treaties) to enable the transfer of information to other countries in response to appropriate legal requests for information (eg relating to criminal acts) However it is important to remember that each relevant law will contain criteria that must be satisfied in order 
for the relevant law enforcement body to make a valid request For example the government agency seeking access may need to show it has a valid reason for requiring a p arty to provide access to content and may need to obtain a court order or warrant Many countries have data access laws which purport to apply extraterritorially An example of a US law with extra territorial reach that is often mentioned in the context of cloud services is the US Patriot Act The Patriot Act is similar to laws in other developed nations that enable governments to obtain information with respect to investigations relating to international terrorism and other foreign intelligence issues Any request for documents under the Patriot Act requires a court order demonstrating that the request complies with the law including for example that the request is related to legitimate investigations The Patriot Act generally applies to all compan ies with an operation in the US irrespective of where they are incorporated and/or operating globally and irrespective of whether the information is stored in the cloud in an on site data center or in physical records This means that companies headqua rtered or operating outside the United States which also do business in the United States may find they are subject to the Patriot Act by reason of their own business operations AWS policy on granting government access AWS is vigilant about customers' s ecurity and does not disclose or move data in response to a request from the US or other government unless legally required to do so in order to comply with a legally valid and binding order such as a subpoena or a court order or as is otherwise requir ed by applicable law Non US governmental or regulatory bodies typically must use recognized international processes such as Mutual Legal Assistance Treaties with the US government to obtain valid and binding orders Additionally our practice is to notify customers where practicable before disclosing their content so they can seek protection from disclosure unless we are legally prohibited from doing so or there is clear Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 10 indication of illegal conduct in connection with the use of AWS services For a dditional information see the Amazon Information Requests Portal online Privacy and Data Protection in Malaysia: The PDPA This part of the paper discusses aspects of the PDPA relating to data protection The PDPA contains several data protection principles (“Data Protection Principles”) which impose requirements for collecting managing dealing with using disclosing and otherwise handling personal data The PDPA makes a distinction between a “data user ” who processes any personal data or has control or authorizes the processing of any personal data and a “data processor ” who processes personal data solely on behal f of the data user and does not process the personal data for any of its own purposes AWS appreciates that its services are used in many different contexts for different business purposes and that there may be multiple parties involved in the data lifec ycle of personal data included in customer content stored or processed using AWS services For simplicity the guidance in the table below assumes that in the context of customer content stored or processed using AWS services the customer: • Collects perso nal data from its end users or other individuals (data subjects) and determines the purpose for which the customer requires and will use the personal data 
• Has the capacity to control who can access update and use the personal data • Manages the relationshi p with the individual about whom the personal data relates including by communicating with the data subject as required to comply with any relevant disclosure and consent requirements Customers may in fact work with (or rely on) third parties to dischar ge these responsibilities but the customer rather than AWS would manage its relationships with those third parties We summarize the key requirements of the Data Protection Principles in the table below We also discuss aspects of the AWS services relev ant to these requirements Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 11 Data Protection Principle Summary of Data Protection Obligations Considerations General Principle and Notice and Choice Principle Personal data can only be processed once the data subject has given his/her consent Data users should inform the data subject of the purposes for which their personal data is being collected and processed Customer: The customer determines and controls when how and why it collects personal data from individuals and decides whether it will include t hat personal data in customer content it stores or processes using AWS services The customer may also need to ensure it discloses the purposes for which it collects that data to the relevant individuals ; obtains the data from a permitted source ; and that it only uses the data for a permitted purpose As between the customer and AWS the customer has a relationship with the individuals whose personal data the customer stores on AWS and therefore the customer is able to communicate directly with them about collection and treatment of their personal data The customer rather than AWS will also know the scope of any notifications given to or consents obtained by the customer from such individuals relating to the collection use or disclosure of their personal data The customer will know whether it uses AWS services to store or process customer content containing personal data and therefore is best placed to inform individuals that it will use AWS as a service provider if required AWS: AWS does not collect personal data from individuals whose personal data is included in content a customer stores or processes using AWS and AWS has no contact with those individuals Therefore AWS is not required and is unable in the circumstances to communicate with the relevant individuals AWS only uses customer content to provide the AWS services selected by each customer to that customer and does not use customer content for any other purposes Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 12 Data Protection Principle Summary of Data Protection Obligations Considerations Disclosure Principle Personal data should only be disclose d with consent and only for the purposes disclosed to the data subject Customer : The customer determines and controls why it collects personal data what it will be used for who it can be used by and who it is disclosed to The customer should ensure it only does so for permitted purposes If the customer chooses to include personal data in customer content stored in AWS the customer controls the format and structure of its content and how it is protected from disclosure to unauthorized parties including whether it is anonymized or encrypted The customer will know whether it uses the AWS services to store or process customer content containing personal data and therefore is best 
placed to inform individuals that it will use AWS as a service pr ovider if required AWS : AWS only uses customer content to provide the AWS services selected by each customer to that customer and does not use customer content for other purposes Security Principle A data user should take practical steps to protect personal data from loss misuse modification unauthorized or accidental access or disclosure alteration or destruction Customer: Customers are responsible for security in the cloud including security of their content (and personal data included in the ir content) AWS: AWS is responsible for managing the security of the underlying cloud environment For a complete list of all the security measures built into the core AWS cloud infrastructure and services see Best Practices for Security Identity & Compliance Customers can validate the security controls in place within the AWS environment through AWS certifications and reports including the AWS System & Organization Control (SOC) 1 2 and 3 reports ISO 27001 27017 and 27018 and PCI DSS compliance reports Retention Principle Personal data should not be kept longer than necessary for the fulfilment of the purpose for which the personal data was collected Customer: Only the customer knows why personal data included in customer co ntent stored or processed using AWS services was collected and only the customer knows when it is for relevant business purposes The customer should delete or destroy the personal data when no longer needed AWS: AWS services provide the customer with co ntrols to enable the customer to delete content as described in AWS Documentation Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 13 Data Protection Principle Summary of Data Protection Obligations Considerations Data Integrity Principle The data user should take all reasonable steps to ensure that personal data is accurate complete not misleading and kept up todate having regard to the purpose for which the personal data was collected Customer: When a customer chooses to store or process content containing personal data using AWS services the customer has control ove r the quality of that content and the customer retains access to and can correct it This means that the customer should take all required steps to ensure that personal data included in customer content is accurate complete not misleading and kept uptodate AWS: AWS’s SOC 1 Type 2 report includes controls that provide reasonable assurance that data integrity is maintained through all phases including transmission storage and processing Offshoring Principle A data user should not transfer personal data to a place outside Malaysia other than such place as specified by the Minister unless an exception applies Customer: The customer can choose the AWS Region or Regions in which their content will be located and can choose to deploy their AWS services exclusively in a single Region if preferred AWS services are structured so that a customer maintains effective control of customer content regardless of what Region they use for their content The customer should disclose to individuals the locations in which it stores or processes their personal data and obtain any required consents relating to such locations from the relevant individuals if necessary As between the customer and AWS the customer has a relationship with the individuals whose personal dat a the customer stores or processes using AWS services and therefore the customer is able to communicate directly with them 
AWS: AWS only stores and processes each customer's content in the AWS Region(s) and using the services chosen by the customer, and otherwise will not move customer content without the customer's consent, except as legally required. If a customer chooses to store content in more than one Region, or to copy or move content between Regions, that is solely the customer's choice, and the customer will continue to maintain effective control of its content wherever it is stored and processed.
General: AWS is ISO 27001 certified and offers robust security features to all customers, regardless of the geographical Region in which they store their content.

Access Principle
Summary of obligations: A data user should provide a data subject access to their personal data, and they should be able to correct their personal data.
Considerations:
Customer: The customer retains control of content stored or processed using AWS services, including control over how that content is secured and who can access and amend that content. In addition, as between the customer and AWS, the customer has a relationship with the individuals whose personal data is included in customer content stored or processed using AWS services. The customer, rather than AWS, is therefore able to work with relevant individuals to provide them access to, and the ability to correct, personal data included in customer content.
AWS: AWS only uses customer content to provide the AWS services selected by each customer to that customer, and AWS has no contact with the individuals whose personal data is included in content a customer stores or processes using the AWS services. Given this, and the level of control customers enjoy over customer content, AWS is not required, and is unable in the circumstances, to provide such individuals with access to or the ability to correct their personal data.

Data User Registration
Summary of obligations: The PDPA makes it a requirement for specified classes of data users to register with the Personal Data Protection Commissioner as data users.
Considerations:
Customer: The customer should determine whether it falls within any of the specified classes of data users that are required to register.
AWS: AWS does not fall within any of the specified classes of data users that are required to be registered.

Privacy breaches
Given that customers maintain control of their content when using AWS, customers retain the responsibility to monitor their own environment for privacy breaches and to notify regulators and affected individuals as required under applicable law. Only the customer is able to manage this responsibility.
A customer's AWS access keys can be used as an example to help explain why the customer, rather than AWS, is best placed to manage this responsibility. Customers control access keys and determine who is authorised to access their AWS account. AWS does not have visibility of access keys, or of who is and who is not authorized to log into an account. Therefore, the customer is responsible for monitoring use, misuse, distribution, or loss of access keys.
In some jurisdictions it is mandatory to notify individuals or a regulator of unauthorized access to or disclosure of their personal data, and there may be circumstances in which notifying individuals is the best approach in order to mitigate risk, even though it is not mandatory under the applicable law. It is for the customer to determine when it is appropriate or necessary for them to notify individuals, and the notification process they will follow.
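As an illustration of the access key monitoring described above, the following is a minimal sketch using the AWS Tools for PowerShell (AWSPowerShell module) that lists the access keys attached to an IAM user and deactivates any key older than a chosen age. The user name and the 90-day threshold are hypothetical example values, not recommendations; the customer defines its own key rotation policy.

# Minimal sketch, assuming the AWSPowerShell module and credentials with IAM
# permissions are configured. The user name and threshold are hypothetical.
Import-Module AWSPowerShell

$userName  = "example-app-user"
$threshold = (Get-Date).AddDays(-90)

# List the access keys attached to this IAM user.
$keys = Get-IAMAccessKey -UserName $userName

foreach ($key in $keys) {
    if ($key.CreateDate -lt $threshold) {
        Write-Output "Deactivating aged key $($key.AccessKeyId) for $userName"
        # Mark the key inactive rather than deleting it outright, so it can be
        # re-enabled if something unexpectedly depends on it.
        Update-IAMAccessKey -UserName $userName -AccessKeyId $key.AccessKeyId -Status Inactive
    }
}

A customer could extend this kind of check, for example by reviewing when each key was last used, as part of its own breach-monitoring routine.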
Other considerations
This whitepaper does not discuss specific privacy or data protection laws other than the PDPA. Customers should consider the specific requirements that apply to them, including any industry-specific requirements. The relevant privacy and data protection laws and regulations applicable to individual customers will depend on several factors, including where a customer conducts business, the industry in which they operate, the type of content they wish to store, where or from whom the content originates, and where the content will be stored. Customers concerned about their privacy regulatory obligations should first ensure they identify and understand the requirements applying to them and seek appropriate advice.

Closing remarks
At AWS, security is always our top priority. We deliver services to millions of active customers, including enterprises, educational institutions, and government agencies in over 190 countries. Our customers include financial services providers and healthcare providers, and we are trusted with some of their most sensitive information. AWS services are designed to give customers flexibility over how they configure and deploy their solutions, as well as control over their content, including where it is stored, how it is stored, and who has access to it. AWS customers can build their own secure applications and store content securely on AWS.

Additional resources
To help customers further understand how they can address their privacy and data protection requirements, customers are encouraged to read the risk, compliance, and security whitepapers, best practices, checklists, and guidance published on the AWS website. This material can be found at https://aws.amazon.com/compliance and https://aws.amazon.com/security.

Further reading
AWS also offers training to help customers learn how to design, develop, and operate available, efficient, and secure applications on the AWS cloud, and gain proficiency with AWS services and solutions. We offer free instructional videos, self-paced labs, and instructor-led classes. Further information on AWS training is available at https://aws.amazon.com/training/.
AWS certifications certify the technical skills and knowledge associated with best practices for building secure and reliable cloud-based applications using AWS technology. Further information on AWS certifications is available at https://aws.amazon.com/certification/.
If you require further information, contact AWS at https://aws.amazon.com/contact-us/ or contact your local AWS account representative.

Document history
December 2021: Reviewed for technical accuracy
May 2018: Fourth publication
April 2018: Third publication
January 2016: Second publication
April 2014: First publication
|
General
|
consultant
|
Best Practices
|
Using_AWS_in_the_Context_of_NCSC_UKs_Cloud_Security_Principles
|
This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Using AWS in the context of NCSC UK's Cloud Security Principles
October 2016

Table of Contents
Abstract
Scope
Considerations for public sector organisations
Shared Responsibility Environment
Implementing Cloud Security Principles in AWS
Principle 1: Data in transit protection
Principle 2: Asset protection and resilience
Principle 3: Separation between consumers
Principle 4: Governance framework
Principle 5: Operational security
Principle 6: Personnel security
Principle 7: Secure development
Principle 8: Supply chain security
Principle 9: Secure consumer management
Principle 10: Identity and authentication
Principle 11: External interface protection
Principle 12: Secure service administration
Principle 13: Audit information provision to consumers
Principle 14: Secure use of the service by the consumer
Conclusion
Additional Resources
Document Revisions
Appendix – AWS Platform Benefits

Abstract
This whitepaper is intended to assist organisations using Amazon Web Services (AWS) for United Kingdom (UK) OFFICIAL classified workloads in alignment with the National Cyber Security Centre's (NCSC) Cloud Security Principles, published under the Cloud Security Guidance. This document aims to help the reader understand:
• How AWS implements security processes and provides assurance over those processes for each of the Cloud Security Principles
• The role that the customer and AWS play in managing and securing content stored on AWS
• The way AWS services operate, including how customers can address security and risk management using AWS cloud services

Scope
This whitepaper is based around typical questions asked by AWS customers when considering the implications of handling OFFICIAL information in relation to the NCSC Cloud Security Principles. Our intention is to provide you with guidance that you can use to make an informed decision when performing risk assessments to help address common security requirements. This whitepaper is not legal advice for your specific use of AWS; we strongly encourage you to obtain appropriate compliance advice about your specific data privacy and security requirements, as well as applicable laws relevant to your projects and datasets.

Considerations for public sector organisations
NCSC published the Cloud Security Guidance documents for public sector organisations that are considering the use of cloud services for handling OFFICIAL information on 23 April 2014. Under this guidance, HM Government information assets are currently classified into three categories: OFFICIAL, SECRET, and TOP SECRET. Each information asset classification attracts a baseline set of security controls providing appropriate protection against typical threats. NCSC Cloud Security Guidance includes a risk management approach to using cloud services, a summary of the Cloud Security Principles, and guidance on implementation of the Cloud Security Principles.
Additionally, supporting guidance documents are included on recognised standards and definitions, separation requirements for cloud services, and specific guidance on the measures that customers of Infrastructure as a Service (IaaS) offerings should consider taking. This whitepaper provides guidance on how AWS aligns with the Cloud Security Principles and the objectives of these principles as part of NCSC's Cloud Security Guidance.
The legacy Impact Level accreditation scheme has been phased out and is no longer the mechanism used to describe the security properties of a system, including cloud services. Public sector organisations are ultimately responsible for risk management decisions relating to the use of cloud services.

GovUK Digital Marketplace
Amazon Web Services currently provides the services listed on our UK G-Cloud page on the UK Government Digital Marketplace. When using AWS services, customers maintain complete control over their content and are responsible for managing critical content security requirements, including:
• What content they choose to store on AWS
• Which AWS services are used with the content
• In what country that content is stored
• The format and structure of that content, and whether it is masked, anonymised, or encrypted
• Who has access to that content, and how those access rights are granted, managed, and revoked
Because AWS customers retain control over their data, they also retain responsibilities relating to that content as part of the AWS "shared responsibility" model. This shared responsibility model is fundamental to understanding the respective roles of the customer and AWS in the context of the Cloud Security Principles.

Shared Responsibility Environment
Using AWS creates a shared responsibility model between customers and AWS. AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate. In turn, customers assume responsibility for and management of the guest operating system (including updates and security patches) and other associated application software, as well as the configuration of the AWS-provided security group firewall. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services they use, the integration of those services into their IT environments, and applicable laws and regulations. It is possible to enhance security and/or meet more stringent compliance requirements by leveraging technology such as host-based firewalls, host-based intrusion detection/prevention, and encryption. AWS provides tools and information to assist customers in their efforts to account for, and to validate that, controls are operating effectively in their extended IT environment. More information can be found on the AWS Compliance center at http://aws.amazon.com/compliance.
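To make one of the customer-side controls listed above concrete (deciding in which country content is stored), the following minimal sketch uses the AWS Tools for PowerShell (AWSPowerShell module) to create an Amazon S3 bucket in a Region chosen by the customer and then verify the bucket's location. The bucket name is hypothetical, and the Region code shown is purely illustrative; choose whichever Region meets your requirements.

# Minimal sketch, assuming the AWSPowerShell module and credentials are configured.
# The bucket name and Region are hypothetical example values.
Import-Module AWSPowerShell

$bucketName = "example-official-workloads-bucket"

# Create the bucket in a specific Region chosen by the customer.
New-S3Bucket -BucketName $bucketName -Region "eu-west-2"

# Verify where the bucket, and therefore its content, is located.
Get-S3BucketLocation -BucketName $bucketName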
Implementing Cloud Security Principles in AWS
The Cloud Security Guidance published by NCSC lists 14 essential principles to consider when evaluating cloud services and explains why these may be important to the public sector organisation. Cloud service users should decide which of the principles are important, and how much (if any) assurance they require in the implementation of these principles. The 14 Cloud Security Principles, their objectives, and how AWS services can be used to implement these objectives are described below, together with the related assurance approach.

Principle 1: Data in transit protection
Consumer data transiting networks should be adequately protected against tampering (integrity) and eavesdropping (confidentiality). This should be achieved via a combination of:
• Network protection (denying your attacker access to intercept data)
• Encryption (denying your attacker the ability to read data)
Implementation objectives: Consumers should be sufficiently confident that:
• Data in transit is protected between the consumer's end user device and the service
• Data in transit is protected internally within the service
• Data in transit is protected between the service and other services (e.g., where Application Programming Interfaces (APIs) are exposed)
https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-1-data-in-transit-protection

Implementation approach
AWS uses various technologies to enable data in transit protection between the consumer and a service, within each service, and between services. Cloud infrastructure and applications often communicate over public links, such as the Internet, so it is important to protect data in transit when you run applications in the cloud. This involves protecting network traffic between clients and servers, and network traffic between servers. Further information on enabling network security for data protection is provided in the next section.

AWS Network Protection
The AWS network provides protection against network attacks. Procedures and mechanisms are in place to appropriately restrict unauthorized internal and external access to data, and access to customer data is appropriately segregated from other customers. Examples include:
Distributed Denial of Service (DDoS) Attacks: AWS API endpoints are hosted on large, Internet-scale infrastructure and use proprietary DDoS mitigation techniques. Additionally, AWS networks are multi-homed across a number of providers to achieve Internet access diversity.
Man in the Middle (MITM) Attacks: All of the AWS APIs are available via Secure Sockets Layer (SSL)-protected endpoints, which provide server authentication. Amazon EC2 Amazon Machine Images (AMIs) automatically generate new Secure Shell (SSH) host keys on first boot and log them to the instance's console. Customers can then use the secure APIs to call the console and access the host keys before logging into the instance for the first time. Customers can use SSL for all of their interactions with AWS.
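A customer-side complement to these protections is to require TLS for access to content stored in Amazon S3. The following is a minimal sketch using the AWS Tools for PowerShell that applies a bucket policy denying any request made without secure transport. The bucket name is hypothetical, and the policy is a simplified illustration rather than a complete access policy.

# Minimal sketch: deny non-TLS (plain HTTP) access to an S3 bucket.
# Assumes the AWSPowerShell module and appropriate credentials are configured;
# the bucket name is a hypothetical example.
Import-Module AWSPowerShell

$bucketName = "example-official-workloads-bucket"

$policy = @"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::$bucketName",
        "arn:aws:s3:::$bucketName/*"
      ],
      "Condition": { "Bool": { "aws:SecureTransport": "false" } }
    }
  ]
}
"@

# Apply the policy to the bucket.
Write-S3BucketPolicy -BucketName $bucketName -Policy $policy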
Internet Protocol (IP) Spoofing: The AWS-controlled, host-based firewall infrastructure will not permit an instance to send traffic with a source IP or Media Access Control (MAC) address other than its own.
Port Scanning: Unauthorized port scans by Amazon EC2 customers are a violation of the AWS Acceptable Use Policy. Violations of the AWS Acceptable Use Policy are taken seriously, and every reported violation is investigated. Customers can report suspected abuse via the contacts available on our website at http://aws-portal.amazon.com/gp/aws/html-forms-controller/contactus/AWSAbuse. When unauthorized port scanning is detected by AWS, it is stopped and blocked. Port scans of Amazon EC2 instances are generally ineffective because, by default, all inbound ports on Amazon EC2 instances are closed and are only opened by the customer. Customers' strict management of security groups can further mitigate the threat of port scans. Customers can request permission to conduct scans of their cloud infrastructure, as long as they are limited to the customer's instances and do not violate the AWS Acceptable Use Policy. Advance approval for these types of scans can be initiated by submitting a request via the AWS Vulnerability / Penetration Testing Request Form.

Customer Network Protection
Virtual Private Cloud (VPC): A VPC is an isolated portion of the AWS cloud within which customers can deploy Amazon EC2 instances into subnets that segment the VPC's IP address range (as designated by the customer) and isolate Amazon EC2 instances in one subnet from another. Amazon EC2 instances within a VPC are only accessible by a customer via an IPsec Virtual Private Network (VPN) connection that is established to the VPC.
IPsec VPN: An IPsec VPN connection connects a customer's VPC to another network designated by the customer. IPsec is a protocol suite for securing IP communications by authenticating and encrypting each IP packet of a data stream. Amazon VPC customers can create an IPsec VPN connection to their VPC by first establishing an Internet Key Exchange (IKE) security association between their Amazon VPC VPN gateway and another network gateway, using a pre-shared key as the authenticator. Upon establishment, IKE negotiates an ephemeral key to secure future IKE messages. An IKE security association cannot be established unless there is complete agreement among the parameters, including SHA-1 authentication and AES 128-bit encryption. Next, using the IKE ephemeral key, keys are established between the VPN gateway and customer gateway to form an IPsec security association. Traffic between gateways is encrypted and decrypted using this security association. IKE automatically rotates the ephemeral keys used to encrypt traffic within the IPsec security association on a regular basis to ensure confidentiality of communications.
API: Amazon VPC API calls are part of the Amazon EC2 WSDL. All API calls to create and delete VPCs, subnets, VPN gateways, and IPsec VPN connections are signed using an X.509 certificate and an associated private key, or the customer's AWS Secret Access Key. Without access to the customer's Secret Access Key or X.509 certificate, Amazon EC2 API calls cannot be successfully made with that customer's key pair. In addition, API calls can be encrypted with SSL to maintain confidentiality.
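To illustrate how a customer might provision this kind of isolated network, the following minimal sketch uses the AWS Tools for PowerShell to create a VPC and a private subnet. The CIDR ranges and Availability Zone are hypothetical examples; attaching a VPN gateway and customer gateway to complete the IPsec connection would follow, but is omitted here for brevity.

# Minimal sketch: create an isolated VPC and one private subnet.
# Assumes the AWSPowerShell module, credentials, and a default Region are configured;
# the CIDR blocks and Availability Zone name are hypothetical.
Import-Module AWSPowerShell

# Create the VPC with a customer-designated address range.
$vpc = New-EC2Vpc -CidrBlock "10.50.0.0/16"

# Carve out a subnet for instances that should only be reachable over the VPN.
$subnet = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock "10.50.1.0/24" -AvailabilityZone "eu-west-2a"

Write-Output "Created VPC $($vpc.VpcId) with subnet $($subnet.SubnetId)"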
AWS Encryption (Data in transit)
AWS supports both IPsec and SSL/TLS for protection of data in transit. IPsec is a protocol that extends the IP protocol stack, often in network infrastructure, and allows applications on upper layers to communicate securely without modification. SSL/TLS, on the other hand, operates at the session layer, and while there are third-party SSL/TLS wrappers, it often requires support at the application layer as well. For further details on AWS service-specific data in transit security, please refer to the AWS Security Best Practices whitepaper.

Assurance approach
The data in transit protection principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications, among others, are recognised by the European Union Agency for Network and Information Security (ENISA) under the Cloud Certification Schemes. The controls in relation to data in transit protection are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within the Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.

Principle 2: Asset protection and resilience
Consumer data, and the assets storing or processing it, should be protected against physical tampering, loss, damage, or seizure.
https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-2-asset-protection-and-resilience

Implementation approach
The AWS cloud is a globally available platform in which you can choose the geographic region in which your data is located. AWS data centers are built in clusters in various global regions. AWS calls these data center clusters Availability Zones (AZs).

2.1 Physical location and legal jurisdiction
The locations at which consumer data is stored, processed, and managed from must be identified so that organisations can understand the legal circumstances in which their data could be accessed without their consent. Public sector organisations will also need to understand how data handling controls within the service are enforced relative to UK legislation. Inappropriate protection of consumer data could result in legal and regulatory sanction, or reputational damage.
Implementation objectives: Consumers should understand:
• What countries their data will be stored, processed, and managed from, and how this affects their compliance with relevant legislation
• Whether the legal jurisdiction(s) that the service provider operates within are acceptable to them
https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-2-asset-protection-and-resilience

As of October 2016, AWS maintains 38 AZs organized into 14 regions globally. As an AWS customer, you are responsible for carefully selecting the Availability Zones where your systems will reside. You can choose to use one region, all regions, or any combination of regions, using built-in features available within the AWS Management Console. AWS regions and Availability Zones ensure that, if you have location-specific requirements or regional data privacy policies, you can establish and maintain your private AWS environment in the appropriate location. You can choose to replicate and back up content in more than one region; AWS does not move customer data outside the region(s) you configure.
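As an illustration of this kind of Region selection, the following minimal sketch uses the AWS Tools for PowerShell to list available Regions, pin the session to a single chosen Region, and copy an EBS snapshot to a second Region as a customer-initiated backup. The Region codes and snapshot ID are hypothetical example values, and the cmdlet parameters shown reflect common usage of the AWSPowerShell module; exact parameter names can vary between module versions.

# Minimal sketch, assuming the AWSPowerShell module and credentials are configured.
Import-Module AWSPowerShell

# Discover the Regions the EC2 API reports as available to this account.
Get-EC2Region | Select-Object RegionName, Endpoint

# Pin subsequent cmdlets in this session to a single Region chosen by the customer.
Set-DefaultAWSRegion -Region "eu-west-1"

# Replicating content to a second Region is solely the customer's choice, for example
# copying an EBS snapshot (hypothetical ID) from eu-west-1 into eu-central-1.
Copy-EC2Snapshot -SourceRegion "eu-west-1" -SourceSnapshotId "snap-0123456789abcdef0" -Region "eu-central-1" -Description "Cross-Region backup copy (example)"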
Availability Zones are designed for fault isolation. They are connected to multiple Internet Service Providers (ISPs) and different power grids. They are interconnected using high-speed links, so applications can rely on Local Area Network (LAN) connectivity for communication between Availability Zones within the same region.
On March 6, 2015, the AWS data processing addendum, including the Model Clauses, was approved by the group of EU data protection authorities known as the Article 29 Working Party. This approval means that any AWS customer who requires the Model Clauses can now rely on the AWS data processing addendum as providing sufficient contractual commitments to enable international data flows in accordance with the Directive. For more detail on the approval from the Article 29 Working Party, please visit the Luxembourg Data Protection Authority webpage here: http://www.cnpd.public.lu/en/actualites/international/2015/03/AWS/index.html
AWS complies with Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data.
Most countries have data access laws which purport to have extraterritorial application. An example of a US law with extraterritorial reach that is often mentioned in the context of cloud services is the US Patriot Act. The Patriot Act is not dissimilar to laws in many other developed nations that enable governments to obtain information with respect to investigations relating to international terrorism and other foreign intelligence issues. Any request for documents under the Patriot Act requires a court order demonstrating that the request complies with the law, including, for example, that the request is related to legitimate investigations.

Assurance approach
The legal jurisdiction subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 and AICPA SOC 1, SOC 2, and SOC 3 certification programs. These certifications are recognised by the European Union Agency for Network and Information Security (ENISA) under the Cloud Certification Schemes. The controls in relation to legal jurisdiction are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within the Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.
The physical location subprinciple and related processes are not validated independently within AWS compliance programs. Based on the alternatives provided for selection within the Cloud Security Principles guidance, the controls in relation to physical location do not exist within the existing certification programs for them to be validated independently. Our ISO 27001:2013 and ISO 9001:2008 certifications list all the locations in scope of the independent annual audits. AWS uses Service Provider Assertion in respect of region-specific requirements.
2.2 Data centre security
The locations used to provide cloud services need physical protection against unauthorised access, tampering, theft, or reconfiguration of systems. Inadequate protections may result in the disclosure, alteration, or loss of data.
Implementation objectives: Consumers should be confident that the physical security measures employed by the provider are sufficient for their intended use of the service.
https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-2-asset-protection-and-resilience

Implementation approach
Amazon has significant experience in securing, designing, constructing, and operating large-scale data centers. This experience has been applied to the AWS platform and infrastructure. AWS provides data center physical access to approved employees and contractors who have a legitimate business need for such privileges. All individuals are required to present identification and are signed in. Visitors are escorted by authorised staff. When an employee or contractor no longer requires these privileges, his or her access is promptly revoked, even if he or she continues to be an employee of Amazon or AWS. In addition, access is automatically revoked when an employee's record is terminated in Amazon's HR system. Cardholder access to data centers is reviewed quarterly; cardholders marked for removal have their access revoked as part of the quarterly review. Physical access is controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff use multi-factor authentication mechanisms to access data center floors.

Assurance approach
The data centre security subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to data centre security are validated independently at least annually under the certification programs.

2.3 Data at rest protection
Consumer data should be protected when stored on any type of media or storage within a service to ensure that it is not accessible by local unauthorised parties. Without appropriate measures in place, data may be inadvertently disclosed on discarded, lost, or stolen media.
Implementation objectives: Consumers should have sufficient confidence that storage media containing their data is protected from unauthorised access.
https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-2-asset-protection-and-resilience

Implementation approach
As an AWS customer, you have access to various security and data protection features that allow sufficient confidence that data at rest is protected from unauthorised access. One of the most widely used methods to protect data at rest in storage media is encryption.
Within AWS, there are several options for encrypting data, ranging from completely automated AWS encryption solutions (server side) to manual client-side options. Your decision to use a particular encryption model may be based on a variety of factors, including the AWS service(s) being used, your institutional policies, regulatory and business compliance requirements, your technical capability, specific requirements of the data use case, and other factors. There are three different models for how you and/or AWS provide the encryption method and work with the key management infrastructure (KMI), which covers key storage and key management:
• Model A (customer managed): You manage the encryption method and the entire KMI.
• Model B: You manage the encryption method; AWS provides the storage component of the KMI, while you provide the management layer of the KMI.
• Model C (AWS managed): AWS manages the encryption method and the entire KMI.
In addition to the client-side and server-side encryption features built into many AWS services, another common way to protect keys in a KMI is to use dedicated storage and data processing devices that perform cryptographic operations using keys on the devices. These devices, called hardware security modules (HSMs), typically provide tamper evidence or resistance to protect keys from unauthorized use. For customers who choose to use AWS encryption capabilities for controlled datasets, the AWS CloudHSM service is another encryption option within your AWS environment, giving you use of HSMs that are designed and validated to US government standards (NIST FIPS 140-2) for secure key management.
If you want to manage the keys that control encryption of data in AWS services, but do not want to manage the required KMI resources either within or external to AWS, you can leverage the AWS Key Management Service (KMS). AWS Key Management Service is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and it uses HSMs to protect the security of your keys. AWS Key Management Service is integrated with other AWS services to help meet your regulatory and compliance needs. AWS KMS and other AWS services not listed on the Digital Marketplace are available through our partner network. AWS KMS also allows you to implement key creation, rotation, and usage policies. AWS KMS is designed so that access to your master keys is restricted. The service is built on systems that are designed to protect your master keys with extensive hardening techniques, such as never storing plaintext master keys on disk, not persisting them in memory, and limiting which systems can connect to the device. All access to update software on the service is controlled by a multi-level approval process that is audited and reviewed by an independent group within Amazon.
For more information about encryption options within the AWS environment, see Securing Data at Rest with Encryption, as well as the AWS CloudHSM product details page. To learn more about how AWS KMS works, you can read the AWS Key Management Service Whitepaper.
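As a small illustration of Model C, the following sketch uses the AWS Tools for PowerShell to create a KMS-managed key, enable automatic rotation for it, and create an EBS volume encrypted under that key. The key description, volume size, and Availability Zone are hypothetical example values, and the parameters shown reflect common usage of the AWSPowerShell module.

# Minimal sketch: create a KMS key and an EBS volume encrypted with it.
# Assumes the AWSPowerShell module, credentials, and a default Region are configured;
# the description, size, and Availability Zone are hypothetical.
Import-Module AWSPowerShell

# Create a customer master key managed by AWS KMS.
$key = New-KMSKey -Description "Example key for OFFICIAL workload volumes"

# Have KMS rotate the backing key material automatically.
Enable-KMSKeyRotation -KeyId $key.KeyId

# Create an encrypted EBS volume protected by the new key.
New-EC2Volume -Size 100 -AvailabilityZone "eu-west-1a" -Encrypted $true -KmsKeyId $key.Arn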
To learn more about specific data at rest protection features in Amazon S3, Amazon EBS, Amazon RDS, and Amazon Glacier, please refer to the AWS Security Best Practices Whitepaper. For the implementation approach towards physical security controls to secure data at rest, please refer to the details described in Data Centre Security (Section 2.2) of this document.

Assurance approach
The data at rest protection subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to data at rest protection are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within the Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.

2.4 Data sanitisation
The process of provisioning, migrating, and de-provisioning resources should not result in unauthorised access to consumer data. Inadequate sanitisation of data could result in:
• Consumer data being retained by the service provider indefinitely
• Consumer data being accessible to other consumers of the service as resources are reused
• Consumer data being lost or disclosed on discarded, lost, or stolen media
Implementation objectives: Consumers should be sufficiently confident that:
• Their data is erased when resources are moved or re-provisioned, when they leave the service, or when they request it to be erased
• Storage media which has held consumer data is sanitised or securely destroyed at the end of its life
https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-2-asset-protection-and-resilience

Implementation approach
Helping to protect the confidentiality, integrity, and availability of our customers' systems and data is of the utmost importance to AWS, as is maintaining customer trust and confidence. AWS uses techniques described in industry-accepted standards to ensure that data is erased when resources are moved or re-provisioned, when they leave the service, or when you request it to be erased.

AWS Data Erasure
Amazon EBS volumes are presented to you as raw, unformatted block devices that have been wiped prior to being made available for use. Wiping occurs immediately before reuse, as a mandatory process before re-provisioning. If you have procedures requiring that all data be wiped via a specific method, such as those detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization"), you have the ability to do so on Amazon EBS. You should conduct a specialized wipe procedure prior to deleting the volume, for compliance with your established requirements. Similarly, when deletion is requested for an Amazon RDS database instance, the database instance is marked for deletion. An Amazon RDS automation sweeper deletes the instance from the Amazon RDS Storage System. At this point, the instance is no longer accessible to the customer or AWS and, unless the customer requested a 'delete with final snapshot copy', the instance cannot be restored and will not be listed by any of the tools or APIs.
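To show what these customer-initiated deletions look like in practice, the following minimal sketch uses the AWS Tools for PowerShell to delete an Amazon RDS instance while retaining a final snapshot, and to delete an EBS volume after any wipe procedure the customer's policies require has been run. The identifiers are hypothetical examples.

# Minimal sketch: customer-initiated deletion of an RDS instance and an EBS volume.
# Assumes the AWSPowerShell module, credentials, and a default Region are configured;
# the instance, snapshot, and volume identifiers are hypothetical.
Import-Module AWSPowerShell

# Delete an RDS instance but keep a final snapshot copy.
Remove-RDSDBInstance -DBInstanceIdentifier "example-official-db" -FinalDBSnapshotIdentifier "example-official-db-final" -Force

# After running any wipe procedure your policies require inside the attached instance,
# delete the EBS volume itself.
Remove-EC2Volume -VolumeId "vol-0123456789abcdef0" -Force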
Cloud Security Principles October 2016 Page 15 of 47 customer or AWS and unless the cu stomer requested a ‘delete with final snapshot copy’ the instance cannot be restored and will not be listed by any of the tools or APIs AWS Secure Destruction When a storage device has reached the end of its useful life AWS procedures include a decommis sioning process that is designed to prevent customer data from being exposed to unauthorized individuals AWS uses the techniques detailed in DoD 522022 M (“National Industrial Security Program Operating Manual “) or NIST 80088 (“Guidelines for Media Sanitization”) to destroy data as part of the decommissioning process All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry standard practices Assurance approach The data sanitisation subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 and PCIDSS certification programs These certifications are recognised by ENISA under the Cloud Certification Schemes The controls in relation to data sanitisation are validated independently at least annually under the certification programs Based on the alternatives provided for selection within Cloud Security Principles guidance AWS uses Service Provider Assertion in respect of region specific requirements This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 16 of 47 25Equipment disposal Once equipment used to deliver a service reaches the end of its useful life it should be disposed of in a way that does not compromise the security of the service or consumer data stored in the service Implementation objectives Consumers should be sufficiently confident that: •All equipment potentially containing consumer data credentials or configuration information for the service is identified at the end of its life (or prior to being recycled) •Any components containing sensitive data are sanitised removed or destroyed as appropriate •Accounts or credentials specific to redundant equipment are revoked to reduce their value to an attacker https://wwwgovuk/government/publications/imple menting thecloud security principles/implementing thecloud security principles#principle 2asset protection and resilience 25 Equipment disposal Implementation approach Helping to protect the confidentiality integrity and availability of our customers’ systems and data is of the utmost importance to AWS as is maintaining customer trust and confidence AWS uses techniques described in industry accepted standards to ensure that data is erased when resources are moved or re provisioned when they leave the service or when you request it to be erased When a storage device has reached the end of its useful life AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals AWS uses the techniques detailed in DoD 522022M (“National Industrial Security Program Operating Manual “) or NIST 80088 (“Guidelines for Media Sanitization”) to destroy data as part of the decommissioning process All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry standard practices Assurance approach The equipment protection subprinciple and related processes within AWS services are subject to audit at least 
These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to equipment protection are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within the Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.

2.6 Physical resilience and availability
Services have varying levels of resilience, which will affect their ability to operate normally in the event of failures, incidents, or attacks. A service without guarantees of availability may become unavailable, potentially for prolonged periods, with attendant business impacts.
Implementation objectives: Consumers should be sufficiently confident that the availability commitment of the service, including their ability to recover from outages, meets their business needs.
https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-2-asset-protection-and-resilience

Implementation approach
The AWS Resiliency program encompasses the processes and procedures by which AWS identifies, responds to, and recovers from a major event or incident within our environment. This program aims to provide you with sufficient confidence that your business needs for the availability commitment of the service, including the ability to recover from outages, are met. This program builds upon the traditional approach of addressing contingency management, which incorporates elements of business continuity and disaster recovery plans, and expands this to consider critical elements of proactive risk mitigation strategies, such as engineering physically separate Availability Zones (AZs) and continuous infrastructure capacity planning. AWS contingency plans and incident response playbooks are maintained and updated to reflect emerging continuity risks and lessons learned from past incidents. Plans are tested and updated through the due course of business (at least monthly), and the AWS Resiliency plan is reviewed and approved by senior leadership annually.
AWS has identified the critical system components required to maintain the availability of the system and recover service in the event of outage. Critical system components (for example, code bases) are backed up across multiple, isolated locations known as Availability Zones. Each Availability Zone runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable. Common points of failure, like generators and cooling equipment, are not shared across Availability Zones. Additionally, Availability Zones are physically separate and designed such that even extremely uncommon disasters, such as fires, tornados, or flooding, should only affect a single Availability Zone. AWS replicates critical system components across multiple Availability Zones, and authoritative backups are maintained and monitored to ensure successful replication. AWS continuously monitors service usage to project infrastructure needs to support availability commitments and requirements. AWS maintains a capacity planning model to assess infrastructure usage and demands at least monthly, and usually more frequently (e.g., weekly).
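Customers can apply the same multi-Availability-Zone pattern to their own workloads. The following minimal sketch uses the AWS Tools for PowerShell to list the Availability Zones in the current Region and create one subnet per zone in an existing VPC, so that instances can later be spread across physically separate facilities. The VPC ID and CIDR ranges are hypothetical examples.

# Minimal sketch: spread subnets across the Availability Zones of the current Region.
# Assumes the AWSPowerShell module, credentials, and a default Region are configured;
# the VPC ID and CIDR blocks are hypothetical.
Import-Module AWSPowerShell

$vpcId = "vpc-0123456789abcdef0"

# Enumerate the Availability Zones reported for the current Region.
$zones = Get-EC2AvailabilityZone

# Create one subnet per Availability Zone, using non-overlapping CIDR blocks.
$index = 0
foreach ($zone in $zones) {
    $cidr = "10.50.$index.0/24"
    New-EC2Subnet -VpcId $vpcId -CidrBlock $cidr -AvailabilityZone $zone.ZoneName
    $index++
}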
technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 18 of 47 assess infrastructure usage and demands at least monthly and usually more frequently (eg weekly) In addition the AWS capacity planning model supports the planning of future demands to acquire and implement additional resources based upon current resources and forecasted requirements Combined usage of Availability Zones and geographically distributed regions and numerous AWS services features provide customers with capabilities to design and architect resilient applications and platforms AWS customers benefit from the aforementioned resiliency features when the architectures are designed towards multiple failure scenarios Assurance approach The physical resilience and availability subprinciple and related processes are not validated independently within AWS compliance programs Based on the alternatives provided for selection within Cloud Security Principles guidance the controls in relation to physical resilience and availability do not exist within the existing certification programs for them to be validated independently AWS publishes most uptotheminute information on service availability at statusawsamazoncom AWS uses Service Provider Assertion in respect of region specific requirements This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 19 of 47 Separation between consumers Separation between different consumers of the service prevents one malicious or compromised consumer from affecting the service or data of another Some of the important characteristics which affect the strength and implementation of the separation controls are: •The service model (eg IaaS PaaS SaaS ) of the cloud service •The deployment model (eg public private or community cloud) of the cloud service •The level of assurance available in the implementation of separation controls SaaS and P aaS services built upon IaaS offerings may inherit some of the separation properties of the underlying IaaS infrastructure Implementation objectives Consumers should: •Understand the types of consumers with which they share the service or platform •Have confidence that the service provides sufficient separation of their data and service from other consumers of the service •Have confidence that their management of the service is kept separate from other consumers (covered separately as part of Principle 9) https://wwwgovuk/government/publications/implementing thecloud security principles/implementing thecloud security principles#principle3 separation between consumers Principle 3: Separation between consumers Implementation approach Helping to protect the confidential ity integrity and availability of our customers’ systems and data is of the utmost importance to AWS as is maintaining customer trust and confidence Using multiple levels of security AWS aims to provide you confidence that sufficient separation of data and management of the service exists from other consumers of the service Multiple Levels of Security Security within Amazon EC2 is provided on multiple levels: the operating system (OS) of the host platform the virtual instance OS or guest OS firewal ls and signed API calls Each of these items builds on the capabilities 
This helps prevent data contained within Amazon EC2 from being intercepted by unauthorized systems or users, and provides Amazon EC2 instances that are as secure as possible without sacrificing flexibility of configuration.
Packet sniffing by other tenants: Virtual instances are designed to prevent other instances running in promiscuous mode from receiving or "sniffing" traffic that is intended for a different virtual instance. While customers can place interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer and located on the same physical host cannot listen to each other's traffic. While Amazon EC2 does provide protection against one customer inadvertently or maliciously attempting to view another's data, as a standard practice customers can encrypt sensitive traffic. Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically erases every block of storage before making it available for use, which protects one customer's data from being unintentionally exposed to another. Customers can further protect their data using traditional filesystem encryption mechanisms or, in the case of Elastic Block Store (EBS) volumes, enable AWS-managed disk encryption.
Firewall: Amazon EC2 provides a complete firewall solution referred to as a Security Group; this mandatory inbound firewall is configured in a default deny-all mode, and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic. The traffic may be restricted by any combination of protocol, port, and source (an individual IP, a Classless Inter-Domain Routing (CIDR) subnet, or another customer-defined security group). Customers launching instances in a Virtual Private Cloud (VPC) also have access to additional features, such as restricting outbound traffic from an instance. A VPC is an isolated portion of the AWS cloud within which customers can deploy Amazon EC2 instances into subnets that segment the VPC's IP address range (as designated by the customer) and isolate Amazon EC2 instances in one subnet from another. Amazon EC2 instances within a VPC are only accessible by a customer via an IPsec Virtual Private Network (VPN) connection that is established to the VPC.
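To illustrate the default-deny behaviour described above, the following minimal sketch uses the AWS Tools for PowerShell to create a security group in an existing VPC and explicitly open a single inbound port to one administrative CIDR range. The VPC ID, port, and CIDR are hypothetical examples, and the hashtable form of the ingress rule follows the pattern shown in the AWS Tools for PowerShell documentation.

# Minimal sketch: create a security group (no inbound traffic allowed by default)
# and explicitly open one port. Assumes the AWSPowerShell module, credentials, and
# a default Region are configured; identifiers and ranges are hypothetical.
Import-Module AWSPowerShell

$vpcId = "vpc-0123456789abcdef0"

# New security groups allow no inbound traffic until rules are added explicitly.
$sgId = New-EC2SecurityGroup -GroupName "example-rdp-admin" -Description "RDP from the corporate range only" -VpcId $vpcId

# Explicitly allow RDP (TCP 3389) from a single corporate CIDR block.
$rdpRule = @{ IpProtocol = "tcp"; FromPort = 3389; ToPort = 3389; IpRanges = "203.0.113.0/24" }
Grant-EC2SecurityGroupIngress -GroupId $sgId -IpPermission @( $rdpRule )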
Assurance approach
The separation between consumers principle and related processes are not validated independently within AWS compliance programs. Based on the alternatives provided for selection within the Cloud Security Principles guidance, the controls in relation to separation between consumers do not exist within the existing certification programs for them to be validated independently. AWS uses Service Provider Assertion in respect of region-specific requirements.

Principle 4: Governance framework
The service provider should have a security governance framework that coordinates and directs their overall approach to the management of the service and information within it.
Implementation objectives: The consumer has sufficient assurance that the governance framework and processes in place for the service are appropriate for their intended use of it.
https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-4-governance-framework

Implementation approach
AWS's Compliance and Security teams have established an information security framework and policies based on the Control Objectives for Information and related Technology (COBIT) framework, and have effectively integrated the ISO 27001 certifiable framework based on ISO 27002 controls, the American Institute of Certified Public Accountants (AICPA) Trust Services Principles, PCI DSS v3.0, and the National Institute of Standards and Technology (NIST) Publication 800-53 Rev 4 (Recommended Security Controls for Federal Information Systems). AWS maintains the security policy, provides security training to employees, and performs application security reviews. These reviews assess the confidentiality, integrity, and availability of data, as well as conformance to the information security policy.
As part of a globally accepted governance framework, AWS has achieved ISO 27001:2013 certification of our Information Security Management System (ISMS), covering AWS infrastructure, data centers, and many services. ISO 27001/27002 is a widely adopted global security standard that sets out requirements and best practices for a systematic approach to managing company and customer information, based on periodic risk assessments appropriate to ever-changing threat scenarios. In order to achieve the certification, a company must show it has a systematic and ongoing approach to managing information security risks that affect the confidentiality, integrity, and availability of company and customer information. This certification reinforces Amazon's commitment to providing significant information regarding our security controls and practices. AWS's ISO 27001:2013 certification includes all AWS data centers in all regions worldwide, and AWS has established a formal program to maintain the certification.
AWS has an established information security organization managed by the AWS Security team and led by the AWS Chief Information Security Officer (CISO). AWS Security establishes and maintains formal policies and procedures to delineate the minimum standards for logical access on the AWS platform and infrastructure hosts. The policies also identify functional responsibilities for the administration of logical access and security.
in relation to governance framework are validated independently at least annually under the certification programs This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 23 of 47 Operational security The service provider should have processes and procedures in place to ensure the operational security of the service The service will need to be operated and managed securely in order to impede detect or prevent attacks against it https://wwwgovuk/government/publications/implem enting thecloud security principles/implementing thecloud security principles#principle 5operational security Principle 5: Operational security 51 Configuration and change management Implementation approach Software AWS applies a systematic approach to managing change so that changes to customer impacting services are reviewed tested approved and well communicated Change management (CM) processes are based on Amazon change management guideli nes and tailored to the specifics of each AWS service These processes are documented and communicated to the necessary personnel by service team management The goal of AWS’ change management process is to prevent unintended service disruptions and maintain the integrity of service to the customer Change details are documented in Amazon’s CM workflow tool or another change management or deployment tool Changes deployed into production environments are: • Reviewed: peer reviews of the technical aspects of a change • Tested: when applied will behave as expected and not adversely impact performance • Approved: to provide appropriate oversight and understanding of business impact from service owners (management) Changes are typically pushed into production in a phased deployment starting with lowest impact sites Deployments are closely monitored so impact can be evaluated Service owners have a number of configurable metrics that measure the health of the service’s upstream dependencies These metrics are closely monitored with thresholds and alarming in place (eg latency availability fatal errors CPU utilization etc) Rollback procedures are documented in the Change Management (CM) ticket or other change management tool When p ossible changes are scheduled during regular change windows Emergency changes to production systems that require deviations from standard change management procedures are associated with an incident and are logged and approved as appropriate Infrastructure This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 24 of 47 AWS internally developed configuration management software is installed when new hardware is provisioned These tools are run on all hosts to validate that they are configured and software is installed in a standard manner based on host classes and updated regularly Only approved Systems Engineers and additional parties authorized through a permissions service may log in to the central configuration management servers Emergency nonroutine and other configuration changes to existing AWS infrastructure are authorized logged tested approved and documented in accordance with industry norms for similar systems Updates to AWS infrastructure are done in such a manner that in the vast 
majority of cases they will not impact the customer and their service use AWS communicates with customers either via email or through the AWS Service Health Dashboard (http://statusawsamazoncom ) when service use may be adversely affected Assurance approach The configuration and change management s ubprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 AICPA SOC 1 SOC 2 SOC 3 and PCIDSS certification programs These certifications are recognised by ENISA under the Cloud Certification Schemes The controls in relation to configuration and change management are validated independently at least annually under the certification programs This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 25 of 47 52 Vulnerability management Implementation approach Amazon Web Services is responsible for protecting the global infrastructure that runs all of the services offered in the AWS cloud Protecting this infrastructure is AWS’s number one priority AWS Security regularly scans all Internet facing service endpoint IP addresses for vulnerabilities (these scans do not include customer instances) AWS Security notifies the appropriate parties to remediate many identified vulnerabilities In addition external vulnerability threat assessments are performed regularly by independent security firms Findings and recommendations resulting from these assessments are categorized and delivered to AWS leadership These scans are done in a manner for the health and viability of the underlying AWS infrastructure and are not meant to replace the customer’s own vulnerability scans required to meet their specific compliance requirements Customers can request permission to conduct scans of their cloud infrastructure as long as they are limited to the customer’s instances and do not violate the AWS Acceptable Use Policy Advance approval for these types of scans can be 52 Vulnerability management Occasionally vulnerabilities will be discovered which if left unmitigated will pose an unacceptable risk to the service Robust vulnerabi lity management processes are required to identify triage and mitigate vulnerabilities Services which do not have effective vulnerability management processes will quickly become vulnerable to attack leaving them at risk of exploitation using publicly known methods and tools Implementation obj ectives Consumers should have confidence that: • Potential new threats vulnerabilities or exploitation techniques which could affect the service are assessed and corrective action is taken • Relevant sources of inform ation relating to threat vulnerability and exploitation technique information are monitored by the service provider • The severity of threats and vulnerabilities are considered within the context of the service and this information is used to prioritise implementation of mitigations • Known vulnerabilities within the service are tracked until suitable mitigations have been deployed through a suitable change management process • Service provider timescales for implementing mitigations to vulnerabilities found within the service are made available to them https://wwwgovuk/government/publications/implementing thecloud security principles/implementing thecloud security principles#principle5 operational security This paper has been archived For the latest 
technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 initiated by submitting a request via the AWS Vulnerability / Penetration Testing Request Form In addition the AWS control environment is subject to regular internal and external risk assessments AWS engages with external certifying bodies and independent auditors to review and test the AWS overall control environment Assurance approach The vulnerability management sub principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 AICPA SOC 1 SOC 2 SOC 3 and PCIDSS certification progr ams These certifications are recognised by ENISA under the Cloud Certification Schemes The controls in relation to vulnerability management are validated independently at least annually under the certification programs 53 Protective monitoring Implementation approach Systems within AWS are extensively instrumented to monitor key operational and security metrics Alarms are configured to automatically notify operations and management personnel when early warning thresholds are crossed on key metrics When a threshold is crossed the AWS incident response process is initiated The Amazon Incident Response team employs industry standard diagnostic procedures to drive resolution during business impacting events Staff operates 24x7x365 coverage to detect incidents and manage the impact to resolution AWS security monitoring tools help identify several types of denial of service (DoS) attacks including distributed flooding and software/logic attacks When DoS attacks are identified the AWS incident response process is initiated In addition to the DoS prevention tools redundant Page 26 of 47 53 Protective monitoring Effective protective monitoring allows a service provider to detect and respond to attempted and successful attacks misuse and malfunction A service which does not effectively monitor for attacks and misuse will be unlikely to detect attacks (both successful and unsuccessful) and will be unable to quickly respond to potential compromises of consumer environments and data Implementation objectives Consumers should have confidence that: • Events generated in service components required to support effective identification of suspicious activity are collected and fed into an analysis system • Effective analysis systems are in place to identify and prioritise indications of potential malicious activity https://wwwgovuk/government/publications/implement ingthecloud security principles/implementing thecloud security principles#principle 5operational security This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 27 of 47 54Incident management An incident management process allows a service provider to respond to a wide range of unexpected events that affect the delivery of the service to consumers Unless carefully preplanned incident management processes are in place poor decisions are likely to be made when incidents do occur Implementation objectives Consumers should have confidence that: •Incident management processes are in place for the service and are enacted in response to security incidents •Predefined processes are in place for responding to common types of incident and attack 
•A defined process and contact route exists for reporting of security incidents by consum ers and external entities •Security incidents of relevance to them will be reported to them in acceptable timescales and format https://wwwgovuk/government/publications/implemen tingthecloud security principles/implementing the cloud security principles#principle5 operational security telecommunication providers at each region as well as additional capacity protect against the possibility of DoS attacks Assurance approach The protective monitoring subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 AICPA SOC 1 SOC 2 SOC 3 and PCIDSS certification programs These certifications are recognised by ENISA under the Cloud Certifica tion Schemes The controls in relation to protective monitoring are validated independently at least annually under the certification programs 54 Incident management Implementation approach AWS has implemented a formal documented incident response policy and program The policy addresses purpose scope roles responsibilities and management commitment AWS utilizes a three phased approach to manage incidents: 1 Activation and Notification Phase: Incidents for AWS begin with the detection of an event This can come from several sources including: a Metrics and alarms AWS maintains an exceptional situational awareness capability most issues are rapidly detected from 24x7x365 monitoring and alarming of real time metrics and service dashboards The majority of incidents are detected in this manner AWS This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 28 of 47 utilizes early indicator alarms to proactively identify issues that may ultimately impact Customers b Trouble ticket entered by an AWS employee c Calls to the 24X7X365 technical support hotline If the event meets incident criteria then the relevant oncall support engineer will start an engagement utilizing AWS Event Management Tool system to start the engagement and page relevant program resolvers (eg Security team) The resolvers will perform an analysis of the incident to determine if additional resolvers should be engaged and to determine the approximate root cause 2 Recovery Phase the relevant resolvers will perform break fix to address the incident Once troubleshooting break fix and affected components are addressed the call leader will assign next steps in terms of follow up documentation and follow up actions and end the call engagement 3 Reconstitution Phase Once the relevant fix activities are complete the call leader will declar e that the recovery phase is complete Post mortem and deep root cause analysis of the incident will be assigned to the relevant team The results of the post mortem will be reviewed by relevant senior management and relevant actions such as design changes etc will be captured in a Correction of Errors (COE) document and tracked to completion In addition to the internal communication mechanisms detailed above AWS has also implemented various methods of external communication to support its customer base and community Mechanisms are in place to allow the customer support team to be notified of operational issues that impact the customer experience A "Service Health Dashboard" is available and maintained by the customer support team to alert customers 
to any issues that may be of broad impact Assurance ap proach The incident management subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 AICPA SOC 1 SOC 2 SOC 3 and PCIDSS certification programs These certifications are recognised by ENISA under the Cloud Certification Schemes The controls in relation to incident management are validated independently at least annually under the certification programs This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 29 of 47 Personnel security Consumers should be content with the level of security screening conducted on service provider staff with access to their information or with ability to affect their service Implementation objectives Service provider staff should be subject to personnel security screening and security education for their role Personnel within a cloud service provider with access to consumer data and systems need to be trustwort hy Service providers need to make clear how they screen and manage personnel within any privileged roles Personnel in those roles should understand their responsibilities and receive regular security training More thorough screening supported by adequa te training reduces the likelihood of accidental or malicious compromise of consumer data by service provider personnel https://wwwgovuk/government/publications/implem enting thecloud security principles/implementing thecloud security principles#principle 6personnel security Principle 6: Personnel security Implementation approach To ensure you are confident with the level of personnel checks AWS conducts criminal background checks as permitted by applicable law as part of preemployment screening practices for employees commensurate with the employee’s position and level of access to AWS facilities As part of the onboarding process all personnel supporting AWS systems and devices sign a nondisclosure agreement prior to being granted access Additionally as part of orientation personnel are required to read and accept the Acceptable Use Policy and the Amazon Code of Business Conduct and Ethics (Code of Conduct) Policy AWS maintains employee training programs to promote awareness of AWS information security requirements Every employee is provided with the Company’s Code of Business Conduct and Ethics and completes periodic Information Security training which requires an acknowledgement to complete Compliance audits are periodically performed to validate that employees understand and follow the estab lished policies Assurance approach The personnel security principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 AICPA SOC 1 SOC 2 SOC 3 and PCIDSS certification programs These certifications are recognised by ENISA under the Cloud Certification Schemes The controls in relation to personnel security are validated independently at least annually under the certification programs Based on the alternatives provided for This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 30 of 47 Secure deve lopment Services should be designed and developed to identify and mitigate 
threats to their security Services which are not designed securely may be vulnerable to security issues which could compromise consumer data cause loss of service or enable other malicious activity Implementation objectives Consumers should be content with the level of security screening conducted on service provider staff with access to their information or with ability to affect their service https://wwwgovuk/government/publications/implem enting thecloud security principles/implementing thecloud security principles#principle 7secure development selection within Cloud Security Principles guidance AWS uses Service Provider Assertion in respect of region specific requirements Principle 7: Secure development Implementation approach AWS’ development process follows secure software development best practices which includ e formal design reviews by the AWS Security Team threat modeling and completion of a risk assessment Static code analysis tools are run as a part of the standard build process and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts Our security risk assessment reviews begin during the design phase and the engagement lasts through launch to ongoing operations In addition refer to ISO 27001:2013 standard Annex A domain 125 for further detail s AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard Assurance approach The secure development principle and related processes within AWS services are subject to audit at least an nually under ISO 27001:2013 AICPA SOC 1 SOC 2 SOC 3 and PCIDSS certification programs These certifications are recognised by ENISA under the Cloud Certification Schemes The controls in relation to secure development are validated independently at least annually under the certification programs This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Usin g AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Principle 8: Supply chain security Implementation approach In alignment with ISO 27001 standards AWS hardware assets are assigned an owner and tracked and monitored by the AWS personnel with AWS proprietary inventory management tools AWS procurement and supply chain teams maintain relationships with all AWS suppliers Personnel security requirements for thirdparty providers supporting AWS systems and devices are established in a Mutual NonDisclosure Agreement between AWS’ parent organization Amazoncom and the respective third party provider The Amazon Legal Counsel and the AWS Procurement team define AWS third party provider personnel security requirements in contract agreements with the third party provider All persons working with AWS information must at a minimum meet the screening process for preemployment background checks and sign a Non Disclosure Agreement (NDA) prior to being granted access to AWS information Refer to ISO 27001 standa rds; Annex A domain 71 for additional details AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard Assurance approach The supply chain security principle and related processes within AWS services are subject to audit at least annually Supply chain security The service provider should ensure that its supply chain satisfactorily supports all of the security principles that the service claims to 
implement Cloud s ervices often rely upon third party products and services Those third parties can have an impact on the overall security of the services If this principle is not implemented then it is possible that supply chain compromise can undermine the security of the service and affect the implementation of other security principles Implementation objectives The consumer understands and accepts: • How their information is shared with or accessible by third party suppliers and their supply chains • How the service provider’s procurement processes place security requirements on third party suppliers and delivery partners • How the service provider manages security risks from third party suppliers and delivery partners • How the service provider manages the conformance of their suppliers with security requirements • How the service provider verifies that hardware and software used in the service are genuine and have not been tampered with https://wwwgovuk/government/publications/implem enting thecloud security principles/implementing thecloud security principles#principle 8supply chain security Page 31 of 47 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 under ISO 27001:2013 AICPA SOC 1 SOC 2 SOC 3 and PCIDSS certification programs These certifications are recognized by ENISA under the Cloud Certification Schemes The controls in relation to supply chain security are validated independently at least annually under the certification programs Principle 9: Secure consumer management Implementation approach AWS Identity and Access Management (IAM) provides you with controls and features to provide confidence that authenticated and authorised users have access to specified services and interfaces AWS IAM allows you to create multiple users and manage the permissions for each of these users within your AWS Account A user is an identity (within an AWS Account) with unique security credentials that can be used to access AWS Services AWS IAM eliminates the need to share passwords or keys and makes it easy to enable or disable a user’s access as appropriate AWS IAM enables you to implement security best practices such as least privileged by granting unique Page 32 of 47 Secure consumer management Consumers should be provided with the tools required to help them securely manage their services Management interfaces and procedures are a vital security barrier in preventing unauthorised people accessing and altering consumers’ resources applications and data 91 Authentication of consumers to management interfaces and within support channels In order to maintain a secure service consumers need to be secure ly authenticated before being allowed to perform management activities report faults or request changes to the service These activities may be conducted through a service management web portal or through other support channels (such as telephone or emai l) and are likely to facilitate functions such as provisioning new service elements managing user accounts and managing consumer data It is important that service providers ensure any management requests which could have a security impact are performed over secure and authenticated channels If consumers are not strongly authenticated then an attacker posing as them could perform privileged actions undermining the security of their service or data 
Implementation objectives

The consumer:
• Has sufficient confidence that only authorised individuals from the consumer organisation are able to authenticate to and access management interfaces for the service (Principle 10 should be used to assess the risks of different approaches to meet this objective)
• Has sufficient confidence that only authorised individuals from the consumer organisation are able to perform actions affecting the consumer's service through support channels

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-9-secure-consumer-management

credentials to every user within your AWS Account, and only granting permission to access the AWS services and resources required for the users to perform their jobs. AWS IAM is secure by default; new users have no access to AWS until permissions are explicitly granted. AWS IAM is also integrated with the AWS Marketplace, so that you can control who in your organization can subscribe to the software and services offered in the Marketplace. Since subscribing to certain software in the Marketplace launches an EC2 instance to run the software, this is an important access control feature. Using AWS IAM to control access to the AWS Marketplace also enables AWS Account owners to have fine-grained control over usage and software costs.

AWS IAM enables you to minimize the use of your AWS Account credentials. Once you create AWS IAM user accounts, all interactions with AWS services and resources should occur with AWS IAM user security credentials. More information about AWS IAM is available on the AWS website: http://aws.amazon.com/iam/

Delegate API Access to AWS Services Using IAM Roles

AWS supports a very important and powerful use case with AWS Identity and Access Management (IAM) roles in combination with IAM users to enable cross-account API access, or to delegate API access within an account. This functionality gives better control and simplifies access management when managing services and resources across multiple AWS accounts. You can enable cross-account API access, or delegate API access within an account or across multiple accounts, without having to share long-term security credentials. When you assume an IAM role you get a set of temporary security credentials that have the permissions associated with the role. You use these temporary security credentials instead of your long-term security credentials in calls to AWS services. Users interact with the service with the permissions granted to the IAM role assumed. This reduces the potential attack surface area by having fewer user credentials to create and manage, and users don't have to remember multiple passwords.

Assurance approach

The secure consumer management sub-principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3 and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to secure consumer management are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.
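To make the "secure by default" and least-privilege behaviour described above concrete, the following is a minimal sketch using the AWS SDK for Python (boto3). The user name, policy name and bucket ARN are illustrative assumptions rather than values from this paper.

"""Minimal sketch of IAM's "secure by default" behaviour: a newly created user
has no permissions until a policy is explicitly attached. Names are hypothetical."""
import json
import boto3

iam = boto3.client("iam")

# A new user starts with no access to any AWS service.
iam.create_user(UserName="report-reader")

# Grant only the permissions this user needs (least privilege): read-only
# access to a single, hypothetical S3 bucket.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports-bucket",
            "arn:aws:s3:::example-reports-bucket/*",
        ],
    }],
}
iam.put_user_policy(
    UserName="report-reader",
    PolicyName="ReadReportsOnly",
    PolicyDocument=json.dumps(least_privilege_policy),
)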
9.2 Separation and access control within management interfaces

Many cloud services are managed via web applications or APIs. These interfaces are a key part of the service's security. If consumers are not adequately separated within management interfaces, then one consumer may be able to affect the service or modify data belonging to another.

Implementation objectives

The consumer:
• Has sufficient confidence that other consumers cannot access, modify or otherwise affect their service management
• Can manage the risks of their own privileged access, e.g. through the 'principle of least privilege', providing the ability to constrain permissions given to consumer administrators
• Understands how management interfaces are protected (see Principle 11) and what functionality is available via those interfaces

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-9-secure-consumer-management

Implementation approach

API calls to launch and terminate instances, change firewall parameters and perform other functions are all signed by your Amazon Secret Access Key, which could be either the AWS Account's Secret Access Key or the Secret Access Key of a user created with AWS IAM. Without access to your Secret Access Key, Amazon EC2 API calls cannot be made on your behalf. In addition, API calls can be encrypted with SSL to maintain confidentiality; Amazon recommends always using SSL-protected API endpoints. AWS IAM also enables you to further control what APIs a user has permission to call to manage a specific resource.

Cross-account access for better identity management

In AWS, assuming a role is a security mechanism that enables an administrator to assign policies that grant permissions to perform actions on AWS resources. Unlike with a user account, you don't sign in to a role. Instead, you are already signed in as a user, and then you switch to the role, temporarily giving up your original user permissions and assuming the permissions of the role. When you are done using the role, you revert to your user's permissions again. As documented in the IAM User Guide, an administrator creates a role in an account with resources to be managed, and then specifies the AWS account IDs that are trusted to use the role. The administrators of the trusted accounts then grant permissions to specific users who can switch to the role. Delegating access through roles this way can help you improve your security posture by simplifying the management of credentials. Instead of having to provide your users with sign-in credentials for every account that they need to access, users only need one set of sign-in credentials. This leads to a reduction in the potential attack surface area by having fewer user credentials that you have to create and manage, and your users don't have to remember multiple passwords. This feature can be used to help improve security within a single account.
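The role-switching pattern described above can be exercised programmatically. Below is a minimal sketch using the AWS SDK for Python (boto3); the role ARN, session name and the follow-on EC2 call are illustrative assumptions rather than values taken from this paper, and the caller is assumed to already have permission to assume the role.

"""Minimal sketch of delegating access with an IAM role: a signed-in identity
assumes a role and receives temporary security credentials instead of sharing
long-term keys. The role ARN and session name are hypothetical."""
import boto3

sts = boto3.client("sts")

# Exchange the caller's long-term credentials for short-lived credentials
# scoped to the role's permissions (here, one hour).
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/AuditAccess",
    RoleSessionName="audit-session",
    DurationSeconds=3600,
)
creds = assumed["Credentials"]

# Use the temporary credentials for subsequent API calls; they expire
# automatically, so there is nothing long-lived to rotate or revoke.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(ec2.describe_instances()["Reservations"])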
When you create a typical user, you give that user permissions to access all of the resources needed to do the job, even the most sensitive and rarely accessed resources. Ideally, a user shouldn't have any access to the sensitive and critical resources until actually needed, to keep to the security principle of "least access". The ability to delegate permissions to a role and allow a user to switch to the role solves this dilemma. Grant the user only those permissions that allow access to the normal day-to-day managed resources, and not to the sensitive resources. Instead, grant to a role the permissions to access sensitive resources. The user can switch to the role when needing to use those resources, and then switch right back to their user account. This feature helps reduce the attack surface area.

Assurance approach

The separation and access control within management interfaces sub-principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3 and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to separation and access control within management interfaces are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.

Identity and authentication

Consumer and service provider access to all service interfaces should be constrained to authenticated and authorised individuals. All cloud services will have some requirement to identify and authenticate users wishing to access service interfaces. Weak authentication or access control may allow unauthorised changes to a consumer's service, theft or modification of data, or denial of service.

Implementation objectives

Consumers should have sufficient confidence that identity and authentication controls ensure users are authorised to access specific interfaces.

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-10-identity-and-authentication

Principle 10: Identity and authentication

Implementation approach

AWS provides a number of ways for you to identify users and securely access your AWS Account. A complete list of credentials supported by AWS can be found on the Security Credentials page under 'Your Account'. AWS also provides additional security options that enable you to further protect your AWS Account and control access: AWS Identity and Access Management (AWS IAM), key management and rotation, temporary security credentials, and multi-factor authentication (MFA).

AWS IAM enables you to minimize the use of your AWS Account credentials. Once you create AWS IAM user accounts, all interactions with AWS services and resources should occur with AWS IAM user security credentials. More information about AWS IAM is available on the AWS website: http://aws.amazon.com/iam/

Host operating system: Administrators with a business need to access the management plane are required to use multi-factor authentication to gain access to purpose-built administration hosts.
These administrative hosts are systems that are specifically designed, built, configured and hardened to protect the management plane of the cloud. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems can be revoked.

Guest operating system: Virtual instances are completely controlled by you, the customer. You have full root access or administrative control over accounts, services and applications. AWS does not have any access rights to your instances or the guest OS. AWS recommends a base set of security best practices, including disabling password-only access to your guests and utilizing some form of multi-factor authentication to gain access to your instances (or at a minimum certificate-based SSH Version 2 access). Additionally, you should employ a privilege escalation mechanism with logging on a per-user basis. For example, if the guest OS is Linux, after hardening your instance you should utilize certificate-based SSHv2 to access the virtual instance, disable remote root login, use command-line logging, and use 'sudo' for privilege escalation. You should generate your own key pairs in order to guarantee that they are unique and not shared with other customers or with AWS.

AWS also supports the use of the Secure Shell (SSH) network protocol to enable you to log in securely to the EC2 instances. Authentication for SSH used with AWS is via a public/private key pair to reduce the risk of unauthorized access to your instance. You can also connect remotely to your Windows instances using Remote Desktop Protocol (RDP) by utilizing an RDP certificate generated for your instance.

AWS IAM enables you to implement security best practices such as least privilege by granting unique credentials to every user within your AWS Account, and only granting permission to access the AWS services and resources required for the users to perform their jobs. AWS IAM is secure by default; new users have no access to AWS until permissions are explicitly granted. AWS IAM is also integrated with the AWS Marketplace, so that you can control who in your organization can subscribe to the software and services offered in the Marketplace. Since subscribing to certain software in the Marketplace launches an EC2 instance to run the software, this is an important access control feature. Using AWS IAM to control access to the AWS Marketplace also enables AWS Account owners to have fine-grained control over usage and software costs.

Assurance approach

The identity and authentication principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3 and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to identity and authentication are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.
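Building on the key-pair guidance under Principle 10 above, the following is a minimal sketch using the AWS SDK for Python (boto3) that imports a locally generated public key so the private key never leaves your own environment. The file path and key-pair name are illustrative assumptions, and the key pair is assumed to have been generated beforehand (for example with ssh-keygen).

"""Minimal sketch: generate your own key pair locally and import only the
public half, so the private key is never shared with AWS or other customers.
Paths and the key-pair name are hypothetical."""
import boto3

ec2 = boto3.client("ec2")

# Read the locally generated public key (the private key stays on your own
# workstation or key store).
with open("my-ec2-key.pub", "rb") as f:
    public_key = f.read()

# Register the public key with EC2 so new instances can be launched with it.
response = ec2.import_key_pair(
    KeyName="my-ec2-key",
    PublicKeyMaterial=public_key,
)
print(response["KeyFingerprint"])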
Principle 11: External interface protection

All external or less trusted interfaces of the service should be identified and have appropriate protections to defend against attacks through them. If an interface is exposed to consumers or outsiders and it is not sufficiently robust, then it could be subverted by attackers in order to gain access to the service or data within it. If the interfaces exposed include private interfaces (such as management interfaces), then the impact may be more significant. Consumers can use different models to connect to cloud services, which expose their enterprise systems to varying levels of risk.

Implementation objectives
• The consumer understands how to safely connect to the service whilst minimising risk to the consumer's systems
• The consumer understands what physical and logical interfaces their information is available from
• The consumer has sufficient confidence that protections are in place to control access to their data
• The consumer has sufficient confidence that the service can determine the identity of connecting users and services to an appropriate level for the data or function being accessed

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-11-external-interface-protection

Implementation approach

Helping to protect the confidentiality, integrity and availability of our customers' systems and data is of the utmost importance to AWS, as is maintaining customer trust and confidence. The AWS network has been architected to permit you to select the level of security and resiliency appropriate for your workload. To enable you to build geographically dispersed, fault-tolerant web architectures with cloud resources, AWS has implemented a world-class network infrastructure that is carefully monitored and managed.

Secure Network Architecture: Network devices, including firewall and other boundary devices, are in place to monitor and control communications at the external boundary of the network and at key internal boundaries within the network. These boundary devices employ rule sets, access control lists (ACLs) and configurations to enforce the flow of information to specific information system services. ACLs, or traffic flow policies, are established on each managed interface, and manage and enforce the flow of traffic. ACL policies are approved by Amazon Information Security. These policies are automatically pushed using AWS's ACL-Manage tool to help ensure these managed interfaces enforce the most up-to-date ACLs.

Secure Access Points: AWS has strategically placed a limited number of access points to the cloud to allow for more comprehensive monitoring of inbound and outbound communications and network traffic. These customer access points are called API endpoints, and they allow secure HTTP access (HTTPS), which allows you to establish a secure communication session with your storage or compute instances within AWS. In addition, AWS has implemented network devices that are dedicated to managing interfacing communications with Internet service providers (ISPs). AWS employs a redundant connection to more than one communication service at each Internet-facing edge of the AWS network.
These connections each have dedicated network devices Transmission Protection You can connect to an AWS access point via HTTP or HTTPS using Secure Socket s Layer (SSL) a cryptographic protocol that is designed to protect against eavesdropping tampering and message forgery For customers who require additional layers of network security AWS offers the Amazon Virtual Private Cloud (VPC) which provides a private subnet within the AWS cloud and the ability to use an IPsec Virtual Private Network (VPN) device to provide an encrypted tunnel between the Amazon VPC and your data center Network Monitoring and Protection AWS utilizes a wide variety of automate d monitoring systems to provide a high level of service performance and availability AWS monitoring tools are designed to detect unusual or unauthorized activities and conditions at ingress and egress communication points These tools monitor server and n etwork usage port scanning activities application usage and unauthorized intrusion attempts The tools have the ability to set custom performance metrics thresholds for unusual activity Systems within AWS are extensively instrumented to monitor key operational metrics Alarms are configured to automatically notify operations and management personnel when early warning thresholds are crossed on key operational metrics An oncall schedule is used so personnel are always available to respond to operationa l issues This includes a pager system so alarms are quickly and reliably communicated to operations personnel Documentation is maintained to aid and inform operations personnel in handling incidents or issues If the resolution of an issue requires collaboration a conferencing system is used which supports communication and logging capabilities Trained call leaders facilitate communication and progress during the handling of operational issues that require collaboration Post mortems are convened after any significant operational issue regardless of external impact and Cause of Error (COE) documents are drafted so the root cause is captured and preventative actions are taken in the future Implementation of the preventative measures is tracked during weekly operations meetings AWS security monitoring tools help identify several types of denial of service (DoS) attacks including distributed flooding and software/logic attacks When DoS attacks are identified the AWS incident response process is initiated In addition to the DoS prevention tools redundant Page 39 of 47 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 41 of 47 telecommunication providers at each region as well as additional capacity protect against the possibility of DoS attacks Assurance approach The external interface protection principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 AICPA SOC 1 SOC 2 SOC 3 and PCIDSS certification programs These certifications are recognised by ENISA under the Cloud Certification Schemes The controls in relation to external interface protection are validated independently at least annually under the certification programs Based on the alternatives provided for selection within Cloud Security Principles guidance AWS uses Service Provide r Assertion in respect of region specific requirements Principle 12: Secure service administration Implementation 
approach User Access Procedures exist so that Amazon employee and contractor user accounts are added modified or disabled in a timely manner and are reviewed on a periodic basis In addition password complexity settings for user authentication to AWS systems are managed in compliance with Amazon’s Corporate Password Policy Account Provisioning The responsibility for provisioning employee and contractor access is shared across Human Resources (HR) Corporate Operations and Service Owners A standard employee or contractor account with minimum privileges is provisioned in a disabled state when a hiring manager submits his or her new employe e or contractor onboarding request in Amazon’s HR system The account is automatically enabled when the employee’s record is activated in Secure service administration The methods used by the service provider’s administrators to manage the operational service should be designed to mitigate any risk of exploitation that could undermine the security of the service The security of a cloud service is closely tied to the security of the service provider’s administration systems Access to service administra tion systems gives an attacker high levels of privilege and the ability to affect the security of the service Therefore the design implementation and management of administration systems should reflect their higher value to an attacker Implementation objectives Consumers have sufficient confidence that the technical approach the service provider uses to manage the service does not put their data or service at risk https://wwwgovuk/government/publications/impleme nting thecloud security principles/implementing the cloudsecurity principles#principle12secure service administration This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 42 of 47 Amazon’s HR system First time passwords are set to a unique value and are required to be changed on first use Access to other resources including Services Host Network devices and Windows and UNIX groups is explicitly approved in Amazon’s proprietary permission management system by the appropriate owner or manager Requests for changes in access are captured in the Amazon permissions management tool audit log When changes in an employee’s job function occur continued access must be explicitly approved to the resource or it will be automatically revoked Periodic Account Review Accounts are reviewed every 90 days; explicit reapproval is required or access to the resource is automatically revoked Access Removal Access is automatically revoked when an employee’s record is terminated in Amazon’s HR system Windows and UNIX accounts are disabled and Amazon’s permission management system removes the user from all systems Password Policy Access and administration of logical security for Amazon relies on user IDs passwords and Kerberos to authenticate users to services resources and devices as well as to authorize the appropriate level of access for the user AWS Security has established a password policy with required configurations and expiration intervals Administrators with a business need to access the management plane are required to use multifactor authentication to gain access to purpose built administration hosts These administrative hosts are systems that are specifically designed built configured and hardened to protect the management 
plane of the cloud. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked.

Assurance approach

The secure service administration principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3 and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to secure service administration are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.

Audit information provision to consumers

Consumers should be provided with the audit records they need to monitor access to their service and the data held within it. The type of audit information available to consumers will have a direct impact on their ability to detect and respond to inappropriate or malicious usage of their service or data within reasonable timescales.

Implementation objectives

Consumers are:
• Aware of the audit information that will be provided to them, how and when it will be made available to them, the format of the data, and the retention period associated with it
• Confident that the audit information available will allow them to meet their needs for investigating misuse or incidents

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-13-audit-information-provision-to-consumers

Principle 13: Audit information provision to consumers

Implementation approach

AWS CloudTrail is a service that provides audit records for AWS customers and delivers audit information in the form of log files to a specified storage bucket. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail provides a history of AWS API calls for customer accounts, including API calls made via the AWS Management Console, AWS SDKs, command-line tools, and higher-level AWS services (such as AWS CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. The log-file objects written to S3 are granted full control to the bucket owner; the bucket owner thus has full control over whether to share the logs with anyone else. This feature gives AWS customers confidence that they can meet their needs for investigating service misuse or incidents. More details on AWS CloudTrail and further information on audit records can be requested at http://aws.amazon.com/cloudtrail. The latest version of the CloudTrail User Guide is available at http://awsdocs.s3.amazonaws.com/awscloudtrail/latest/awscloudtrail-ug.pdf
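As a concrete illustration of the audit records described above, the following is a minimal sketch using the AWS SDK for Python (boto3) that enables a trail and looks up recent API activity. The trail name, bucket name and the RunInstances lookup are illustrative assumptions, and the S3 bucket is assumed to already exist with a bucket policy permitting CloudTrail delivery.

"""Minimal sketch: turn on CloudTrail audit records and query recent API
activity. The trail and bucket names are hypothetical."""
import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a trail that delivers log files for all regions to an S3 bucket
# controlled by the account owner, then start recording.
cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="example-cloudtrail-logs",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-audit-trail")

# Look up recent management events for a single API call, for example console
# or programmatic RunInstances requests, for investigation purposes.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
    MaxResults=10,
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "-"), event["EventName"])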
Assurance approach

The audit information provision to consumers principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3 and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to audit information provision to consumers are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.

Principle 14: Secure use of the service by the consumer

Consumers have certain responsibilities when using a cloud service in order for their use of it to remain secure and for their data to be adequately protected. The security of cloud services and the data held within them can be undermined by poor use of the service by consumers. The extent of the responsibility on the consumer for secure use of the service will vary depending on the deployment models of the cloud service, specific features of an individual service, and the scenario in which the consumers intend to use the service. IaaS and PaaS offerings are likely to require the consumer to be responsible for significant aspects of the security of their service.

Implementation objectives
• The consumer understands any service configuration options available to them and the security implications of choices they make
• The consumer understands the security requirements on their processes, uses and infrastructure related to the use of the service
• The consumer can educate those administrating and using the service in how to use it safely and securely

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-14-secure-use-of-the-service-by-the-consumer

Implementation approach

AWS has implemented various methods of external communication to support you and the wider customer base and community. AWS has published a public Acceptable Use Policy that provides guidance and informs consumers on acceptable use of AWS services. This policy includes guidance on illegal, harmful or offensive content, security violations, network abuse, and email or message abuse, with information on monitoring and enforcement of the policy. Additionally, guidance is provided on reporting violations of the Acceptable Use Policy.

Mechanisms are in place to allow the customer support team to be notified of operational issues that impact the customer experience. A "Service Health Dashboard" is available and maintained by the customer support team to alert customers to any issues that may be of broad impact. The AWS Security Center is available to provide you with security and compliance details about AWS. Customers can also subscribe to AWS Support offerings that include direct communication with the customer support team and proactive alerts to any customer-impacting issues.

Using the Trusted Advisor Tool

Some AWS Support plans include access to the Trusted Advisor tool, which offers a one-view snapshot of your service and helps identify common security misconfigurations, suggestions for improving system performance, and underutilized resources. Trusted Advisor checks for compliance with the following security recommendations:
• Limited access to common administrative ports, to only a small subset of addresses. This includes ports 22 (SSH), 23 (Telnet), 3389 (RDP), and 5500 (VNC)
• Limited access to common database ports. This includes ports 1433 (MSSQL Server), 1434 (MSSQL Monitor), 3306 (MySQL), 1521 (Oracle), and 5432 (PostgreSQL)
• IAM is configured to help ensure secure access control of AWS resources
• Multi-factor authentication (MFA) token is enabled to provide two-factor authentication for the root AWS account
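In the spirit of the first two Trusted Advisor recommendations above, a customer can run their own spot check. Below is a minimal sketch using the AWS SDK for Python (boto3) that flags security groups exposing common administrative ports to the whole internet; the port list mirrors the recommendation, and the check ignores result pagination for brevity.

"""Minimal sketch: flag security groups that leave common administrative ports
open to 0.0.0.0/0."""
import boto3

ADMIN_PORTS = {22, 23, 3389, 5500}  # SSH, Telnet, RDP, VNC

ec2 = boto3.client("ec2")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        from_port = rule.get("FromPort")
        to_port = rule.get("ToPort")
        if from_port is None:
            # Rules without a port range (e.g. "all traffic") expose everything.
            exposed = ADMIN_PORTS
        else:
            exposed = {p for p in ADMIN_PORTS if from_port <= p <= to_port}
        if not exposed:
            continue
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"{group['GroupId']} exposes ports {sorted(exposed)} to 0.0.0.0/0")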
Assurance approach

The secure use of the service by the consumer principle and related processes are not validated independently within AWS compliance programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, the controls in relation to secure use of the service by the consumer do not exist within the existing certification programs for them to be validated independently. AWS regularly publishes guidance on configuration options and their relative impacts on security through various communication channels, such as local summit sessions, webinars, blogs, and training and guidance documents. AWS uses Service Provider Assertion in respect of region-specific requirements.

Conclusion

The AWS cloud platform provides a number of important benefits to UK public sector organisations and enables you to meet the objectives of the fourteen Cloud Security Principles. While AWS delivers these benefits and advantages through our services and features, the individual public sector organisations are ultimately responsible for risk management decisions relating to the use of secure cloud services for OFFICIAL information. Using the information presented in this whitepaper, we encourage you to use AWS services for your organisations and to manage security and the related risks appropriately.

For AWS, security is always our top priority. We deliver services to hundreds of thousands of businesses, including enterprises, educational institutions and government agencies, in over 190 countries. Our customers include government agencies, financial services and healthcare providers who leverage the benefits of AWS while retaining control and responsibility for their data, including some of their most sensitive information. AWS services are designed to give customers flexibility over how they configure and deploy their solutions, as well as control over their content, including where it is stored, how it is stored, who has access to it, and the security configuration of the environment. AWS customers can build their own secure applications and store content securely on AWS.

Additional Resources

To help customers further understand how they can address their privacy and data protection requirements, customers are encouraged to read the risk, compliance and security whitepapers, best practices, checklists and guidance published on the AWS website. This material can be found at:
• AWS Compliance: http://aws.amazon.com/compliance
• AWS Security Center: http://aws.amazon.com/security

AWS also offers training to help customers learn how to design, develop and operate available, efficient and secure applications on the AWS cloud and gain proficiency with AWS services and solutions.
We offer free instructional videos, self-paced labs and instructor-led classes. Further information on AWS training is available at http://aws.amazon.com/training/. AWS certifications certify the technical skills and knowledge associated with best practices for building secure and reliable cloud-based applications using AWS technology. Further information on AWS certifications is available at http://aws.amazon.com/certification/. If further information is required, please contact AWS at https://aws.amazon.com/contact-us/ or contact your local AWS account representative.

Appendix – AWS Platform Benefits

When designing and implementing large cloud-based applications, it's important to consider how infrastructure will be managed to ensure the cost and complexity of running such systems is minimized. When organisations first begin using the AWS platform, it is easy to manage EC2 instances just like regular virtualised servers running in a data center. However, as the architecture evolves and changes are made over time, the instances will inevitably begin to diverge from their original specification, which can lead to inconsistencies with other instances in the same environment. This divergence from a known baseline can become a huge challenge when managing large fleets of instances across multiple environments. Ultimately, it will lead to service issues because these environments will become less predictable and more difficult to maintain.

The AWS platform provides a rich and diverse set of tools to address this challenge with a different approach. By using the AWS platform and features, public sector organisations can specify and manage the desired end state of the infrastructure independently of the instances and other running components. When technology teams start to think of infrastructure as being defined independently of the running instances and other components in the environments, they can take greater advantage of the benefits of dynamic cloud environments:

Software-defined infrastructure – By defining infrastructure using a set of software artifacts, many of the tools and techniques that are used when developing software components can be leveraged. This includes managing the evolution of infrastructure in a version control system, as well as using continuous integration (CI) processes to continually test and validate infrastructure changes before deploying them to production.

Auto Scaling and self-healing – If new instances are provisioned automatically from a consistent specification, Auto Scaling groups can be used to manage the number of instances in an EC2 fleet. For example, a condition to add new EC2 instances in increments can be set on the Auto Scaling group when the average utilization of the EC2 fleet is high. Auto Scaling can also be used to detect impaired EC2 instances and unhealthy applications, and replace the instances without intervention (see the sketch at the end of this appendix).

Fast environment provisioning – Consistent environments can be provisioned quickly and easily, which opens up new ways of working within teams. For example, a new environment can be provisioned to allow testers to validate a new version of an application in their own personal test environments that are isolated from other changes.
Reduce costs – Now that environments can be provisioned quickly, the option is always there to remove them when they are no longer needed. This reduces costs because customers are charged only for the resources that are used.

Blue-green deployments – Application teams can deploy new versions of an application by provisioning new instances (containing the new version of the code) beside the existing infrastructure. Traffic can then be switched between environments in an approach known as blue-green deployment. This has many benefits over traditional deployment strategies, including the ability to quickly and easily roll back a deployment in the event of an issue.

In addition to the implementation and assurance approaches detailed in this whitepaper for each Cloud Security Principle, public sector organisations adopting cloud technologies should take the additional benefits of the AWS platform into consideration within their risk assessment and management frameworks. Whilst a secure and compliant public cloud environment is necessary for handling government OFFICIAL information, the AWS platform and security features that scale and enable resilience to change are equally important to consider.
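To make the software-defined infrastructure idea above a little more concrete, the following sketch shows one way an isolated test environment could be captured as a template, provisioned on demand, and removed when it is no longer needed. It is written in Python with the AWS SDK (boto3); the stack name, Region, template contents, and CIDR block are illustrative assumptions rather than anything prescribed by this paper, and in practice such steps would typically run from a CI pipeline against a version-controlled template.

```python
"""Minimal sketch of infrastructure defined as a versionable software artifact.
Assumptions (not from the paper): stack name, Region, and template contents."""
import boto3

# A deliberately tiny CloudFormation template; a real environment definition
# would normally live in version control alongside application code.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Illustrative test environment defined as code
Resources:
  TestVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
"""

cloudformation = boto3.client("cloudformation", region_name="eu-west-2")


def provision_environment(stack_name: str) -> None:
    """Create the environment from the template and wait until it is ready."""
    cloudformation.create_stack(StackName=stack_name, TemplateBody=TEMPLATE)
    cloudformation.get_waiter("stack_create_complete").wait(StackName=stack_name)


def tear_down_environment(stack_name: str) -> None:
    """Delete the environment so charges stop accruing for its resources."""
    cloudformation.delete_stack(StackName=stack_name)
    cloudformation.get_waiter("stack_delete_complete").wait(StackName=stack_name)


if __name__ == "__main__":
    provision_environment("tester-sandbox")   # fast, consistent provisioning
    tear_down_environment("tester-sandbox")   # remove it when testing is done
```

Because the same template is used every time, each environment starts from the known baseline described above rather than drifting from it, and the change history of the infrastructure lives in the same version control system as the application it supports.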
|
General
|
consultant
|
Best Practices
|
Using_AWS_in_the_Context_of_New_Zealand_Privacy_Considerations
|
Using AWS in the Context of New Zealand Privacy Considerations First published Septembe r 2014 Updated August 17 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppl iers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Considerations relevant to privacy and data protection 2 AWS shared responsibility approach to managing cloud security 3 How is customer content secured? 3 What does the shared responsibility model mean for the security of c ustomer content? 4 Understanding security OF the cloud 4 Understanding security IN the cloud 5 AWS Regions: Where will content be stored? 7 How can customers select their Region(s)? 8 Transfer of personal information cross border 9 Who can access customer content? 10 Customer control over content 10 AWS access to customer content 10 Government rights of access 10 Privacy and data protection in New Zealand: The Privacy Act 11 Privacy breaches 19 Considerations 20 Further reading 21 AWS Artifact 22 Document revisions 22 Abstract This document provides information to assist customers who want to use Amazon Web Services (AWS) to store or process content containing personal information in the context of key privacy considerations and the New Zealand Privacy Act 2020 (NZ) I t helps customers understand: • The way AWS services operate including how customers can address security and encrypt their content • The geographic locations where customers can choose to store content and other relevant considerations • The respective roles the customer and AWS each play in managing and securing content stored on AWS servicesAmazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 1 Introduction This whitepaper focuses on typical questions asked by AWS customers when they are considering the implications o f the New Zealand Privacy Act on their use of AWS services to store or process content containing personal information There will also be other relevant considerations for each customer to address For example a customer may need to comply with industry specific requirements and the laws of other jurisdictions where that customer conducts business or contractual commitments a customer makes to a third party This paper is provided solely for informational purposes It is not legal advice and should not be relied on as legal advice As each customer’s requirements will differ AWS strongly encourages its customers to obtain appropriate advice on their implementation of privacy and data protection requirements and on applicable laws and other requirement s relevant to their business When we refer to content in this paper we mean software (including virtual machine images) data text audio video images and other content that a customer or any end user stores or processes using AWS services For exam ple a customer’s content includes objects that the customer stores using 
Amazon Simple Storage Service (Amazon S3) files stored on an Amazon Elastic Block Store (Amazon EBS) volume or the contents of an Amazon DynamoDB database table Such content may but will not necessarily include personal information relating to that customer its end users or third parties The terms of the AWS Customer Agreement or any other relevant agreement with us governing the use of AWS services apply to customer content Customer content does not include information that a customer provides to us in connection with the creation or administration of its AWS accounts such as a customer’s names phone numbers email addre sses and billing information —we refer to this as account information and it is governed by the AWS Privacy Notice Our business changes constantly and our Privacy Notice may also change We recommend checkin g our website frequently to see recent changes Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 2 Considerations relevant to privacy and data protection Storage of content presents all organizations with a number of common practical matters to consider including: • Will the content be secure? • Where will c ontent be stored? • Who will have access to content? • What laws and regulations apply to the content and what is needed to comply with these? These considerations are not new and are not cloud specific They are relevant to internally hosted and operated syst ems as well as traditional third party hosted services Each may involve storage of content on third party equipment or on third party premises with that content managed accessed or used by third party personnel When using AWS services each AWS custome r maintains ownership and control of their content including control over: • What content they choose to store or process using AWS services • Which AWS services they use with their content • The AWS Region or Regions where their content is stored • The format structure and security of their content including whether it is masked anonymized or encrypted • Who has access to their AWS accounts and content and how those access rights are granted managed and revoked Because AWS customers retain ownership and control over their content within the AWS environment they also retain responsibilities relating to the security of that content as part of the AWS Shared Responsibility Model This shared responsibility model is fundamental to understanding the respective roles of the customer and AWS in the context of privacy and data protection requirements that may apply to content that customers choose to store or process using AWS services Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 3 AWS shared responsibility approach to managing cloud security How is customer content secured ? 
Moving IT infrastructure to AWS creates a shared responsibility model between the customer and AWS, as both the customer and AWS have important roles in the operation and management of security. AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the AWS services operate. The customer is responsible for management of the guest operating system (including updates and security patches to the guest operating system) and associated application software, as well as the configuration of the AWS-provided security group firewall and other security-related features.

The customer will generally connect to the AWS environment through services the customer acquires from third parties (for example, internet service providers). AWS does not provide these connections, and they are therefore part of the customer's area of responsibility. Customers should consider the security of these connections and the security responsibilities of such third parties in relation to their systems. The respective roles of the customer and AWS in the shared responsibility model are shown in Figure 1.

Figure 1 – AWS Shared Responsibility Model

What does the shared responsibility model mean for the security of customer content?
When evaluating the security of a cloud solution, it is important for customers to understand and distinguish between:
• Security measures that the cloud service provider (AWS) implements and operates – security of the cloud
• Security measures that the customer implements and operates related to the security of customer content and applications that make use of AWS services – security in the cloud

While AWS manages security of the cloud, security in the cloud is the responsibility of the customer, as customers retain control of what security they choose to implement to protect their own content, applications, systems, and networks – no differently than they would for applications in an on-site data center.

Understanding security OF the cloud
AWS is responsible for managing the security of the underlying cloud environment. The AWS Cloud infrastructure has been architected to be one of the most flexible and secure cloud computing environments available, designed to provide optimum availability while providing complete customer segregation. It provides extremely scalable, highly reliable services that enable customers to deploy applications and content quickly and securely, at massive global scale if necessary. AWS services are content agnostic in that they offer the same high level of security to all customers, regardless of the type of content being stored or the geographical Region in which they store their content. AWS's world-class, highly secure data centers utilize state-of-the-art electronic surveillance and multi-factor access control systems. Data centers are staffed 24x7 by trained security guards, and access is authorized strictly on a least-privileged basis. For a complete list of all the security measures built into the core AWS Cloud infrastructure and services, see Best Practices for Security, Identity, & Compliance. We are vigilant about our customers' security and have implemented sophisticated technical and physical measures against unauthorized access. Customers can validate the security controls in place within the AWS environment through
AWS certifications and reports including the AWS System & Organization Control (SOC) 1 21 and 32 reports I SO 270013 270174 270185 and 900 16 certifications and PCI DSS7 Attestation of Compliance Our ISO 27018 certification demonstrates that AWS has a system of controls in place that specifically address the privacy protection of customer content Thes e reports and certifications are produced by independent third party auditors and attest to the design and operating effectiveness of AWS security controls AWS compliance certifications and reports can be requested on the AWS Compliance Contact Us page For m ore information on AWS compliance certifications reports and alignment with best practices and standards see AWS Compliance Understanding security IN the cloud Customers retain ownership and control of their content when using AWS services Customers rather than AWS determine what content they store or process u sing AWS services Because it is the customer who decides what content to store or process using AWS services only the customer can determine what level of security is appropriate for the content they store and process using AWS Customers also have compl ete control over which services they use and whom they empower to access their content and services including what credentials will be required Customers control how they configure their environments and secure their content including whether they encry pt their content (at rest and in transit) and what other security features and tools they use and how they use them AWS does not change Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 6 customer configuration settings as these settings are determined and controlled by the customer AWS customers have t he complete freedom to design their security architecture to meet their compliance needs This is a key difference from traditional hosting solutions where the provider decides on the architecture AWS enables and empowers the customer to decide when and h ow security measures will be implemented in the cloud in accordance with each customer's business needs For example if a higher availability architecture is required to protect customer content the customer may add redundant systems backups locations network uplinks etc to create a more resilient high availability architecture If restricted access to customer content is required AWS enables the customer to implement access rights management controls both on a systems level and through e ncryption on a data level To assist customers in designing implementing and operating their own secure AWS environment AWS provides a wide selection of security tools and features customers can use Customers can also use their own security tools and c ontrols including a wide variety of thirdparty security solutions Customers can configure their AWS services to leverage a range of such security features tools and controls to protect their content including sophisticated identity and access management tools security capabilities encryption and network security Examples of steps customers can take to help secure their content include implementing: • Strong password policies assigning appropriate permissions to users and taking robust steps to protect their access keys • Appropriate firewalls and network segmentation encrypting content and properly architecting systems to decrease the risk of data loss and unauthorized access Because customers rather than AWS control these important fact ors customers retain responsibility 
for their choices and for security of the content they store or process using AWS services or that they connect to their AWS infrastructure such as the guest operating system applications on their compute instances and content stored and processed in AWS storage databases or other services AWS provides an advanced set of access encryption and logging features to help customers manage their content effectively including AWS Key Management Service (AWS KMS) and AWS CloudTrail To assist customers in integrating AWS security controls into their existing control frameworks and help customers design and run security assessments of their organization’s use of AWS services AWS publishes a number of whitepapers relating to security governance risk and compliance; and a number of checklists and best practices Customers are also free to design and conduct Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 7 security assessments according to their own pre ferences and can request permission to conduct scans of their cloud infrastructure as long as those scans are limited to the customer’s compute instances and do not violate the AWS Acceptable Use Policy AWS Regi ons: Where will content be stored? AWS data centers are built in clusters in various global Regions We refer to each of our data center clusters in a given country as an AWS Region Customers have access to a number of AWS Regions around the world8 including an Asia Pacific (Sydney) Region Customers can choose to use one Region all Regions or any combination of AWS Regions Figure 2 shows AWS Region locations as of April 20219 Figure 2 – AWS global Regions AWS cu stomers choose the AWS Region or Regions in which their content and servers will be located This allows customers with geographic specific requirements to establish environments in a location or locatio ns of their choice For example AWS customers in New Zealand can choose to deploy their AWS services exclusively in one AWS Region such as the Asia Pacific (Sydney) Region and store their content onshore in Australia if this is their preferred location If the customer makes this choice AWS will not move their content from Australia without the customer’s consent except as legally required Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 8 Customers always retain control of which AWS Regions are used to store and process content AWS only stores and pr ocesses each customer ’s content in the AWS Region(s) and using the services chosen by the customer and otherwise will not move customer content without the customer’s consent except as legally required How can customers select their Region(s)? 
When using the AWS Management Console, or in placing a request through an AWS Application Programming Interface (API), the customer identifies the particular AWS Region(s) where they want to use AWS services. Figure 3 provides an example of the AWS Region selection menu presented to customers when uploading content to an AWS storage service or provisioning compute resources using the AWS Management Console.

Figure 3 – Selecting AWS Global Regions in the AWS Management Console

Customers can prescribe the AWS Region to be used for their AWS resources. Amazon Virtual Private Cloud (VPC) lets the customer provision a private, isolated section of the AWS Cloud where the customer can launch AWS resources in a virtual network that the customer defines. With Amazon VPC, customers can define a virtual network topology that closely resembles a traditional network that might operate in their own data center. Any resources launched by the customer into the VPC will be located in the AWS Region designated by the customer. For example, by creating a VPC in the Asia Pacific (Sydney) Region, all resources launched into that VPC would only reside in the Asia Pacific (Sydney) Region. This option can also be leveraged for other AWS Regions.

Transfer of personal information cross border
In 2016 the European Commission approved and adopted the new General Data Protection Regulation (GDPR). The GDPR replaced the EU Data Protection Directive, as well as all local laws relating to it. All AWS services comply with the GDPR. AWS provides customers with services and resources to help them comply with GDPR requirements that may apply to their operations. These include adherence to the CISPE code of conduct, granular data access controls, monitoring and logging tools, encryption, key management, audit capability, adherence to IT security standards, and Cloud Computing Compliance Controls Catalogue (C5) attestations. For additional information, visit the AWS General Data Protection Regulation (GDPR) Center and see the Navigating GDPR Compliance on AWS whitepaper.

When using AWS services, customers may choose to transfer content containing personal information cross-border, and they will need to consider the legal requirements that apply to such transfers. AWS provides a Data Processing Addendum that includes the Standard Contractual Clauses 2010/87/EU (often referred to as Model Clauses) to AWS customers transferring content containing personal data (as defined in the GDPR) from the EU to a country outside of the European Economic Area (EEA). With our EU Data Processing Addendum and Model Clauses, AWS customers who want to transfer personal data—whether established in Europe or a global company operating in the European Economic Area—can do so with the knowledge that their personal data on AWS will be given the same high level of protection it receives in the EEA. The AWS Data Processing Addendum is incorporated in the AWS Service Terms and applies automatically to the extent the GDPR applies to the customer's processing of personal data on AWS.

Who can access customer content?
Customer control over content Customers using AWS maintain and do not release effective control over their content within the AWS environment Customers can perform the following: • Determine where their content will be located for example the type of storage they use on AWS and the geographic location (by AWS Region) of that storage • Control the format structure and security of their content including whether it is masked anonymized or encrypted AWS offers customers options to implement strong encryption for their customer content in transit or at rest; and also provides customers with the option to manage their own encryption keys or use third party encryption mechanisms of their choice • Manage other access controls such as identity access management permissions and security credentials This enables AWS customers to control the entire lifecycle of their content on AWS and manage their content in accordance with their own specific needs including content classification access control retention and disposal AWS access to customer content AWS makes available to each customer the compute storage database networking or other services as described on our website Customers have a number of options to encrypt their content when using the services including using AWS encryption features such as AWS KMS managing their own encryption keys or using a third party encryption mechanism of their own choice AWS does not access or use customer content without the customer’s consent except as legally req uired AWS never uses customer content or derives information from it for other purposes such as marketing or advertising Government rights of access Queries are often raised about the rights of domestic and foreign government agencies to access content h eld in cloud services Customers are often confused about issues of data sovereignty including whether and in what circumstances governments may have access to their content The local laws that apply in the jurisdiction where the content is located are a n important consideration for some customers However customers also Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 11 need to consider whether laws in other jurisdictions may apply to them Customers should seek advice to understand the application of relevant laws to their business and operations AWS policy on granting government access AWS is vigilant about customers' security and does not disclose or move data in response to a request from the US or other government unless legally required to do so in order to comply with a legally valid and bindin g order such as a subpoena or a court order or as is otherwise required by applicable law Nongovernmental or regulatory bodies typically must use recognized international processes such as Mutual Legal Assistance Treaties with the US government to obtain valid and binding orders Additionally our practice is to notify customers where practicable before disclosing their content so they can seek protection from disclosure unless we are legally prohibited from doing so or there is clear indication o f illegal conduct in connection with the use of AWS services For additional information see the Law enforcement Information Requests page Privacy and data protection in New Zealand: The Privacy Act This section discusses aspects of the New Zealand Privacy Act 2020 (NZ) (Privacy Act) effective from December 1 2020 The main requirements in the Privacy Act for handling personal information are set out in the Information Privacy 
Principles (IPPs) The IPPs impose requirements for collecting managing using disclosing and otherwise handling personal information collected from individuals in New Zealand The New Zealand Privacy Commissioner may also issue code s of practice which apply prescribe or modify the application of IPPs in relation to an activity industry or profession (or classes of them) The Privacy Act recognizes a distinction between “principals ” and “agents ” Where an entity (the agent ) holds personal information for the sole purpose of storing or processing personal information on behalf of another entity (the principal ) and does not use or disclose the personal information for its own purposes the information is deemed to be held by the principal In those circumstances primary responsibility for compliance with the IPPs will rest with the principal Amazon Web Services Using AWS in the Context of New Zealand P rivacy Considerations 12 AWS appreciates that its services are used in many different contexts for different business purposes and that there may be multiple parties i nvolved in the data lifecycle of personal information included in customer content stored or processed using AWS services For simplicity the guidance included in the table below assumes that in the context of the customer content stored or processed usi ng the AWS services the customer: • Collects personal information from its end users and determines the purpose for which the customer requires and will use the information • Has the capacity to control who can access update and use the personal information • Manages the relationship with the individual about whom the personal information relates including by communicating with the individual as required to comply with any re levant disclosure and consent requirements • Transfers the content into the AWS Region it selects AWS does not receive customer content in New Zealand Customers may in fact work with or rely on third parties to discharge these responsibilities but the cu stomer rather than AWS would manage its relationships with those third parties We summarize in the following table the IPP requirements that are particularly important for customers to consider if using AWS to store personal information collected from individuals in New Zealand We also discuss aspects of the AWS services relevant to these IPPs Table 1 — IPP requirements and considerations IPP Summary of IPP requirements Considerations IPP 1 – Purpose of collection of personal information Personal information may be collected only for lawful and necessary purposes Customer — The customer determines and controls when how and why it collects personal information from individuals and decides whether it will include that personal informatio n in IPP 2 – Source of personal information Persona l information may only be collected directly from the individual unless an exception applies Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 13 IPP 3 – Collection of Information Reasonable steps must be taken to ensure that when an individual’s personal information is collected they are aware of the purposes for which it is collected and certain other matters customer content it stores or processes using AWS services The customer may also need to ensure it discloses the purposes for which it collects personal information to the relevant individuals ; obtains the personal information from a permitted source ; and that it only uses the personal information for a permitted purpose As between the 
customer and AWS the customer has a relationship with the individuals whose personal information the custom er stores or processes on AWS and therefore the customer is able to communicate directly with them about collection of their personal information The customer rather than AWS will also know the scope of any notifications given to or consents obtained by the customer from such individuals relating to the collection of their personal information AWS — AWS does not know when a customer chooses to upload to AWS content that may contain personal information AWS also does not collect personal informatio n from individuals whose personal information is included in content a customer stores or processes using the AWS services and AWS has no IPP 4 – Manner of collection of personal information Personal information may only be collected fairly and in a lawful and non intrusive manner Amazon Web Services Using AWS in the Context of New Zealand Privacy C onsiderations 14 contact with those individuals Therefore AWS is not required and is unable in the circumstances to communicate with the relevant individuals AWS only accesses or uses customer content as necessary to provide the AWS services and does not access or use customer content for any other purpose without the customer’s consent IPP 5 – Storage and security of personal information Reasonable steps must be taken to protect the security of personal information Customer — Customers are responsible for security in the cloud including security of their content (and personal information included in their content) AWS — AWS is responsible for managing the security of the underlying cloud environment For a complete list of all the security measures built into the core AWS Cloud infrastructure and services see Best Practices for Security Identity & Compliance IPP 6 – Access to personal information Individuals are entitled to access personal information about them unless an exception applies Customer — Customers are responsible for their content in the cloud When a customer chooses to store or process content containing personal information using the AWS services the customer has control over the quality of that content and the customer retains access to and can correct it IPP 7 – Correction of personal information Individuals may request correction of personal information about them Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 15 In addition as between the customer and AWS the customer has a relationship with the individuals whose personal information is included in customer content stored or processed using the AWS services Therefore the customer rather than AWS is able to work with relevant individuals to provide them access to and the ability to correct their personal information AWS — AWS uses customer content to provide the AWS services selected by each customer to that customer and does not us e customer content for other purposes without the customer’s consent AWS has no contact with the individuals whose personal information is included in content a customer stores or processes using the AWS services Given this and the level of control cust omers enjoy over customer content AWS is not required and is unable in the circumstances to provide such individuals with access to or the ability to correct their personal information IPP 8 Accuracy to be checked before use or disclosure Reasonable steps must be taken to check accuracy completeness and relevance of personal information before it is used or 
disclosed Customer — When a customer chooses to store or process content containing personal information using the AWS services the customer has control over the quality of that content and the customer retains access to and can Amazon Web Services Using AWS in the Context of New Zealand Privacy Considera tions 16 correct it This means th at the customer must take all required steps to ensure that personal information included in customer content is accurate complete not misleading and kept up to date AWS — AWS does not collect personal information from individuals whose personal inform ation is included in content a customer stores or processes using the AWS services and AWS has no contact with those individuals Given this and the level of control customers enjoy over customer content AWS is not required and is unable in the circumstances to confirm the accuracy completeness and relevance of personal information before it is used or disclosed IPP 9 Personal information must not be kept longer than necessary Personal information should not be kept for longer than is required for the purposes for which the information may be lawfully used Customer — Because only the customer knows the purposes for collecting the personal information contained in the customer content it stores or processes using AWS services the custo mer is responsible for ensuring that such personal information is not kept for longer than required The customer should delete the personal information when it is no longer needed AWS — AWS services provide the customer with controls to enable the customer to delete content Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 17 stored on AWS as described in AWS documentation IPP 10 Limits on use of personal information Personal information may only be used or disclosed for the purpose for which it was collected for reasonable directly related purposes in a way which does not identify the individual or if another exception applies Customer — Given that the customer determines the purpose for collecting personal information and controls the use and disclosure of content that contains personal information the customer is responsible for ensuring how such personal information is used or disclosed The customer also controls the format structure and security of its content stored or processed using A WS services AWS — AWS uses customer content to provide the AWS services selected by each customer to that customer and does not use customer content for other purposes without the customer’s consent General — AWS services are structured such that custome rs maintain ownership and control of their content when using the AWS services regardless of which AWS Region they use IPP 11 Limits on disclosure of personal information IPP 12 – Disclosure of personal information outside New Zealand Personal information may only be disclosed outside of New Zealand if the recipient is subject to similar safeguards to those under the Privacy Act Customer — The customer can choose the AWS Region or Regions in which their content will be located and can choose to deploy their AWS services exclusively in a single AWS Region if preferred AWS services are structured so that a customer maintains effective control of customer content regardless of what AWS Region they Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 18 use for their content The customer shoul d consider whether it should disclose to individuals the locations in which it stores or 
processes their personal information and obtain any required consents relating to such locations from the relevant individuals if necessary As between the customer an d AWS the customer has a relationship with the individuals whose personal information is included in customer content stored or processed using the AWS services and therefore the customer is able to communicate directly with them about such matters AWS — AWS only stores and processes each customer’s content in the AWS Region(s) and using the services chosen by that customer and otherwise will not move customer content without that customer’s consent except as legally required If a customer chooses to store content in more than one AWS Region or copy or move content between AWS Regions that is solely the customer’s choice and the customer will continue to maintain effective control of its content wherever it is stored and processed General — It is important to highlight that an entity is only required to comply with IPP 12 when that entity discloses personal information to an overseas person or entity The Privacy Act states that where an agency (Entity A) Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 19 Privacy breaches Given that customers maintain control of their content when using AWS customers retain the responsibility to monitor their own environment for privacy breaches and to notify regulators and affected individuals as required under applicable law Only the customer is able to manage this responsibility holds information as an agent for anoth er agency (Entity B) for example for safe custody or processing then (i) the personal information is to be treated as being held by Entity B and not Entity A (ii) the transfer of the information to Entity A by Entity B is not a use or disclosure of th e information by Entity B and (iii) the transfer of the information and any information derived from the processing of that information to Entity B by Entity A is not a use or disclosure of the information by Entity A It also does not matter whether Entity A is outside New Zealand or holds the information outside New Zealand Using the AWS services to store or process personal information outside New Zealand at the choice of the customer may not be a disclosure of customer content Customers should seek legal advice regarding this if they feel it may be relevant to the way they propose to use the AWS services Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 20 A customer’s AWS access keys can be used as an example to help explain why the customer rather than AWS is best placed to manage this responsibility Customers control access keys and determine who is authorized to access their AWS account AWS does not have visibility of access keys or who is and who is not authorized to log into an account Therefore the customer is responsible for monitoring use misuse distribution or loss of access keys The Privacy Act introduced a notifiable privacy breach scheme that is effective from December 1 2020 The scheme aims to give affected individuals the opportunity to take steps to protect their personal information following a privacy breach that has caused or is likely to cause serious harm AWS offers two types of New Zealand Notifiable Data Breaches ( NZNDB ) Addend a to customers who are subject to the Privacy Act and are using AWS to store and process personal information covered by the scheme The NZNDB Addend a address customers’ need for notification if a security event 
affects their data The first ty pe the Account NZNDB Addendum applies only to the specific individual account that accepts the Account NZNDB Addendum The Account NZNDB Addendum must be separately accepted for each AWS account that a customer requires to be covered The second type th e Organizations NZNDB Addendum once accepted by a management account in AWS Organizations applies to the management account and all member accounts in that AWS Organization If a customer does not need or want to take advantage of the Organizations NZNDB Addendum they can still accept the Account NZNDB Addendum for individual accounts AWS has made both types of NZNDB Addendum available online as click through agreements in AWS Artifact (the customer facing audit and compliance portal that can be accessed from the AWS management console) In AWS Artifact customers can review and activate the relevant NZNDB Addendum for those AWS accounts they use to store and process personal information covered by t he scheme NZNDB Addend a frequently asked questions are available online at AWS Artifacts FAQs Considerations This whitepaper does not discuss other New Zealand privacy laws aside from the Privacy Act that may also be relevant to customers including state based laws and industry specific requirements The relevant privacy and data protection laws and regulations applicable to individual customers will depend on several factors including where a customer conducts business the industry in which it operates the type of Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 21 content they want to store where or from whom the content originates and where the content will be stored Customers concerned about their New Zealand privacy regulatory obl igations should first ensure they identify and understand the requirements applying to them and seek appropriate advice At AWS security is always our top priority We deliver services to millions of active customers including enterprises educational i nstitutions and government agencies in over 190 countries Our customers include financial services providers and healthcare providers and we are trusted with some of their most sensitive information AWS services are designed to give customers flexibilit y over how they configure and deploy their solutions as well as control over their content including where it is stored how it is stored and who has access to it AWS customers can build their own secure applications and store content securely on AWS Further reading To help customers further understand how they can address their privacy and data protection requirements customers are encouraged to read the risk compliance and security whitepapers best practices checklists and guidance published on t he AWS website This material can be found at AWS Compliance and AWS Cloud Security As of the date of publication specific whitepapers about privacy and da ta protection considerations are also available for the following countries or regions : • Australia • California • Germany • Hong Kong • Japan • Malaysia • Singapore • Philippines • Using AWS in the Context of Common Privacy & Data Protection Considera tions Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 22 AWS Artifact Customers can review and download reports and details about more than 2500 security controls by using AWS Artifact the automated compliance reporting portal available in the AWS Manageme nt Console The AWS Artifact portal provides on demand access to AWS 
security and compliance documents, including the NZNDB Addenda, and certifications from accreditation bodies across geographies and compliance verticals.

AWS also offers training to help customers learn how to design, develop, and operate available, efficient, and secure applications on the AWS Cloud and gain proficiency with AWS services and solutions. We offer free instructional videos, self-paced labs, and instructor-led classes. For more information on AWS training, see AWS Training and Certification. AWS certifications certify the technical skills and knowledge associated with the best practices for building secure and reliable cloud-based applications using AWS technology. For more information on AWS certifications, see AWS Certification. If you require further information, please contact AWS or contact your local AWS account representative.

Document revisions
August 17, 2021 – Updated for technical accuracy
November 2020 – Fifth publication
May 2018 – Fourth publication
December 2016 – Third publication
January 2016 – Second publication
September 2014 – First publication

Notes
1. https://aws.amazon.com/compliance/soc-faqs/
2. http://d0.awsstatic.com/whitepapers/compliance/soc3_amazon_web_services.pdf
3. http://aws.amazon.com/compliance/iso-27001-faqs/
4. http://aws.amazon.com/compliance/iso-27017-faqs/
5. http://aws.amazon.com/compliance/iso-27018-faqs/
6. https://aws.amazon.com/compliance/iso-9001-faqs/
7. https://aws.amazon.com/compliance/pci-dss-level-1-faqs/
8. AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. AWS China (Beijing) and AWS China (Ningxia) are also isolated AWS Regions. Customers who want to use the AWS China (Beijing) and AWS China (Ningxia) Regions are required to sign up for a separate set of account credentials unique to the China (Beijing) and China (Ningxia) Regions.
9. For a real-time location map, see https://aws.amazon.com/about-aws/global-infrastructure/
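As a practical footnote to the controls this paper keeps returning to, namely choosing an AWS Region, encrypting content at rest, and deleting content when it is no longer needed (IPP 9), the following sketch shows what those choices can look like in code. It uses Python and the AWS SDK (boto3); the bucket name, object key, and the assumption that the bucket already exists in the Asia Pacific (Sydney) Region are illustrative and not taken from this paper.

```python
"""Illustrative sketch only: Region pinning, encryption at rest, and deletion.
Assumes an existing S3 bucket in the chosen Region; names are hypothetical."""
import boto3

REGION = "ap-southeast-2"          # Asia Pacific (Sydney), as in the paper's examples
BUCKET = "example-nz-records"      # hypothetical bucket name
KEY = "customers/12345/profile.json"

s3 = boto3.client("s3", region_name=REGION)

# Store content in the chosen Region, encrypted at rest with AWS KMS
# (the AWS managed key for S3 is used when no customer key is specified).
s3.put_object(
    Bucket=BUCKET,
    Key=KEY,
    Body=b'{"name": "example"}',
    ServerSideEncryption="aws:kms",
)

# Confirm which Region the bucket, and therefore the object, lives in.
location = s3.get_bucket_location(Bucket=BUCKET)["LocationConstraint"]
print(f"{BUCKET} is pinned to {location}")

# Delete the object once it is no longer required for the collection purpose.
s3.delete_object(Bucket=BUCKET, Key=KEY)
```

Equivalent controls are available through the AWS Management Console, the AWS CLI, and the other AWS SDKs.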
|
General
|
consultant
|
Best Practices
|
Using_AWS_in_the_Context_of_PhilippinesPrivacy_Considerations
|
Using AWS in the Context of Philippines Privacy Considerations Published March 1 2018 Updated September 30 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates supp liers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Overview ii Scope ii Customer Content: Considerations relevant to privacy and data protection iii AWS shared responsibility approach to managing cloud security iv Understanding security OF the cloud v Understanding security IN the cloud vi AWS Regions: Where will content be stored? viii How can customers select their Region(s)? viii Transfer of personal data cross border x Who can access customer content? xi Customer control over content xi AWS access to customer content xi Government rights of access xi AWS policy on granting government access xii Privacy and Data Protection in the Philippines: Philippine Data Privacy Laws xiii Privacy br eaches xviii Other considerations xix Additional resources xix Further reading xix Document history xx About this Guide This document provides information to assist customers who want to use AWS to store or process content containing personal data in the context of the Philippine Data Privacy Act of 2012 and the Implementing Rules and Regulations (the “ Philippine Privacy L aws”) It will help customers understand: • How AWS services operate including how customers can address security and encrypt their content • The geographic locations where customers can choose to store content and other relevant considerations • The respective roles the customer and AWS each play in managing and securing content stored on AWS services Page ii Overview This document provides information to assist customers who want to use AWS to store or process content containing pers onal data in the context of the Philippine Data Privacy Act of 2012 and the Implementing Rules and Regulations (the “ Philippine Privacy Laws ”) It will help customers understand: • How AWS services operate including how customers can address security and encrypt their content • The geographic locations where customers can choose to store content and other relevant considerations • The respective roles the customer and AWS each play in managing and securing content stored on AWS services Scope This whitepaper fo cuses on typical questions asked by AWS customers when they are considering the implications of the Philippine Privacy Laws for their use of AWS services to store or process content containing personal data There will also be other relevant considerations for each customer to address for example a customer may need to comply with industry specific requirements the laws of other jurisdictions where that customer conducts business or contractual commitments a customer makes to a third party This paper i s provided solely for informational purposes It is not legal advice and should not be relied on as legal advice As each 
customer’s requirements will differ AWS strongly encourages its customers to obtain appropriate advice on their implementation of pr ivacy and data protection requirements and on applicable laws and other requirements relevant to their business When we refer to content in this paper we mean software (including virtual machine images) data text audio video images and other conten t that a customer or any end user stores or processes using AWS services For example a customer’s content includes objects that the customer stores using Amazon Simple Storage Service files stored on an Amazon Elastic Block Store volume or the conte nts of an Amazon DynamoDB database table Such content may but will not necessarily include personal data relating to that customer its end users or third parties Customers maintain ownership and control of their content and select which AWS services can process store and host their content AWS does not access or use customer content without Amazon Web Services Using AWS in the Context of Philippines Privacy Considerations iii customer consent except as necessary to comply with a law or binding order of a governmental body The terms of the AWS Customer Agreement or any other relevan t agreement with us governing the use of AWS services apply to customer content Customer content does not include data that a customer provides to us in connection with the creation or administration of its AWS accounts such as a customer’s names phone numbers email addresses and billing information —we refer to this as account information and it is governed by the AWS Privacy Notice Customer Content: Considerations relevant to privacy and data protection Storage of content presents all organizations with a number of common practical matters to consider including: • Will the content be secure? • Where will content be stored? • Who will have access to content? • What laws and regulations apply to the content and what is needed to comply with these? These considerations are not new and are not cloud specific They are relevant to internally hosted and operated systems as well as traditional thirdparty hosted services Each may involve storage of content on third party equipment or on third party premises with that content managed accessed or used by third party personnel When using AWS services each AWS customer maintains ownership and control of their content including control over: • What content they choose to store or process using AWS services • Which AWS services they use with their content • The Region(s) where their content is stored • The format structure and security of their content including whether it is masked anonymized or encrypted • Who has access to their AWS accounts and content and how those access rights are granted managed and revoked Because AWS customers retain ownership and control over their content within the AWS environment they also retain responsibilities relating to the security of that content as part of the AWS Shared Responsibility Model This model is fundamental to Amazon Web Services Using AWS in the Context of Philippines Privacy Considerations iv understanding the respective roles of the customer and AWS in the context of privacy and data protection requirements that may apply to content that customers choose to store or process using AWS services AWS shared responsibility approach to managing cloud security Will customer content be secure? 
Moving IT infrastructure to AWS creates a shared responsibility model between the customer and AWS as both the customer and AWS have important roles in the operation and management of security AWS operates manages and controls the components from the host operating system and virtualization lay er down to the physical security of the facilities in which the AWS services operate The customer is responsible for management of the guest operating system (including updates and security patches to the guest operating system) and associated application software as well as the configuration of the AWS provided security group firewall and other security related features The customer will generally connect to the AWS environment through services the customer acquires from third parties (for example inte rnet service providers) AWS does not provide these connections and they are therefore part of the customer's area of responsibility Customers should consider the security of these connections and the security responsibilities of such third parties in relation to their systems The respective roles of the customer and AWS in the shared responsibility model are shown the following figure: Amazon Web Services Using AWS in the Context of Philippines Privacy Considerations v Shared Responsibility Model What does the shared responsibility model mean for the security of customer content? When evaluating the security of a cloud solution it is important for customers to understand and distinguish between: • Security measures that the cloud service provider (AWS) implements and operates – “security of the cloud” • Security measures that the customer implements and operates related to the security of customer content and applications that make use of AWS services – “security in the cloud” While AWS manages security of the cloud security in the cloud is the responsibility of the customer as customers retain control of what security they choose to implement to protect their own content applications systems and networks – no differently than they would for applications in an on site data cent er Understanding security OF the cloud AWS is responsible for managing the security of the underlying cloud environment The AWS cloud infrastructure has been architected to be one of the most flexible and secure cloud computing environments available designed to provide optimum availability while providing complete customer segregation It provides extremely scalable highly reliable Amazon Web Services Using AWS in the Context of Philippines Privacy Considerations vi services that enable customers to deploy applications and content quickly and securely at massive global scale if necessary AWS services are content agnostic in that they offer the same high level of security to all customers regardless of the type of content being stored or the geographical region in which they store their content AWS’s world class highly secure data cent ers utili ze state ofthe art electronic su rveillance and multi factor access control systems Data centers are staffed 24x7 by trained security guards and access is authori zed strictly on a least privileged basis For a complete list of all the security measures built into the core AWS Cloud infr astructure and services see Best Practices for Security Identity & Compliance We are vigilant about our customers' security and have implemented sophisticated technical a nd physical measures against unauthori zed access Customers can validate the security controls in place within the AWS environment 
through AWS certifications and reports including the AWS System & Organization Control (SOC) 1 2 and 3 reports ISO 27001 27017 27018 and 9001 certifications and PCI DSS compliance reports Our ISO 27018 certification demonstrates that AWS has a system of controls in place that specifical ly address the privacy protection of customer content These reports and certifications are produced by independent third party auditors and attest to the design and operating effectiveness of AWS security controls AWS compliance certifications and report s can be requested at https://pagesawscloudcom/compliance contact ushtml More information on AWS compliance certifications reports and alignment with best practices and standard s can be found at AWS Compliance Understanding security IN the cloud Customers retain ownership and control of their content when using AWS services Customers rather than AWS determine what content the y store or process using AWS services Because it is the customer who decides what content to store or process using AWS services only the customer can determine what level of security is appropriate for the content they store and process using AWS Custo mers also have complete control over which services they use and whom they empower to access their content and services including what credentials will be required Customers control how they configure their environments and secure their content includin g whether they encrypt their content (at rest and in transit) and what other security features and tools they use and how they use them AWS does not change customer configuration settings as these settings are determined and controlled by the customer AWS customers have the complete freedom to design their security Amazon Web Services Using AWS in the Context of Philippines Privacy Considerations vii architecture to meet their compliance needs This is a key difference from traditional hosting solutions where the provider decides on the architecture AWS enables and empowers the customer to decide when and how security measures will be implemented in the cloud in accordance with each customer's business needs For example if a higher availability architecture is required to protect customer content the customer may add redundant systems backups locations network uplinks etc to create a more resilient high availability architecture If restricted access to customer content is required AWS enables the customer to implement access rights management controls both on a systems level an d through encryption on a data level To assist customers in designing implementing and operating their own secure AWS environment AWS provides a wide selection of security tools and features customers can use Customers can also use their own security t ools and controls including a wide variety of third party security solutions Customers can configure their AWS services to leverage a range of such security features tools and controls to protect their content including sophisticated identity and acce ss management tools security capabilities encryption and network security Examples of steps customers can take to help secure their content include implementing: • Strong password policies enabling Multi Factor Authentication (MFA) assigning appropriate permissions to users and taking robust steps to protect their access keys • Appropriate firewalls and network segmentation encrypting content and properly architecting systems to decrease the risk of data loss and unauthorized access Because customers r ather than AWS 
Because customers, rather than AWS, control these important factors, customers retain responsibility for their choices and for security of the content they store or process using AWS services, or that they connect to their AWS infrastructure, such as the guest operating system, applications on their compute instances, and content stored and processed in AWS storage, databases, or other services. AWS provides an advanced set of access, encryption, and logging features to help customers manage their content effectively, including AWS Key Management Service and AWS CloudTrail. To assist customers in integrating AWS security controls into their existing control frameworks, and to help customers design and execute security assessments of their organization's use of AWS services, AWS publishes a number of whitepapers relating to security, governance, risk, and compliance, and a number of checklists and best practices. Customers are also free to design and execute security assessments according to their own preferences, and can request permission to conduct scans of their cloud infrastructure, as long as those scans are limited to the customer's compute instances and do not violate the AWS Acceptable Use Policy.

AWS Regions: Where will content be stored?

AWS data centers are built in clusters in various global regions. We refer to each of our data center clusters in a given country as an AWS Region. Customers have access to a number of AWS Regions around the world. (AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. AWS China (Beijing) is also an isolated AWS Region; customers who wish to use the AWS China (Beijing) Region are required to sign up for a separate set of account credentials unique to the China (Beijing) Region.) Customers can choose to use one Region, all Regions, or any combination of AWS Regions. For a list of AWS Regions and a real-time location map, see Global Infrastructure. AWS customers choose the AWS Region or Regions in which their content and servers will be located. This allows customers with geographic-specific requirements to establish environments in a location or locations of their choice. For example, AWS customers in Singapore can choose to deploy their AWS services exclusively in one AWS Region, such as the Asia Pacific (Singapore) Region, and store their content onshore in Singapore if this is their preferred location. If the customer makes this choice, AWS will not move their content from Singapore without the customer's consent, except as legally required. Customers always retain control of which AWS Region(s) are used to store and process content. AWS only stores and processes each customer's content in the AWS Region(s) and using the services chosen by the customer, and otherwise will not move customer content without the customer's consent, except as legally required.

How can customers select their Region(s)?

When using the AWS Management Console, or in placing a request through an AWS Application Programming Interface (API), the customer identifies the particular AWS Region(s) where it wishes to use AWS services.
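To make this Region selection concrete, the sketch below shows how a customer using an AWS SDK (here, boto3 for Python) pins API requests to a specific Region; resources created through such a client are located in the named Region. This is only an illustration, and the bucket name is a hypothetical placeholder.

```python
# Minimal sketch: pinning SDK calls to a chosen AWS Region.
# Resources created through these clients reside in the named Region.
import boto3

# All requests made through this client go to the Asia Pacific (Singapore) Region.
ec2_singapore = boto3.client("ec2", region_name="ap-southeast-1")

# S3 buckets are likewise created in an explicitly chosen Region.
s3 = boto3.client("s3", region_name="ap-southeast-1")
s3.create_bucket(
    Bucket="example-onshore-content-bucket",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": "ap-southeast-1"},
)

# Confirm where the bucket's content is stored.
location = s3.get_bucket_location(Bucket="example-onshore-content-bucket")
print(location["LocationConstraint"])  # "ap-southeast-1"
```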
The following figure provides an example of the AWS Region selection menu presented to customers when uploading content to an AWS storage service or provisioning compute resources using the AWS Management Console.

Selecting AWS Global Regions in the AWS Management Console

Customers can also prescribe the AWS Region to be used for their AWS resources. Amazon Virtual Private Cloud (Amazon VPC) lets the customer provision a private, isolated section of the AWS Cloud where the customer can launch AWS resources in a virtual network that the customer defines. With Amazon VPC, customers can define a virtual network topology that closely resembles a traditional network that might operate in their own data center. Any compute and other resources launched by the customer into the VPC will be located in the AWS Region designated by the customer. For example, by creating a VPC in the Asia Pacific (Singapore) Region and providing a link (either a VPN or AWS Direct Connect) back to the customer's data center, all compute resources launched into that VPC would only reside in the Asia Pacific (Singapore) Region. This option can also be leveraged for other AWS Regions.

Transfer of personal data cross-border

In 2016, the European Commission approved and adopted the new General Data Protection Regulation (GDPR). The GDPR replaced the EU Data Protection Directive, as well as all local laws relating to it. All AWS services comply with the GDPR. AWS provides customers with services and resources to help them comply with GDPR requirements that may apply to their operations. These include AWS's adherence to the CISPE code of conduct, granular data access controls, monitoring and logging tools, encryption, key management, audit capability, adherence to IT security standards, and AWS's C5 attestations. For more information, see the AWS General Data Protection Regulation (GDPR) Center and the Navigating GDPR Compliance on AWS whitepaper. When using AWS services, customers may choose to transfer content containing personal data cross-border, and they will need to consider the legal requirements that apply to such transfers. AWS provides a Data Processing Addendum that includes the Standard Contractual Clauses 2010/87/EU (often referred to as "Model Clauses") to AWS customers transferring content containing personal data (as defined in the GDPR) from the EU to a country outside of the European Economic Area (EEA), such as the Philippines. With our EU Data Processing Addendum and Model Clauses, AWS customers wishing to transfer personal data, whether established in Europe or a global company operating in the European Economic Area, can do so with the knowledge that their personal data on AWS will be given the same high level of protection it receives in the EEA. The AWS Data Processing Addendum is incorporated in the AWS Service Terms and applies automatically to the extent the GDPR applies to the customer's processing of personal data on AWS.

Who can access customer content?
Customer control over content Customers using AWS maintain and do not release effective control over their content within the AWS environment They can: • Determine where their content will be located for example the type of storage they use on AWS and the geographic location (by AWS Region) of that storage • Control the format structure and security of their content including whether it is masked anonymized or encrypted AWS offers customers options to implement strong encryption for their customer content in transit or at rest and also provides customers with the option to manage their own encryption keys or use third party encryption mechanisms of their choice • Manage other access controls such as identity access management permissions and security credentials This allows AWS customers to control the entire life cycle of their content on AWS and manage their content in accordance with their own specific needs including content classification access control retention and deletion AWS access to customer content AWS makes available to each customer the compute storage database networking or other services as described on our website Customers have a number of options to encrypt their content when using the services including using AWS encryption features (such as AWS Key Management Service) managing their own enc ryption keys or using a third party encryption mechanism of their own choice AWS does not access or use customer content without the customer’s consent except as legally required AWS never uses customer content or derives information from it for other purposes such as marketing or advertising Government rights of access Queries are often raised about the rights of domestic and foreign government agencies to access content held in cloud services Customers are often confused about issues of data soverei gnty including whether and in what circumstances governments may have access to their content The local laws that apply in the jurisdiction where the content is located are an important consideration for some customers However customers also need to co nsider whether Amazon Web Services Using AWS in the Context of Philippines Privacy Considerations xii laws in other jurisdictions may apply to them Customers should seek advice to understand the application of relevant laws to their business and operations When concerns or questions are raised about the rights of domestic or foreign governments to seek access to content stored in the cloud it is important to understand that relevant government bodies may have rights to issue requests for such content under laws that already apply to the customer For example a company doing business in Country X could be subject to a legal request for information even if the content is stored in Country Y Typically a government agency seeking access to the data of an entity will address any request for information directly to that entity rather tha n to the cloud provider The Philippines like m ost countries has legislation that enables law enforcement and government security bodies to seek access to information The Philippines also has processes (including Mutual Legal Assistance Treaties) to ena ble the transfer of information to other countries in response to appropriate legal requests for information (eg relating to criminal acts) However it is important to remember that the relevant laws will contain criteria that must be satisfied in order for the relevant law enforcement body to make a valid request For example the government agency seeking access 
may need to show it has a valid reason for requiring a party to provide access to content and may need to obtain a court order or warrant Many countries have data access laws which purport to apply extraterritorially An example of a US law with extra territorial reach that is often mentioned in the context of cloud services is the US Patriot Act The Patriot Act is similar to laws in othe r developed nations that enable governments to obtain information with respect to investigations relating to international terrorism and other foreign intelligence issues Any request for documents under the Patriot Act requires a court order demonstrating that the request complies with the law including for example that the request is related to legitimate investigations The Patriot Act generally applies to all companies with an operation in the US irrespective of where they are incorporated and/or operating globally and irrespective of whether the information is stored in the cloud in an on site data center or in physical records Companies headquartered or operating outside the United States which also do business in the United States may find t hey are subject to the Patriot Act by reason of their own business operations AWS policy on granting government access AWS is vigilant about customers' security and does not disclose or move data in response to a request from the US or other government unless legally required to do Amazon Web Services Using AWS in the Context of Philippines Privacy Considerations xiii so in order to comply with a legally valid and binding order such as a subpoena or a court order or as is otherwise required by applicable law Non governmental or regulatory bodies typically must use recognized internationa l processes such as Mutual Legal Assistance Treaties with the US government to obtain valid and binding orders Additionally our practice is to notify customers where practicable before disclosing their content so they can seek protection from disclos ure unless we are legally prohibited from doing so or there is clear indication of illegal conduct in connection with the use of AWS services For more information see the Amazon Information Requests Portal online Privacy and Data Protection in the Philippines: Philippine Data Privacy Laws This part of the paper discusses aspects of the Philippine Data Privacy Laws relating to data protection The Philippine Data Privacy Act o f 2012 20 took effect on September 8 2012 while its Implementing Rules and Regulations 21 came into force on September 9 2016 The Philippine Data Privacy Laws impose requirements for collecting using disclosing transferring and processing personal data The Philippine Data Privacy Laws make a distinction between (a) the personal information “controller ”: the organization which controls the processing of personal data or instructs another to process personal data on its behalf and (b) the personal information “processor ”: an organization to whom the personal information controller outsources or instructs the processing of personal data Typically the data controller should put in place safeguards to ensure that the processing of personal data comp lies with the data protection obligations For example the data controller will need to ensure that safeguards are in place to ensure that the personal data is processed lawfully ; the confidentiality of personal data is protected ; and to prevent its use for unauthorized purposes AWS appreciates that its services are used in many different contexts for different business 
purposes and that there may be multiple parties involved in the data lifecycle of personal data included in customer content stored or processed using AWS services For simplicity the guidance in the table below assumes that in the context of customer content stored or processed using AWS services the customer: • Collects personal data from its end users or other individuals (data subjec ts) and determines the purpose for which the customer requires and will use the personal data Amazon Web Services Using AWS in the Context of Philippines Privacy Considerations xiv • Has the capacity to control who can access update and use the personal data • Manages the relationship with the individual about whom the personal data relates (referred to as a “data subject ”) including by communicating with the data subject as required to comply with any relevant disclosure and consent requirements As such the customer performs a role similar to that of a data controller as it controls its content and makes decisions about the treatment of that content including who is authorized to process that content on its behalf By comparison AWS performs a role similar to that of a data processor as AWS only uses customer content to provide the AWS services selected by each customer to that customer and does not use customer content for other purposes Where a customer processes personal data using the AWS services on behalf of and according to the directions of a third party (who may be the control ler of the personal data or another third party with whom it has a business relationship) the customer responsibilities referenced in the table will be shared and managed between the customer and that third party We summarize in the table below some Phi lippine Data Privacy Law principles We also discuss aspects of the AWS services relevant to these principles Amazon Web Services Using AWS in t he Context of Philippines Privacy Considerations xv Philippine Data Privacy Law Principle Summary of Data Protection Principle Considerations Collecting Personal Data Only personal data that is necessary and compatible with a declared specific and legitimate purpose may be collected Data subjects should consent to the collection of their personal data unless an exemption applies and be provided with specific information about the purpose and extent to which their personal data will be processed Customer: The customer determines and controls when how and why it collects personal data from data subjects and decides whether it will include that personal data in its customer content it stores or processes using AWS services The customer may also need to ensure it discloses the purposes for which it collects that personal data to the relevant data subjects and that it only uses that personal data for a permitted purpose As between the custom er and AWS the customer has a relationship with the data subjects whose personal data the customer stores on AWS and therefore the customer is able to communicate directly with them about the collection and treatment of their personal data The customer rather than AWS will also know the scope of any notifications given to or consents obtained by the customer from such individuals relating to the collection of their personal data AWS: AWS does not collect personal data from data subjects whose persona l data is included in content a customer stores or processes using AWS services and AWS has no contact with those individuals Therefore AWS is not required and is unable in the circumstances to communicate 
with the relevant data subjects Processing Personal Data Personal data should only be processed in a manner (and to the extent necessary) that is compatible with the declared specified and legitimate purpose for which it was collected Customer : The customer determines and controls why it collects p ersonal data what it will be used for who it can be used by and who it is disclosed to The customer must ensure it does so for permitted purposes AWS : AWS only uses customer content to provide the AWS services selected by the customer to that customer and does not use customer content for other purposes except as legally required Amazon Web Services Using AWS in the Context of Philippines Privacy Considerations xvi Philippine Data Privacy Law Principle Summary of Data Protection Principle Considerations Accuracy of Personal Data Personal data should be accurate Inaccurate or incomplete data should be rectified supplemented destroyed or have its further processing restricted Customer: When a customer chooses to store or process content containing personal data using AWS services the customer has control over the quality of that content and the customer retains access to and can correct it This means t hat the customer must take all required steps to ensure that the personal data included in the content a customer stores or processes using AWS services is accurate complete not misleading and kept up todate AWS: AWS’s SOC 1 Type 2 report includes controls that provide reasonable assurance that data integrity is maintained through all phases including transmission storage and processing Data Retention and Deletion Personal data should not be retained longer than necessary Personal data should be d isposed or discarded in a secure manner to prevent unauthorized processing access to or disclosure to any third party Customer: Only the customer knows why personal data included in customer content stored on AWS was collected and only the customer kno ws when it is no longer necessary to retain that personal data for legitimate purposes The customer should delete or anonymize the personal data when no longer needed AWS: The AWS services provide the customer with controls to enable the customer to dele te content as described in AWS Documentation Security measures for the protection of personal data Personal information controllers and personal information processors should implement reasonable and appropriate organizational physical and technical security measures for the protection of personal data The security measures should aim to maintain the availability integrity and confidentiality of personal data and protect it against accidental loss or destruction unlawful access fraudulent misuse unlawful destruction alteration and contamination Customer: Customers are responsible for security in the cloud including se curity of their content (and personal data included in their content) If the customer chooses to include personal data in customer content stored using AWS services the customer controls the format and structure of the content and how it is protected fro m disclosure to unauthorized parties including whether it is anonymized or encrypted AWS: AWS is responsible for managing the security of the underlying cloud environment For a complete list of all the security measures built into the core AWS cloud inf rastructure and services see Best Practices for Security Identity & Compliance Customers can validate the security controls in place within the AWS environment through AW S certifications and 
reports including the AWS System & Organization Control (SOC) 1 2 and 3 reports ISO 27001 27017 and 27018 and PCI DSS compliance reports Amazon Web Services Using AWS in the Conte xt of Philippines Privacy Considerations xvii Philippine Data Privacy Law Principle Summary of Data Protection Principle Considerations Transferring personal data to third parties Data subjects should consent to their data being shared with a third party and should be informed of the third parties that will be given access to the personal data ; the purpose of data sharing ; the categories of personal data concerned ; the intended categories of recipients of personal data ; and the existence of the rights of data subjects Personal information controllers are respo nsible for any personal data under their control or custody including personal information that is outsourced or transferred to a third party whether domestically or internationally Customer: The customer will know whether it uses the AWS services to stor e or process customer content containing personal data The customer should consider whether it is required to obtain any consents from the relevant data subjects if it decides to transfer their personal data to a third party As between the customer and A WS the customer has a relationship with the data subjects whose personal information is stored by the customer using AWS services and therefore the customer is able to communicate directly with them about such matters The customer is also best placed to inform data subjects that it will use AWS as a service provider if required AWS: AWS has no contact with data subjects whose personal data is included in content stored or processed using AWS services Therefore AWS is not required and is unable in th e circumstances to communicate with the relevant individuals to seek any required consents for any transfers of personal data to third parties Additionally AWS only stores and processes each customer ’s content in the AWS Region(s) and using the services chosen by the customer and otherwise will not move customer content without the customer’s consent except as legally required If a customer chooses to store content in more than one Region or copy or move content between Regions that is solely the customer’s choice and the customer will continue to maintain effective control of its content wherever it is stored and processed General: AWS is ISO 27001 certified and offers robust security features to all customers regardless of the geographical Region in which they store their content Amazon Web Services Using AWS in the Context of Philippines Privacy Considerations xviii Philippine Data Privacy Law Principle Summary of Data Protection Principle Considerations Rights of data subjects Data subjects are entitled to be informed and object if their person al data is processed Data subjects should be given reasonable access to their personal data for correction or erasure or blocking Customer: The customer retains control of content stored or processed using AWS services including control over who can a ccess and amend that content In addition as between the customer and AWS the customer has a relationship with the data subjects whose personal data is included in customer content stored or processed using AWS services The customer rather than AWS is t herefore able to work with relevant individuals to provide them with information about the processing of their personal data as well as access to and the ability to correct personal data included in customer content 
AWS: AWS only uses customer content to provide the AWS services selected by each customer to that customer, and AWS has no contact with the data subjects whose personal data is included in content a customer stores or processes using the AWS services. Given this, and the level of control customers enjoy over customer content, AWS is not required, and is unable in the circumstances, to provide such individuals with access to or the ability to correct their personal data.

Privacy breaches

Given that customers maintain control of their content when using AWS, customers retain the responsibility to monitor their own environment for privacy breaches and to notify regulators and affected individuals as required under applicable law. Only the customer is able to manage this responsibility. A customer's AWS access keys can be used as an example to help explain why the customer, rather than AWS, is best placed to manage this responsibility. Customers control access keys and determine who is authorized to access their AWS account. AWS does not have visibility of access keys, or of who is and who is not authorized to log into an account. Therefore, the customer is responsible for monitoring use, misuse, distribution, or loss of access keys (a brief monitoring sketch follows).
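As an illustration of the kind of self-monitoring a customer can perform, the following sketch uses the AWS SDK for Python to list the access keys attached to IAM users in the customer's own account and report when each key was last used, so that stale or unexpected keys can be rotated or deactivated. The 90-day threshold is an arbitrary example value.

```python
# Minimal sketch: auditing IAM access keys and their last-used time so that
# unused or unexpected keys can be rotated or deactivated by the customer.
from datetime import datetime, timezone
import boto3

iam = boto3.client("iam")
stale_after_days = 90  # arbitrary example threshold

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
            used_date = last_used["AccessKeyLastUsed"].get("LastUsedDate")
            age_days = (
                (datetime.now(timezone.utc) - used_date).days if used_date else None
            )
            if age_days is None or age_days > stale_after_days:
                print(
                    f"Review key {key['AccessKeyId']} for {user['UserName']}: "
                    f"last used {used_date or 'never'}"
                )
```

AWS CloudTrail can similarly be used to record and review API activity in the account, which supports this kind of customer-side monitoring.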
Where required by applicable law, the customer will need to notify data subjects or a regulator of unauthorized access to or disclosure of their personal data. There may be circumstances in which this will be the best approach in order to mitigate risk, even if it is not mandatory under the applicable law. It is for the customer to determine when it is appropriate or necessary for them to notify data subjects, and the notification process they will follow.

Other considerations

This whitepaper does not discuss other privacy or data protection laws aside from the Philippine Data Privacy Laws. Customers should consider the specific requirements that apply to them, including any industry-specific requirements. The relevant privacy and data protection laws and regulations applicable to individual customers will depend on several factors, including where a customer conducts business, the industry in which they operate, the type of content they wish to store, where or from whom the content originates, and where the content will be stored. Customers concerned about their privacy regulatory obligations should first ensure they identify and understand the requirements applying to them, and seek appropriate advice.

Additional resources

To help customers further understand how they can address their privacy and data protection requirements, customers are encouraged to read the risk, compliance, and security whitepapers, best practices, checklists, and guidance published on the AWS website. This material can be found at https://aws.amazon.com/compliance and https://aws.amazon.com/security.

Further reading

AWS also offers training to help customers learn how to design, develop, and operate available, efficient, and secure applications on the AWS cloud and gain proficiency with AWS services and solutions. We offer free instructional videos, self-paced labs, and instructor-led classes. Further information on AWS training is available at https://aws.amazon.com/training/. AWS certifications certify the technical skills and knowledge associated with the best practices for building secure and reliable cloud-based applications using AWS technology. Further information on AWS certifications is available at https://aws.amazon.com/certification/. If you require further information, contact AWS at https://aws.amazon.com/contact-us/ or contact your local AWS account representative.

Document history

September 30, 2021: Reviewed for technical accuracy
May 1, 2018: Second publication
March 1, 2018: First publication
Using AWS in the Context of Singapore Privacy Considerations December 3 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Customer Content: Considerations relevant to privacy and data protection 6 AWS shared responsibility approach to managing cloud security 6 Selecting AWS Global Regions in the AWS Management Console 10 Transfer of personal data cross border 11 Who can access customer content? 12 Customer control over content 12 AWS access to customer content 12 Government rights of access 12 AWS policy on granting government access 13 Privacy and Data Protection in Singapore: The PDPA 14 Privacy Breaches 1 Other consideratio ns 1 Closing Remarks 1 Additional Resources 2 Further Reading 2 Overview This document provides information to assist customers who want to use AWS to store or process content containing personal data in the context of key Singapore privacy considerations and the Personal Data Protection Act 2012 (“PDPA”) It will help customers understand: • How AWS services operate including how customers can address security and encrypt their content • The geographic locations w here customers can choose to store content and other relevant considerations • The respective roles the customer and AWS each play in managing and securing content stored on AWS services This whitepaper focuses on typical questions asked by AWS customers when considering the implications of the PDPA on their use of AWS services to store or process content containing personal data There will also be other relevant considerations for each customer to address A customer may for example need to comply wit h industry specific requirements the laws of other jurisdictions where that customer conducts business or contractual commitments that customer makes to a third party This paper is provided solely for informational purposes It is not legal advice and should not be relied upon as legal advice As each customer’s requirements will differ AWS strongly encourages customers to obtain appropriate advice on their implementation of privacy and data protection requirements and on applicable laws and other req uirements relevant to their business When referenced in this paper content mean s software (including virtual machine images) data text audio video images and other content that a customer or any end user stores or processes using AWS services A customer’s content can include objects that the customer stores using Amazon Simple Storage Service (Amazon S3) files stored on an Amazon Elastic Block Store (Amazon EBS) volume or the contents of an Amazon DynamoDB database table Such content may but will not necessarily include personal data relating to that customer its end users or third parties The terms of the AWS Customer Agreement or any other relevant agreement with Amazon governing the use of 
AWS services also appl ies to customer content Customer content does not include data that a customer provides to Amazon in connection with the creation or administration of its AWS accounts such as a customer’s names phone numbers email addresses and billing information —this is account information and it is governed by the AWS Privacy Notice Customer Content: Considerations relevant to privacy and data pr otection Storage of content presents all organizations with a number of common practical matters to consider including: • Will the content be secure? • Where will content be stored? • Who will have access to content? • What laws and regulations apply to the conte nt and what is needed to comply with these? These considerations are not new and are not cloud specific They are relevant to internally hosted and operated systems as well as traditional third party hosted services Each may involve storage of content on third party equipment or on third party premises with that content managed accessed or used by third party personnel When using AWS services each AWS customer maintains ownership and control of their content including control over: • What content they c hoose to store or process using AWS services • Which AWS services they use with their content • The Region(s) where their content is stored • The format structure and security of their content including whether it is masked anonymized or encrypted • Who has acc ess to their AWS accounts and content and how those access rights are granted managed and revoked Because AWS customers retain ownership and control over their content within the AWS environment they also retain responsibilities relating to the security of that content as part of the AWS “shared responsibility” model This shared responsibility model is fundamental to understanding the respective roles of the customer and AWS in the context of privacy and data protection requirements that may apply to co ntent that customers choose to store or process using AWS services AWS shared responsibility approach to managing cloud security Will customer content be secure? The answer to that question is particularly important because m oving IT infrastructure to AWS creates a shared responsibility model between the customer and AWS as both the customer and AWS have important roles in the operation and management of security AWS operates manages and controls the components from the host operating system and virtual ization layer down to the physical security of the facilities in which the AWS services operate Customer s are responsible for management of guest operating system s (including updates and security patches to th ose guest operating system s) and associated application software as well as the configuration of AWS provided security group firewall s and other security related features Customer s will generally connect to the ir AWS environment through services the y acquire from third parties (for example inter net service providers) AWS does not provide these connections and they are therefore part of the customer's area of responsibility Customers should consider the security of these connections and the security responsibilities of such third parties in relation to their systems The respective roles of the customer and AWS in the shared responsibility model are shown below: What does the shared responsibility model mean for the security of customer content? 
When evaluating the security of a cloud solutio n it is important for customers to understand and distinguish between: • Security measures that the cloud service provider (AWS) implements and operates – “security of the cloud” • Security measures that the customer implements and operates related to th e security of customer content and applications that make use of AWS services – “security in the cloud” While AWS manages security of the cloud security in the cloud is the responsibility of the customer as customers retain control of what security they choose to implement to protect their own content applications systems and networks – no differently than they would for applications in an on site data center Understanding security OF the cloud AWS is responsible for managing the security of the underl ying cloud environment The AWS cloud infrastructure has been architected to be one of the most flexible and secure cloud computing environments available designed to provide optimum availability while providing complete customer segregation It provides extremely scalable highly reliable services that enable customers to deploy applications and content quickly and securely at massive global scale if necessary AWS services are content agnostic in that they offer the same high level of security to all customers regardless of the type of content being stored or the geographical region in which they store their content AWS’s world class highly secure data centers use state ofthe art electronic surveillance and multi factor access control systems Da ta centers are staffed 24x7 by trained security guards and access is authorized strictly on a least privileged basis For a complete list of all the security measures built into the core AWS Cloud infrastructure and services see Best Practices for Security Identity & Compliance We are vigilant about our customers' security and have implemented sophisticated technical and physical measures against unauthorized access Customers can validate the security controls in place within the AWS environment through AWS certifications and reports These includ e the AWS System & Organization Control (SOC) 1 2 and 3 reports ISO 27001 27017 27018 and 9001 certifications and PCI DSS9 compliance reports The ISO 27018 certification demonstrates that AWS has a system of controls in place that specifica lly address the privacy protection of customer content These reports and certifications are produced by independent third party auditors and attest to the design and operating effectiveness of AWS security controls AWS compliance certifications and repor ts can be requested at https://awsamazoncom/compliance/contact More information on AWS compliance certifications reports and alignment with best practices and standards can be found on the AWS compliance site Understanding security IN the cloud Customers retain ownership and control of their content when using AWS services Customers rather than AWS determine what content they store or proce ss using AWS services Because it is the customer who decides what content to store or process using AWS services only the customer can determine what level of security is appropriate for the content they store and process using AWS Customers also have complete control over which services they use and whom they empower to access their content and services including what credentials will be required Customers control how they configure their environments and secure their content including whether they encrypt their content (at rest and in 
transit) and what other security features and tools they use and how they use them AWS does not change customer configuration settings as these settings are determined and controlled by the customer AWS customers have the complete freedom to design their security architecture to meet their compliance needs This is a key difference from traditional hosting solutions where the provider decides on the architecture AWS enables and empowers the customer to decide when and how security measures will be implemented in the cloud in accordance with each customer's business needs For example if a higher availability architecture is required to protect customer content the customer may add redundant systems backups loc ations network uplinks etc to create a more resilient high availability architecture If restricted access to customer content is required AWS enables the customer to implement access rights management controls both on a systems level and through encr yption on a data level To assist customers in designing implementing and operating their own secure AWS environment AWS provides a wide selection of security tools and features customers can use Customers can also use their own security tools and cont rols including a wide variety of third party security solutions Customers can configure their AWS services to leverage a range of such security features tools and controls to protect their content including sophisticated identity and access management tools security capabilities encryption and network security Examples of steps customers can take to help secure their content include implementing: • Strong password policies enabling Multi Factor Authentication (MFA) assigning appropriate permissions to users and taking robust steps to protect their access keys • Appropriate firewalls and network segmentation encrypting content and properly architecting systems to decrease the risk of data loss and unauthorized access Because customers rather than AWS control these important factors customers retain responsibility for their choices and for security of the content they store or process using AWS services or that they connect to their AWS infrastructure such as the guest operating system application s on their compute instances and content stored and processed in AWS storage databases or other services AWS provides an advanced set of access encryption and logging features to help customers manage their content effectively including AWS Key Manag ement Service and AWS CloudTrail To assist customers in integrating AWS security controls into their existing control frameworks and help customers design and execute security assessments of their organization’s use of AWS services AWS publishes a number of whitepapers relating to security governance risk and compliance; and a number of checklists and best practices Customers are also free to design and execute security assessments ac cording to their own preferences and can request permission to conduct scans of their cloud infrastructure as long as those scans are limited to the customer’s compute instances and do not violate the AWS Accept able Use Policy AWS Regions: Where will content be stored? 
AWS data centers are built in clusters in various global regions We refer to each of our data center clusters in a given country as an “AWS Region” Customers have access to a number of AWS Regions around the world including an Asia Pacific (Singapore) Region Customers can choose to use one Region all Regions or any combination of AWS Regio ns The map below shows AWS Region locations as at September 2021 The AWS Cloud spans 81 Availability Zones within 25 geographic regions around the world with announced plans for 24 more Availability Zones and 8 more AWS Regions in Australia India Indo nesia Israel New Zealand Spain Switzerland and United Arab Emirates (UAE) AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud by addressing their specific re gulatory and compliance requirements AWS China (Beijing) is also an isolated AWS Region Customers who wish to use the AWS China (Beijing) Region are required to sign up for a separate set of account credentials unique to the China (Beijing) Region AWS c ustomers choose the AWS Region or Regions in which their content and servers will be located This allows customers with geographic specific requirements to establish environments in a location or locations of their choice For example AWS customers in S ingapore can choose to deploy their AWS services exclusively in one AWS Region such as the Asia Pacific (Singapore) Region and store their content onshore in Singapore if this is their preferred location If the customer makes this choice AWS will not mo ve their content from Singapore without the customer’s consent except as legally required Selecting AWS Global Regions in the AWS Management Console The AWS Management Console gives customer s secure login using their AWS or IAM a ccount credentials When using the AWS management console or in placing a request through an AWS Application Programming Interface (API) the customer identifies the particular AWS Region(s) where it wishes to use AWS services The figure below provides an example of the AWS Reg ion selection menu presented to customers when uploading content to an AWS storage service or provisioning compute resources using the AWS management console Any compute and other resources launched by the customer will be located in the AWS Region des ignated by the customer For example when customer choose s the Asia Pacific (Singapore) Region for its compute resources such as Amazon EC2 or AWS Lambda launched in that environment would only reside in the Asia Pacific (Singapore) Region This option can also be leveraged for other AWS Regions Transfer of personal data cross border In 2016 the European Commission approved and adopted the new General Data Protection Regulation (GDPR) The GDPR replaced the EU Data Protection Directive as well as all local laws relating to it All AWS services comply with the GDPR AWS provides customers with services and resources to help them comply with GDPR requirements that may apply to their operations These include AWS’ adherence to the CISPE code of conduct granular data access controls monitoring and logging tools encryption key management audit capability adherence to IT security standards and AWS’ C5 attestations For additional information please visit the AWS General Data Protection Regulation (GDPR) Center and see our Navigating GDPR Complian ce on AWS guidance When using AWS services customers may choose to transfer content containing personal data cross border and they will 
need to consider the legal requirements that apply to such transfers AWS provides a Data Processing Addendum that i ncludes the Standard Contractual Clauses 2010/87/EU (often referred to as “Model Clauses”) to AWS customers transferring content containing personal data (as defined in the GDPR) from the EU to a country outside of the European Economic Area such as Singapore With our EU Data Processing Addendum and Model Clauses AWS customers —whether established in Europe or a global company operating in the European Economic Area —can continue to run their global operations using AW S in full compliance with the GDPR The AWS Data Processing Addendum is incorporated in the AWS Service Terms and applies automatically to the extent the GDPR applies to the customer’s processing of personal data on AWS Who can access customer content? Customer control over content Customers using AWS maintain and do not release effective control over their content within the AWS environment They can: • Determine where their content will be located for example the type of storage they use on AWS and the ge ographic location (by AWS Region) of that storage • Control the format structure and security of their content including whether it is masked anonymized or encrypted AWS offers customers options to implement strong encryption for their customer content i n transit or at rest and also provides customers with the option to manage their own encryption keys or use third party encryption mechanisms of their choice • Manage identity and access management controls to their content such as by using AWS Identity and Access Management (IAM) and by setting appropriate permissions and security credentials to access their AWS environment and content This allows AWS customers to control the entire life cycle of their content on AWS and manage their content in accordance with their own specific needs including content classification access control retention and deletion AWS access to customer content AWS makes available to each customer the compute storage database networking or other services as describ ed on our website Customers have a number of options to encrypt their content when using the services including using AWS encryption features (such as AWS Key Management Service) managing their own encryption keys or using a third party encryption mech anism of their own choice AWS does not access or use customer content without the customer’s consent except as legally required AWS never uses customer content or derives information from it for other purposes such as marketing or advertising Governme nt rights of access Queries are often raised about the rights of domestic and foreign government agencies to access content held in cloud services Customers are often confused about issues of data sovereignty including whether and in what circumstances g overnments may have access to their content The local laws that apply in the jurisdiction where the content is located are an important consideration for some customers However customers also need to consider whether laws in other jurisdictions may appl y to them Customers should seek advice to understand the application of relevant laws to their business and operations When concerns or questions are raised about the rights of domestic or foreign governments to seek access to content stored in the cloud it is important to understand that relevant government bodies may have rights to issue requests for such content under laws that already apply to the customer For example a 
company doing business in Country X could be subject to a legal request for in formation even if the content is stored in Country Y Typically a government agency seeking access to the data of an entity will address any request for information directly to that entity rather than to the cloud provider Singapore like most countries has legislation that enables Singapore’s law enforcement and government security bodies to seek access to information Singapore has processes (including Mutual Legal Assistance Treaties) to enable the transfer of information to other countries in respons e to appropriate legal requests for information (eg relating to criminal acts) However it is important to remember that the relevant laws contain criteria that must be satisfied before authorizing in order for the relevant law enforcement body to make a valid request For example the government agency seeking access will need to show it has a valid reason for requiring a party to provide access to content and may need to obtain a court order or warrant Many countries have data access laws which purp ort to apply extraterritorially An example of a US law with extra territorial reach that is often mentioned in the context of cloud services is the US Patriot Act The Patriot Act is similar to laws in other developed nations that enable governments to obtain information with respect to investigations relating to international terrorism and other foreign intelligence issues Any request for documents under the Patriot Act requires a court order demonstrating that the request complies with the law in cluding for example that the request is related to legitimate investigations The Patriot Act generally applies to all companies with an operation in the US irrespective of where they are incorporated and/or operating globally and irrespective of whet her the information is stored in the cloud in an on site data center or in physical records This means that companies headquartered or operating outside the United States which also do business in the United States may find they are subject to the Patri ot Act by reason of their own business operations AWS policy on granting government access AWS is vigilant about customers' security and does not disclose or move data in response to a request from the US or other government unless legally required to d o so in order to comply with a legally valid and binding order such as a subpoena or a court order or as is otherwise required by applicable law Non governmental or regulatory bodies typically must use recognized international processes such as Mutual Legal Assistance Treaties with the US government to obtain valid and binding orders Additionally our practice is to notify customers where practicable before disclosing their content so they can seek protection from disclosure unless we are legally prohibited from doing so or there is clear indication of illegal conduct in connection with the use of AWS services For additional information please visit the Amazon Informat ion Requests Portal Privacy and Data Protection in Singapore: The PDPA This part of the paper discusses aspects of the PDPA relating to data protection The data protection principles under the PDPA impose requirements for collecting using disclosing t ransferring and processing personal data The PDPA makes a distinction between the organization that processes or controls/authorizes the processing of personal data and a “data intermediary” who processes personal data on behalf of another organization A data intermediary 
when it processes personal data for another organization has more limited obligations under the data protection principles These arise under the Protection Obligation and Retention Limitation Obligation AWS appreciates that its serv ices are used in many different contexts for different business purposes and that there may be multiple parties involved in the data lifecycle of personal data included in customer content stored or processed using AWS services For simplicity the guida nce in the table below assumes that in the context of customer content stored or processed using AWS services the customer: • Collects personal data from its end users or other individuals and determines the purpose for which the customer requires and wil l use the personal data • Has the capacity to control who can access update and use the personal data • Manages the relationship with the individual about whom the personal data relates including by communicating with the individual as required to comply wit h any relevant disclosure and consent requirements Customers may in fact work with (or rely on) third parties to discharge these responsibilities but the customer rather than AWS would manage its relationships with third parties We summarize the data protection principles of the PDPA in the table below We also discuss aspects of the AWS services relevant to these requirements Data Protection Principle Summary of Dat a Protection Obligations Considerations Notification and consent obligation Individuals should be notified in advance of the purposes for which their personal data will be collected used and disclosed Personal data may only be collected used or disclosed for the purpose for which the individual has given his/her consent Custome r: The customer determines and controls when how and why it collects personal data from individuals and decides whether it will include that personal data in customer content it stores or processes using AWS services The customer may also need to ensure it discloses the purposes for which it collects that data to the relevant individuals; obtains the data from a permitted source; and that it only uses the data for a permitted purpose As between the customer and AWS the customer has a relationship with the individuals whose personal data the customer stores on AWS and therefore the customer is able to communicate directly with them about collection and treatment of their personal data The customer rather than AWS will also know the scope of any notific ations given to or consents obtained by the customer from such individuals relating to the collection use or disclosure of their personal data The customer will know whether it uses AWS services to store or process customer content containing personal data and therefore is best placed to inform individuals that it will use AWS as a service provider if required AWS: AWS does not collect personal data from individuals whose personal data is included in content a customer stores or processes using AWS and AWS has no contact with those individuals Therefore AWS is not required and is unable in the circumstances to communicate with the relevant individuals AWS only uses customer content to provide and maintain the AWS services the customer selects and does not use customer content for any other purposes Data Protection Principle Summary of Dat a Protection Obligations Considerations Purpose limitation obligation Personal data may only be collected used or disclosed for reasonable purposes Customer: The customer determines and 
controls why it collects personal data what it will be used for who it can be used by and who it is disclosed to The customer should ensure it only does so for permitted purposes If the customer chooses to include personal data in customer content stored in AWS the customer controls the format and structure of its content and how it is protected from disclosure to unauthorized parties including whether it is anonymized or encrypted AWS: AWS only uses customer content to provide and maintain the AWS services the customer selects and does not use customer content for any other purposes Access and correction obligation Individuals should be able to access and correct their personal data and find out how it has been used and to whom it has been disclosed in the past year Customer: The customer retains control of content stored or processed using AWS services including control over how that content is secured and who can access and amend that content In addition as between the customer and AWS the customer has a relationship with the individu als whose personal data is included in customer content stored or processed using AWS services The customer rather than AWS is therefore able to work with relevant individuals to provide them access to and the ability to correct personal data included in customer content AWS: AWS only uses customer content to provide and maintain AWS services customer selects and does not use customer content for any other purposes AWS has no contact with the individuals whose personal data is included in content a customer stores or processes using the AWS services Given this and the level of control customers enjoy over customer content AWS is not required and is unable in the circumstances to provide such individuals with access to or the ability to correct their personal data Data Protection Principle Summary of Dat a Protection Obligations Considerations Accuracy obligation An organization should take all reasonable steps to ensure that personal data is accurate and complete if the personal data is likely to be used to make a decision that affect s the individual or is disclosed to another organization Customer: When a customer chooses to store or process content containing personal data using AWS services the customer has control over the quality of that content and the customer retains access to and can correct it This means that the customer should take all required steps to ensure that personal data included in customer content is accurate complete not misleading and kept upto date AWS : AWS’s SOC 1 Type 2 report includes control objectives that provide reasonable assurance that data integrity is maintained through all phases of the services including transmission storage and processing Protection obligation Organizations should protect personal data from unauthorized access use disclosure modification or disposal by implementing reasonable security arrangements Customer: Customers are responsible for security in the cloud including security of their content (and personal data included in their content ) AWS: AWS is responsible for managing the security of the underlying cloud environment Customers can validate the security controls in place within the AWS environment through AWS certifications and reports including the AWS System & Organization Control (SOC) 1 2 and 3 reports ISO 27001 27017 and 27018 certifications and PCI DSS compliance reports Retention limitation obligation Personal data should not be kept longer than necessary for the fulfilment of the 
purpose for which the personal data was collected or retained when it no longer necessary for legal or business purposes Customer: Only the customer knows why personal data included in customer content stored or processing using AWS services was collected and only the customer knows when it is required to retain that personal data for its legal or business purposes The customer should delete or anonymize the personal data when no longer needed AWS: The AWS services provide the customer with controls to enable the customer to delete content as described in the AWS Documentation Data Protection Principle Summary of Dat a Protection Obligations Considerations Transfer limitation obligation Organizations may only transfer personal data to recipients outside Singapore where the recipient is bound by legally enforceable obligations to protect the personal data in accordance with a standard comparable to the PDPA Customer: The customer can choose the AWS Region or Regions in which their content will be located and can choose to deploy their AWS services exclusively in a single Region if preferred AWS services are structured so that a customer maintains effective control of customer content regardless of what Region they use for their content The customer should disclose to individuals the locations in which it stores or processes their personal data and obtain any required consents relating to such locations from the relevant individuals if necessary As between the customer and AWS the customer has a relationship with the individuals whose personal data the customer stores or processes using AWS services and therefore the customer is able to communicate directly with them about such matters AWS: AWS only stores and processes each customer’s content in the AWS Region(s) and using the services chosen by the customer and otherwise will not move customer content without the customer’s consent except as legally required If a customer chooses to s tore content in more than one Region or copy or move content between Regions that is solely the customer’s choice and the customer will continue to maintain effective control of its content wherever it is stored and processed General: AWS is ISO 27001 certified and offers robust security features to all customers regardless of the geographical Region in which they store their content Data Protection Principle Summary of Dat a Protection Obligations Considerations Openness obligation Organizations should designate a data protection officer implement policies and procedures to meet the PDPA obligations and make such policies and procedures publicly available Customer: The customer determines and controls when how and why it collects personal data from individuals and whether it will include that personal data in the content the customer stores or processes using AWS services As between the customer and AWS the customer has a relationship with the individuals whose personal data the customer stores or processes using AWS services The customer is therefore responsible for ensuring that the individuals from whom it collects personal data are aware of the customers’ data protection policies and procedures and that its policies and procedures meet the requ irements of the PDPA AWS: AWS does not collect personal data from individuals whose personal data is included in content a customer stores or processes using AWS services and AWS has no contact with those individuals Therefore AWS cannot address in its policies how each customer chooses to treat personal data 
included in the content a customer stores or processes using AWS services.

Privacy Breaches

Given that customers maintain control of their content when using AWS, customers retain the responsibility to monitor their own environment for privacy breaches and to notify regulators and affected individuals as required under applicable law. Only the customer is able to manage this responsibility. Customers are in the best position to do so because they maintain and manage the necessary security and access controls to their accounts and content. AWS customers have access to an advanced set of access, encryption, and logging features to help them protect their content effectively (e.g., AWS Identity and Access Management (IAM), AWS Organizations, and AWS CloudTrail). Customers can use IAM to create and manage AWS users and groups, and use permissions to allow and deny their access to customer content. AWS CloudTrail enables customers to monitor and record account activity across their environment, giving them control over the storage, analysis, and remediation of that activity data.

In Singapore, 'organizations', i.e., customers (but not data intermediaries such as AWS), are required to notify the Personal Data Protection Commission and affected individuals of unauthorized access to or disclosure of personal data that results in (or is likely to result in) significant harm to individuals or is of a significant scale. Additionally, there are circumstances in which notifying individuals will be the best approach in order to mitigate risk, even though it is not mandatory under the applicable law. It is for the customer to determine when it is appropriate or necessary for them to notify individuals, and the notification process they will follow.

Other considerations

This whitepaper does not discuss privacy or data protection laws other than the PDPA. Customers should consider the specific requirements that apply to them, including any industry-specific requirements. The relevant privacy and data protection laws and regulations applicable to individual customers will depend on several factors, including where a customer conducts business, the industry in which they operate, the type of content they wish to store, where or from whom the content originates, and where the content will be stored. Customers concerned about their privacy regulatory obligations should first ensure they identify and understand the requirements applying to them and seek appropriate advice.

Closing Remarks

At AWS, security is always our top priority. We deliver services to millions of active customers, including enterprises, educational institutions, and government agencies in over 190 countries. Our customers include financial services providers and healthcare providers, and we are trusted with some of their most sensitive information. AWS services are designed to give customers flexibility over how they configure and deploy their solutions, as well as control over their content, including where it is stored, how it is stored, and who has access to it. AWS customers can build their own secure applications and store content securely on AWS.

Additional Resources

To help customers further understand how they can address their privacy and data protection requirements, customers are encouraged to read the risk, compliance, and security whitepapers, best practices, checklists, and guidance published on the AWS website.
This material can be found at http://aws.amazon.com/compliance and http://aws.amazon.com/security. As of the date of this document, specific whitepapers about privacy and data protection considerations are also available for the following countries or regions: Germany, Australia, Hong Kong, Japan, Malaysia, New Zealand, and the Philippines.

Further Reading

AWS also offers training to help customers learn how to design, develop, and operate available, efficient, and secure applications on the AWS cloud and gain proficiency with AWS services and solutions. We offer free instructional videos, self-paced labs, and instructor-led classes. Further information on AWS training is available at: http://aws.amazon.com/training/

AWS certifications certify the technical skills and knowledge associated with the best practices for building secure and reliable cloud-based applications using AWS technology. Further information on AWS certifications is available at: http://aws.amazon.com/certification/

If you require further information, please contact AWS at: https://aws.amazon.com/contact-us/ or contact your local AWS account representative.

Document Revisions

September 2014: First publication
January 2016: Second publication
March 2018: Third publication
May 2018: Fourth publication
December 2021: Fifth publication
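As a concrete illustration of the logging controls discussed in the Privacy Breaches section above, the sketch below shows how a customer could enable account-activity recording with AWS CloudTrail using the AWS Tools for Windows PowerShell. It is a minimal, hypothetical example rather than a definitive implementation: the bucket and trail names are invented, the Region is assumed to be Singapore (ap-southeast-1), and it assumes that credentials are already configured and that the S3 bucket policy permits CloudTrail log delivery (not shown here).

# Minimal sketch; names and Region are placeholders, bucket policy for CloudTrail delivery not shown.
Import-Module AWSPowerShell
Set-DefaultAWSRegion -Region ap-southeast-1           # keep audit logs in the Singapore Region

$bucket = 'example-audit-log-bucket'                  # hypothetical bucket name
$trail  = 'example-account-trail'                     # hypothetical trail name

New-S3Bucket -BucketName $bucket                      # bucket that will receive CloudTrail log files
New-CTTrail -Name $trail -S3BucketName $bucket -IsMultiRegionTrail $true
Start-CTLogging -Name $trail                          # begin recording API activity

# Later, review recent console sign-in events as part of routine breach monitoring
Find-CTEvent -StartTime (Get-Date).AddDays(-7) -LookupAttribute @{ AttributeKey = 'EventName'; AttributeValue = 'ConsoleLogin' }

Customers would typically also restrict access to the log bucket with IAM and S3 bucket policies so that the audit trail itself is protected from modification or deletion.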
|
General
|
consultant
|
Best Practices
|
Using_AWS_in_the_Context_of_UK_Healthcare_IG_SoC_Process
|
ArchivedUsing AWS in the context of UK Healthcare IG SoC process May 2016 This paper has been archived For the latest technical guidance see https://awsamazoncom/compliance/ programs/ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 2 of 24 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 3 of 24 Table of Contents Abstract 4 Introduction 4 Government Security Classifications in context of UK Healthcare workloads 5 Cloud Security Principles and IG SoC 5 GCloud framework and GOVUK Digital Marketplace 5 Shared Responsibility Environment 6 IG Toolkit requirements for a Commercial Third Party Version 13 7 Information Governance Management 8 Confidentiality and Data Protection 10 Information Security 14 Healthcare Reference Architecture 21 Architecture Overview 21 AWS Security Implementation 22 Identity and Access Management 22 Protecting Data at Rest 22 Protecting Data in Transit 22 Amazon Virtual Private Cloud (VPC) 23 Elastic Load Balancing 23 Conclusion 23 Additional Resources 24 Document Revisions 24 ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 4 of 24 Abstract This whitepaper is intended to assist organisations using Amazon Web Services (AWS) for United Kingdom (UK) National Health Service (NHS) workloads UK’s Department of Health sponsors the Health and Social Care Information Centre (HSCIC) to provide information data and IT systems for commissioners analysts and clinicians in health and social care As part of this role HSCIC publishes guidance and requirements on Information Governance (IG) IG Statement of Compliance (IG SoC) is a process by which organisations enter into an agreement with HSCIC for access to HSCIC’s services including the NHS National Network (N3) in order to preserve the integrity of those services Currently AWS does not directly access services provided by HSCIC including the NHS N3 However AWS Partners or customers may have or require access to HSCIC services and hence require them to comply with the IG SoC process This document aims to help the reader understand: The role that the customer and/or partner and AWS play in ownership management and security of the content stored on AWS A reference architecture that demonstrates shared responsibility model to meet IG SoC requirements How AWS aligns with each of the 17 requirements for a Commercial Third Party within HSCIC’s IG Toolkit requirements Introduction All organisations that wish to use HSCIC services including the N3 network must complete the IG SoC process The IG SoC process set out a range of security 
related requirements that must be satisfied in order for an organisation to provide assurances with respect to safeguarding the N3 network and information assets that may be accessed The IG Toolkit is part of the IG SoC process in that organisations must carry out an annual assessment evidence their compliance with the requirements and accept the IG Assurance Statement which confirms the organisation’s commitment to meeting and maintaining the required standards of information governance For organisations that need to complete the IG SoC process a 3step process must be followed as described on the ‘IG SoC for Non NHS Organisations’ website Key steps of this process are described below: Step 1 Complete and submit the application form which includes details of an NHS sponsor Additional documentation: Logical Connection Architecture (only if you are connecting DIRECTLY to N3) Offshoring policy and ISMS document Step 2 Review the IG Toolkit assessment for the organisationtype Complete and publish the IG Toolkit assessment annually ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 5 of 24 Step 3 ‘Authority to Proceed’ notification provided through British Telecom (BT) N3 team BT N3 team will contact applicant to proceed Government Security Classifications in context of UK Healthcare workloads Under the UK Government Security Classifications HM Government information assets can be classified into three types: OFFICIAL SECRET and TOP SECRET Each classification attracts a baseline set of security controls providing appropriate protection against typical threats AWS customers and partners will be required to follow the HSCIC guidance when managing information assets which may or may not include patient data HSCIC offers guidance on looking after information according to the principles of good Information Governance Cloud Security Principles and IG SoC For UK government organisations to use cloud services for OFFICIALmarked systems CESG Cloud Security Guidance includes a risk management approach to using cloud services a summary of the Cloud Security Principles and guidance on implementation of the Cloud Security Principles Our Cloud Security Principles whitepaper provides guidance on how AWS aligns with Cloud Security Principles and the objectives of the principles as part of CESG’s Cloud Security Guidance For our customers and partners using AWS for UK healthcare information assets marked as OFFICIAL we have mapped each IG SoC requirement with the appropriate Cloud Security Principle in this whitepaper For architectures managing OFFICIALmarked information assets and for more information on using AWS in the context of Cloud Security Principles we recommend referring to our Cloud Security Principles whitepaper GCloud framework and GOV UK Digital Marketplace The GCloud framework is a compliant route to market for UK public sector organisations to source commoditised cloudbased IT services on a direct award basis The framework supports a more time and cost effective procurement process for buyers and suppliers The UK Digital Marketplace lists related security questions based on the Cloud Security Principles and responses for 12 AWS services These services are listed below with links to service description and digital marketplace: 1 Amazon Elastic Compute Cloud (Amazon EC2) Digital Marketplace link 2 Auto Scaling Digital Marketplace link 3 Elastic Load Balancing Digital Marketplace link 4 Amazon Virtual Private Cloud (Amazon VPC) Digital Marketplace 
link 5 AWS Direct Connect Digital Marketplace link 6 Amazon Simple Storage Service (Amazon S3) Digital Marketplace link 7 Amazon Glacier Digital Marketplace link ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 6 of 24 8 Amazon Elastic Block Store (Amazon EBS) Digital Marketplace link 9 Amazon Relational Database Service (Amazon RDS) Digital Marketplace link 10 AWS Identity and Access Management (IAM) Digital Marketplace link 11 Amazon CloudWatch Digital Marketplace link 12 AWS Enterprise Support Digital Marketplace link Shared Responsibility Environment When using AWS services customers maintain complete control over their content and are responsible for managing critical content security requirements including: What content they choose to store on AWS Which AWS services are used with the content In what country that content is stored The format and structure of that content and whether it is masked anonymised or encrypted Who has access to that content and how those access rights are granted managed and revoked Because AWS customers retain control over their data they also retain responsibilities relating to that content as part of the AWS “shared responsibility ” model This shared responsibility model is fundamental to understanding the respective roles of the customer and AWS in the context of the Cloud Security Principles Under the shared responsibility model AWS operates manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate In turn customers assume responsibility for and management of the ir operating system (including updates and security patches) other associated application software as well as the configuration of the AWSprovided security group firewall Customers should carefully consider the services they choose as their responsibilities vary depending on the services they use the integration of those services into their IT environments and applicable laws and regulations It is possible to enhance security and/or meet more stringent compliance requirements by leveraging technology such as hostbased firewalls hostbased intrusion detection/ prevention and encryption AWS provides tools and information to assist customers in their efforts to account for and validate that controls are operating effectively in their extended IT environment More information can be found on the AWS Compliance center at http://awsamazoncom/compliance ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 7 of 24 IG Toolkit requirements for a Commercial Third Party Version 13 IG Toolkit is a Department of Health (DH) policy delivery vehicle that the HSCIC develops and maintains It combines the legal rules and central guidance set out by DH policy and presents them i n a single standard of information governance requirements The organisations in scope of this process are required to carry out selfassessments of their compliance against the IG requirements For Commercial Third Party organisations the IG Toolkit lists 17 requirements that these organisations must assess within three requirement initiatives – Information Governance Management Confidentiality and Data Protection Assurance and Information Security Assurance Details on the 17 requirements from the IG Toolkit and how AWS aligns with these requirements with the related assurance approach are described below with two notes: AWS customers 
and partners providing services to HSCIC should meet and maintain each individual requirement described below using their designated IG responsible staff under the Shared Responsibility Model The use of AWS and the AWS approach described below does not satisfy their responsibilities for the requirement in its entirety IG Toolkit requirements and the IG SoC process are subject to revision AWS will attempt to update the guidance in this document to reflect these changes in due course following the revision but customers should review the HSCIC guidance to confirm applicability ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 8 of 24 Information Governance Management Requirement Requirement Description Customer responsibility and AWS approach Cloud Security Principle mapping Requirement 13 114 Requirement Details Responsibility for Information Governance has been assigned to an appropriate member or members of staff It is important that there is a consistent approach to information handling within the organisation which is in line with the law central policy contractual terms and conditions and best practice guidance This requires one or more members of staff to be assigned clear responsibility for driving any required improvements Customers building systems connecting to HSCIC services or N3 network are required to assign Information Governance responsibility to an appropriate member or members of staff AWS has an established information security organization man aged by the AWS Security team and is led by the AWS Chief Information Security Officer (CISO) AWS Security establishes and maintains formal policies and procedures to delineate the minimum standards for logical access on the AWS platform and infrastructur e hosts The policies also identify functional responsibilities for the administration of logical access and security The implementation of this requirement is validated independently in ISO 27001 PCIDSS and SOC certifications Principle 4: Governance Framework Requirement 13 115 Requirement Details There is an information governance policy that addresses the overall requirements of information governance There is a need to ensure that everyone working for or on behalf of the organisation (including temps volunteers locums and students) is aware of the org anisation’s overall approach to IG and where underpinning procedures and processes can be found This can be achieved by developing an Information Governance policy Information security and governance policies are approved and communicated across AWS to ensure the implementation of appropriate security measures across the environment The implementation of this requirement is validated independently in ISO 27001 PCI DSS and SOC certifications Principle 4: Governance Framework ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 9 of 24 Requirement Requirement Description Customer responsibility and AWS approach Cloud Security Principle mapping Requirement 13 116 Requirement Details All contracts (staff contractor and third party) contain clauses that clearly identify information governance responsibilities One of the ways in which an organisation can ensure it fulfills its legal and other responsibilities regarding confidential information is to ensure that all staff members (including temps locums students and volunteers) are fully infor med of their own obligations to comply with information governance requirements All personnel 
supporting AWS systems and devices must sign a non disclosure agreement prior to being granted access Additionally upon hire personnel are required to read an d accept the Acceptable Use Policy and the Amazon Code of Business Conduct and Ethics (Code of Conduct) Policy Principle 6: Personnel Security Requirement 13 117 Requirement Details All staff members are provided with appropriate training on informat ion governance requirements To maintain information handling standards in the organisation staff should be provided with appropriate training on information governance AWS customers and partners providing services to HSCIC should meet and maintain this staff training requirement using their designated IG responsible staff under the Shared Responsibility Model All personnel supporting AWS systems and devices must sign a non disclosure agreement prior to being granted access Additionally upon hire personnel are required to read and accept the Acceptable Use Policy and the Amazon Code of Business Conduct and Ethics (Code of Conduct) Policy AWS maintains employee training programs to promote awareness of AWS information security requirements Principle 6: Personnel Security ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 10 of 24 Confidentiality and Data Protection Requirement Requirement Description Customer responsibility and AWS approach Cloud Security Principle mapping Requirement 13 202 Requirement Details Confidential personal information is only shared and used in a lawful manner and objections to the disclosure or use of this information are appropriately respected The Data Protection Act 1998 provides conditions that must be met when processing personal information In addit ion where personal information is held in confidence (eg details of care and treatment) the common law requires the consent of the individual concerned or some other legal basis before it is used and shared Staff must be made aware of the right of an individual to restr ict how confidential personal information is disclosed and the processes that they need to follow to ensure this right is respected AWS does not access any customer’s content except as necessary to provide that customer with the AWS services it has selected AWS does not access customers’ content for any other purposes AWS does not know what content customers choose to store on AWS and cannot distinguish between personal data and other content so AWS treats all customer content the same (Source: EU Data Protection Whitepaper ) The Standard Contractual Clauses (also known as "model clauses") are a set of standard provisions defined a nd approved by the European Commission that can be used to enable personal data to be transferred in a compliant way by a data controller to a data processor outside the European Economic Area The Article 29 Working Party has approved the AWS Data Process ing Agreement which includes the Model Clauses The Art icle 29 Working Party has found that the AWS Data Processing Agreement meets the requirements of the Directive with respect to Model Clauses This means that the AWS Data Processing Agreement is not co nsidered “ad hoc” In addition to this alignment with ISO 27018 demonstrates to customers that AWS has a system of controls in place that specifically address the privacy protection of the ir content AWS' alignment with and independent third party assessment of this internationally recognized code of practice demonstrates AWS' commitment to the privacy and protection 
of customers' content Further information can be found at: https://awsamazoncom/compliance/eu data protection/ https://awsamazoncom/compliance/iso 27018 faqs/ https://awsamazoncom/compliance/amazon information requests/ Principle 9: Secure consumer management ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 11 of 24 Requirement Requirement Description Customer responsibility and AWS approach Cloud Security Principle mapping Requirement 13 206 Requirement Details Staff access to confidential personal information is monitored and audited Where care records are held electronically audit trail details about access to a record can be made available t o the individual concerned on request Organisations should ensure that access to confidential personal information is monitored and audited locally and in particular ensure that there are agreed procedures for investigating confidentiality events Organi sations should ensure that access to confidential personal information is monitored and audited locally and in particular ensure that there are agreed procedures for investigating confidentiality events AWS customers and partners looking to access and protect confidential personal information have a great deal of flexibility in how they meet the data protection requirements AWS CloudTrail is a service that provides audit records for AWS customers and delivers audit information in the form of log files to a specified storage bucket The recorded information includes the identity of the API caller the time of the API call the source IP address of the API caller the request parameters and the response elements returned by the AWS service CloudTrail pro vides a history of AWS API calls for customer accounts including API calls made via the AWS Management Console AWS SDKs command line tools and higher level AWS services (such as AWS CloudFormation) The AWS API call history produced by CloudTrail enabl es security analysis resource change tracking and compliance auditing The log file objects written to S3 are granted full control to the bucket owner The bucket owner thus has full control over whether to s hare the logs with anyone else This feature provides confidence and enables AWS customers to meet their needs for investigating service misuse or incidents More details on AWS CloudTrail and further information on audit records can be requested at http://awsamazoncom/cloudtrail A latest version of CloudTrail User Guide is available at : http://docs awsamazoncom/awscloudtrail/latest/ userguide/cloudtrail user guidehtml Principle 13: Audit information provision to consumers ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 12 of 24 Requirement Requirement Description Customer responsibility and AWS approach Cloud Security Principle mapping Requirement 13 209 Requirement Details All person identifiable data processed outside of the UK complies with the Data Protection Act 1998 and Department of Health guidelines Organisations are responsible for the security and confidentiality of personal information they process Processing may include the transfer of that information to countries outside of the UK and where person identifiable information is transferred organisations must comply with both the Data Protection Act 1998 and the De partment of Health guidelines AWS customers and partners providing services to HSCIC should meet and maintain compliance with Data Protection Act 1998 and Department of Health 
guidelines using their designated IG responsible staff under the Shared Respons ibility Model AWS customers and partners are in control of which AWS Region their data is stored For compliance guidance on Data Protection Act and the EU Directive we recommend our EU Data Protection Whitepaper that describes the various considerations and obligations against the data protection principles Principle 9: Secure consumer management Requirement 13 210 Requirement Details All new processes services information systems and other relevant information assets are developed and implemented in a secure and structured manner and comply with IG security accreditation information quality and confidentiality and data protection requirem ents Organisations should ensure that when new processes services systems and other information assets are introduced that the implementation does not result in an adverse impact on information quality or a breach of information security confidentialit y or data protection requirements For best effect requirements to ensure information security confidentiality and data protection and information quality should be identified and agreed prior to the design development and/or implementation of a new pro cess or system AWS is responsible for protecting the global infrastructure that runs all of the services offered in the AWS cloud Protecting this infrastructure is AWS’s number one priority AWS Security regularly scans all Internet facing service endpoint IP addresses for vulnerabilities (these scans do not include customer instances) Customers can request permission to conduct scans of their cloud infrastructure as long as they are limited to the customer’s instances and do not violate the AWS Acceptable Use Policy Adva nce approval for these types of scans can be initiated by submitting a request via the AWS Vulnerability / Penetration Testing Request Form AWS’ development process follows secure software development best practices which include formal design reviews b y the AWS Security Team threat modeling and completion of a risk assessment Static code analysis tools are run as a part of the standard build process and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts Our security risk assessment reviews begin during the design phase and the engagement lasts through launch to ongoing operations Principle 9: Secure consumer management ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 13 of 24 Requirement Requirement Description Customer responsibility and AWS approach Cloud Security Principle mapping Requirement 13 211 Requirement Details All transfers of personal and sensitive information are conducted in a secure and confidential manner There is a need to ensure that all transfers of personal and sensi tive information (correspondence faxes email telephone messages transfer of patient records and other communications containing personal or sensitive information) are conducted in a secure and confidential manner This is to ensure that information is not disclosed inappropriately either by accident or design whilst it is being transferred o r communicated to within or outside of the organisation AWS customers and partners looking to access and protect confidential personal information have a great d eal of flexibility in how they meet the data protection requirements Customers have a number of options to encrypt their content when using the services including using 
AWS encryption features managing their own encryption keys or using a third party encryption mechanism of their own choice AWS does not access or use customer content for any purpose other than as legally required and to provide the AWS services selected by each customer to that customer and its end users AWS never uses customer cont ent or derives information from it for other purposes such as marketing or advertising AWS offers a comprehensive set of data protection and confidentiality features and services using key management and encryption easy to manage and simpler to audit including the AWS Key Management Service (AWS KMS) More details on AWS KMS and further information can be requested at http://awsamazoncom/kms A latest version of KMS Developer Guide is available at http://docsawsamazoncom/kms/latest/developerguide/o verviewhtml Principle 13: Audit information provision to consumers ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 14 of 24 Information Security Requirement Requirement Description Customer responsibility and AWS approach Cloud Security Principle mapping Requirement 13 305 Requirement Details Operating and application information systems (under the organisation’s control) support appropriate acce ss control functionality and documented and managed access rights are in place for all users of these systems Organisations should control access to Information Assets and systems by ensuring that system functionality is configured to support user access controls and by further ensuring that formal procedures are in place to control the allocation of access rights to local information systems and services These procedures should cover all stages in the life cycle of user access from the init ial registration of new users to the final de registration of users who no longer require access to information systems and services Special attention should be given to managing access rights which allow support staff to override system controls AWS cus tomers and partners providing services to HSCIC should support appropriate access control functionality using their designated IG responsible staff under the Shared Responsibility Model AWS Identity and Access Management (IAM) provides customers with con trols and features to provide confidence that authenticated and authorised users have access to specified services and interfaces AWS IAM allows the creation of multiple users and the ability to manage the permissions for each of these users within your A WS Account A user is an identity (within an AWS Account) with unique security credentials that can be used to access AWS AWS IAM eliminates the need to share passwords or keys and makes it easy to enable or disable a user’s access as appropr iate AWS IAM enables implementation of security best practices such as least privileged by granting unique credentials to every user within an AWS Account and only granting permission to access the AWS services and resources required for the users to pe rform their jobs AWS IAM is secure by default; new users have no access to AWS until permissions are explicitly granted Principle 91: Authentication of consumers to management interfaces and within support channels ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 15 of 24 Requirement Requirement Description Customer responsibility and AWS approach Cloud Security Principle mapping Requirement 13 313 Requirement Details Policy and procedures are in 
place to ensure that Information Communication Technology (ICT) networks operate securely The objective of this requirement is to ensure there is appropriate protection for systems hosted and information communicated over local networks and for the protection of the supporting infrastructure components (including wireless networks) AWS custom ers and partners providing services to HSCIC should implement policies and procedures to operate the ICT networks securely using their designated IG responsible staff under the Shared Responsibility Model AWS uses various technologies to enable data in transit protection between the consumer and a service within each service and between the services Cloud infrastructure and applications often communicate over public links such as the Internet so it is important to protect data in transit when you run applications in the cloud This involves protecting network traffic between clients and servers and network traffic between servers The AWS network provides protection against network attacks Procedures and mechanisms are in place to appropriately rest rict unauthorized internal and external access to data and access to customer data is appropriately segregated from other customers Principle 1: Data in Transit Protection ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 16 of 24 Requirement Requirement Description Customer responsibility and AWS approach Cloud Security Principle mapping Requirement 13 314 Requirement Details Policy and procedures ensure that mobile computing and teleworking are secure Mobile computing and teleworking pose a substantial risk For example devices may be lost d amaged or stolen potentially resulting in the loss or inappropriate disclosure of data The information security protection measures required should be commensurate with the risks presented by these working arrangements Helping to protect the confidenti ality integrity and availability of our customers’ systems and data is of the utmost importance to AWS as is maintaining customer trust and confidence AWS uses techniques described in industry accepted standards to ensure that data is erased when res ources are moved or re provisioned when they leave the service or when you request it to be erased When a storage device has reached the end of its useful life AWS procedures include a decommissioning process that is designed to prevent customer data f rom being exposed to unauthorized individuals AWS uses the techniques detailed in DoD 522022 M (“National Industrial Security Program Operating Manual “) or NIST 800 88 (“Guidelines for Media Sanitization”) to destroy data as part of the decommissioning process All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry standard practices Principle 2: Asset Protection and Resilience ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 17 of 24 Requirement Requirement Description Customer responsibility and AWS approach Cloud Security Principle mapping Requirement 13 316 Requirement Details There is an information asset register that includes all key information software hardware and services The objective is to account for information assets containing patient/service user information to ensure that in the event of damage destruction or loss there is awareness of what information is affected and in the case of loss whether the information held on the asset is protected from 
unautho rised access AWS applies a systematic approach to managing change so that changes to customer impacting services are reviewed tested approved and well communicated Change management (CM) processes are based on Amazon change management guidelines and ta ilored to the specifics of each AWS service These processes are documented and communicated to the necessary personnel by service team management The goal of AWS’ change management process is to prevent unintended service disruptions and maintain the in tegrity of service to the customer Change details are documented in Amazon’s CM workflow tool or another change management or deployment tool Principle 5: Operational Security Requirement 13 317 Requirement Details Unauthorised access to the premises equipment records and other assets is prevented It is important to ensure that the organisation’s assets premises equipment records and other assets including staff are protected by physical security measures AWS customers and partners providing ser vices to HSCIC should implement controls to prevent unauthorized access to premises equipment records and other assets using their designated IG responsible staff under the Shared Responsibility Model Amazon has significant experience in securing designing constructing and operating large scale data centers This experience has been applied to the AWS platform and infrastructure AWS provides data center physical access to approved employees and contractors who have a legitimate business need for such privileges All individuals are required to present identification and are signed in Visitors are escorted by authorised staff When an employee or contractor no longer requires these privileges his or her access is promptly revoked even if he or she continues to be an employee of Amazon or AWS In addition access is automatically revoked when an employee’s record is terminated in Amazon’s HR system Principle 2: Asset Protection and Resilience ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 18 of 24 Requirement Requirement Description Customer responsibility and AWS approach Cloud Security Principle mapping Requirement 13 319 Requirement Details There are documented plans and procedures to support business continuity in the event of power failures system failures natural disasters and other disruptions In the event of a security failure or a disaster natural accidental or deliberate vital business processes still need to be carried out Having documented business continuity plans and procedures assists this process enabling all staff to know what they need to do in the event of a security failure or disaster The AWS Resiliency program encompasses the processes and procedu res by which AWS identifies responds to and recovers from a major event or incident within our environment This program aims to provide you sufficient confidence that your business needs for availability commitment of the service including the ability to recover from outages are met This program builds upon the tradi tional approach of addressing contingency management which incorporates elements of business continuity and disaster recovery plans and expands this to consider critical elements of proactive risk mitigation strategies such as engineering physically separate Availability Zones (AZs) and continuous infrastructure capacity planning AWS contingency plans and incident response playbooks are maintained and updated to reflect emerging continuity risks and lessons learned from 
past incidents Plans are tested and updated through the due course of business (at least monthly) and the AWS Resiliency plan is reviewed and approved by senior leadership annually Principle 2: Asset Protection and Resilienc e ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 19 of 24 Requirement Requirement Description Customer responsibility and AWS approach Cloud Security Principle mapping Requirement 13 320 Requirement Details There are documented incident management and reporting procedures Information incidents include a loss/breach of staff/patient/service user personal data a breach of confidentiality or other effect on the confidentiality information security or quality of staff/patient/service user information All incidents and near misses should be reported recorded and appropriately managed so that where incidents do occur the damage from them is minimised and lessons are learnt from them An Information Governance Serious Incident Requiring Investigation (IG SIRI) deemed reporta ble to national bodies eg the Information Commissioner should be recorded and communicated via the IG Toolkit Incident Reporting Tool AWS customers and partners providing services to HSCIC should implement documented incident management and reporting procedures using their designated IG responsible staff under the Shared Responsibility Model AWS has implemented a formal documented incident response policy and program The policy addresses purpose scope roles responsibilities and management commit ment AWS utilizes a three phased approach to manage incidents: 1 Activation and Notification Phase 2 Recovery Phase 3 Reconstitution Phase In addition to the internal communication mechanisms detailed above AWS has also implemented various methods o f external communication to support its customer base and community Mechanisms are in place to allow the customer support team to be notified of operational issues that impact the customer experience A "Service Health Dashboard" is available and maintain ed by the customer support t eam to alert customers to any issues that may be of broad impact Principle 5: Operational Security ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 20 of 24 Requirement Requirement Description Customer responsibility and AWS approach Cloud Security Principle mapping Requirement 13 323 Requirement Details All information assets that hold or are personal data are protected by appropriate organisational and technical measures Organisations must ensure that all of their information assets that hold or are personal dat a are protected by technical and organisational measures appropriate to the nature of the asset and the sensitivity of the data AWS customers and partners providing services to HSCIC should implement appropriate organizational and technical measures to pr otect information assets that hold or are personal data using their designated IG responsible staff under the Shared Responsibility Model AWS does not access any customer’s content except as necessary to provide that customer with the AWS services it has selected AWS does not access customers’ content for any other purposes AWS does not know what content customers choose to store on AWS and cannot distinguish between personal data and other content so AWS treats all customer content the same Alignment with ISO 27018 demonstrates to customers that AWS has a system of controls in place that specifically address the privacy 
protection of their content AWS' alignment with and independent third party assessment of this internationally recognized c ode of practice demonstrates AWS' commitment to the privacy and protection of customers' content Further information can be found at: https://awsamazoncom/compliance/eu data protecti on/ https://awsamazoncom/compliance/iso 27018 faqs/ Principle 5: Operational Security Principle 9: Secure consumer management ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 21 of 24 Healthcare Reference Architecture In order to help customers meet the objectives of the HSCIC IG SoC requirements AWS has provided a sample architecture diagram (Figure 2 Sample Reference Architecture) along with recommended AWS Security controls for various healthcare workloads The sample architecture diagram has been provided for illustrative purposes only and will be referenced throughout this section of the document Figure 2 – Sample Reference Architecture Architecture Overview The sample reference architecture diagram shows two threetier web applications each isolated within their own AWS Virtual Private Cloud (VPC) This architecture also includes a Management VPC where management and monitoring services will be hosted This may include services such as bastion hosts for administration configuration management tools or patching and SIEM services Each VPC hosts only private subnets and no access is available from the public Internet There is an AWS Direct Connect in place connecting the customer site to the AWS VPC’s of their choosing via a dedicated line This ensures that all application traffic is sent over a private network SSL/TLS is recommended to encrypt data in transit when accessing these applications Optionally you could also host a client side VPN service within the Management VPC for access to administrative systems ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 22 of 24 Each Application VPC is isolated from others This allows you to run multiple versions of an application at different deployment stages whilst maintaining complete network isolation For example you could host a Development environment in one VPC and production in another VPC Peering connections are in place between the management VPC and the application VPC’s with routes and rules in place to ensure only management traffic is allowed Audit Logs Amazon Machine Images Snapshots and static assets can be stored in Amazon S3 buckets for highly durable object storage We access these buckets using VPC Endpoints for S3 which allow you to communicate with those S3 buckets both over a private connection and on ly from the VPC’s that you specify AWS Security Implementation Identity and Access Management AWS Identity and Access Management (IAM) is a web service that allows you to centrally manage users security credentials such as access keys and permissions that control which AWS resources users can access IAM provides users with granular permissions to allow different people to have access to different AWS resources Multifactor authentication (MFA) is recommended and can be added to your account and to individual users for additional security You can also leverage identity federation if required to enable users who already have passwords elsewhere for example in your corporate network to gain temporary access to your AWS account Customers should use IAM Roles for Amazon EC2 when accessing other AWS services such as S3 from 
applications running on Amazon EC2 IAM Roles for EC2 allow you to assign permissions to an EC2 instance instead of a specific user This role is assigned to an EC2 instance and applications running on that instance that leverage AWS SDK’s can securely access other AWS resour ces such as S3 buckets without have to share API keys Protecting Data at Rest AWS Key Management Service (KMS) provides a simple web services interface that can be used to generate and manage cryptographic keys and operate as a cryptographic service provider for protecting data AWS KMS offers traditional key management services integrated with other AWS services providing a consistent view of customers’ keys across AWS with centralized management and auditing Master keys in AWS KMS can be used to encrypt/decrypt data encryption keys used to encrypt data in customer applications or in AWS services that are integrated with AWS KMS For more information on KMS visit: https://awsamazoncom/kms/ AWS services such as Amazon S3 AWS Elastic Block Store (EBS) and Amazon Relational Database Service (RDS) shown in Fig 2 above allow customers to encrypt data using keys that customers manage through AWS KMS Protecting Data in Transit Network traffic must encrypt data in transit For traffic between external sources and Amazon EC2 customers should use industrystandard transport encryption mechanisms such as TLS or IPsec ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 23 of 24 virtual private networks (VPNs) Internal to an Amazon Virtual Private Cloud (VPC) for data travelling between EC2 instances network traffic must also be encrypted; most applications support TLS or other protocols providing in transit encryption that can be configured For applications and protocols that do not support encryption sessions transmitting patient data can be sent through encrypted tunnels using IPsec or similar Amazon Virtual Private Cloud (VPC) Amazon Virtual Private Cloud offers a set of network security features well aligned to architecting for IG SoC compliance Features such as stateless network access control lists and dynamic reassignment of instances into stateful security groups afford flexibility in protecting the instances from unauthorized network access Amazon VPC also allows customers to extend their own network address space into AWS Customers are also able to connect their data centers to AWS via a Virtual Private Network (VPN) or using Amazon Direct Connect to provide a dedicated connection as shown in Fig 2 earlier VPC Flow logs provide an audit trail of accepted and rejected connections to instances processing transmitting or storing patient information For more information on VPC see https://awsamazoncom/vpc/ Elastic Load Balancing To ensure that data is encrypted in transit end toend customers can implement any of two different architectures when using Amazon Elastic Load Balancing (ELB) Customers can terminate HTTPS or SSL/TLS on ELB by creating a load balancer that uses an encrypted protocol for connections This f eature enables traffic encryption between the customer’s local balancer and the clients that initiate HTTPS or SSL/TLS sessions and for connections between the load balancer and the customer backend instances For information see: http://docsawsamazoncom/ElasticLoadBalancing/latest/DeveloperGuide/elbhttpsload balancershtml Alternatively customers can configure Amazon ELB in basic TCPmode and passthrough encrypted sessions to back end instances where the encrypted session 
is terminated In this architecture customers manage their own certificates and TLS negotiation policies in applications running in their own instances For information see: http://docsawsamazoncom/ElasticLoadBalancing/latest/DeveloperGuide/elblistenerconfightml Conclusion The AWS cloud platform provides a number of important benefits to UK public sector organisations and enables you to meet the objectives of the HSCIC IG SoC requirements While AWS delivers these benefits and advantages through our services and features under the aforementioned ‘se curity IN the cloud’ shared responsibility model the individual organisations connecting to HSCIC are ultimately responsible for controls and assurance for the IG SoC requirements Using the information presented in this whitepaper we encourage you to use AWS services for your organisations to manage security and the related risks appropriately For AWS security is always our top priority We deliver services to hundreds of thousands of businesses including enterprises educational institutions and government agencies in over 190 countries Our customers include government agencies financial services and healthcare providers ArchivedAmazon Web Services – Using AWS in the context of UK Healthcare IG SoC process May 2016 Page 24 of 24 who leverage the benefits of AWS while retaining control and responsibility for their data including some of their most sensitive information AWS services are designed to give customers flexibility over how they configure and deploy their solutions as well as control over their content including where it is stored how it is stored and who has access to it and the security configuration environment AWS customers can build their own secure applications and store content securely on AWS Additional Resources To help customers further understand how they can address their privacy and data protection requirements customers are encouraged to read the risk compliance and security whitepapers best practices checklists and guidance published on the AWS website This material can be found at: AWS Compliance: http://awsamazoncom/compliance AWS Security Center: http://awsamazoncom/security AWS also offers training to help customers learn how to design develop and operate available efficient and secure applications on the AWS cloud and gain proficiency with AWS services and solutions We offer free instructional videos selfpaced labs and instructorled classes Further information on AWS training is available at http://awsamazoncom/training/ AWS certifications certify the technical skills and knowledge associated with best practices for building secure and reliable cloudbased applications using AWS technology Further information on AWS certifications is available at http://awsamazoncom/certification/ If further information is required please contact AWS: https://awsamazoncom/contactus/ or contact the local AWS account representative Document Revisions None
Using Windows Active Directory Federation Services (AD FS) for Single Sign-On to EC2
Step by Step: Single Signon to Amazon EC2Based NET Applications from an On Premises Windows Domain Dave Martinez April 2010 July 2021: This historical document is provided for reference purposes only Certain links to related information might no longer be available Jointly sponsored by Amazon Web Services LLC and Microsoft CorporationContents About the author ii Introduction 1 Important values worksheet 4 Scenario 1: Corporate application accessed internally 5 Configuration 7 Scenario 2: Corporate application accessed from anywhere 31 Configuration 33 Test 48 Scenario 3: Service provider application 49 Configuration 50 Scenario 4: Service provider application with added security 71 Configuration 72 Scenario 5: corporate application accessed internally (AD FS 20) 80 Configuration 82 Test 95 Appendix A: Sample federated application files 95 **DEFAULTASPX** 96 **WEBCONFIG** 99 **DEFAULTASPXCS** 101 Appendix B: Certificate verification troubleshooting 108 About the author Dave Martinez (dave @davemartineznet ) is Principal of Martinez & Associates a technology consultancy based in Redmond Washington Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 1 Introduction This document provides step bystep instructions for creating a test lab demonstrating identity federation between an on premise s Windows Server Active Directory domain and an ASPNET web application hosted on Amazon’s Elastic Compute Cloud (EC2) service using Microsoft’s Active Directory Federation Services (AD FS) technology A companion document describing the rationale for using AD FS and EC2 together is required pre reading and is available here The docu ment is organized in a series of scenarios with each building on the ones before it It is strongly recommended that the reader follow the document’s instructions in the order they are presented The scenarios covered are: • Corporate application accessed internally — Domain joined Windows client (for example in the corporate office) accessing an Amazon EC2 hosted application operated by same company using AD FS v11 • Corporate application accessed from anywhere — External not domain joined client ( for example at the coffee shop) accessing the same EC2 hosted application using AD FS v11 with an AD FS proxy In addition to external (forms based) authentication the proxy also provides added security for the corporate federation server • Service provider a pplication — Domain joined and external Windows clients accessing an EC2 hosted application operated by a service provider using one AD FS v11 federation server for each organization (with the service provider’s federation server hosted in EC2) and a fe derated trust between the parties • Service provider application with added security — Same clients accessing same vendor owned EC2 hosted application but with an AD FS proxy deployed by the software vendor for security purposes • Corporate application acces sed internally (AD FS 20) — Domain joined Windows client accessing EC2 based application owned by same organization (same as Scenario 1) but using the AD FS 20 as the federation server and the recently released Windows Identity Foundation (WIF) NET lib raries on the web server Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 2 Some notes regarding this lab: • To reduce the overall computing requirements for this lab AD FS federation servers are deployed on the same machines as Active Directory Domain Services (AD DS) domain 
controllers and Active Direct ory Certificate Services (AD CS) certificate authorities This configuration presents security risks In a production environment it is advisable to deploy federation servers domain controllers and certificate authorities on separate machines • This lab i ncludes a fully functional Public Key Infrastructure (PKI) deployment using Active Directory Certificate Services PKI is a critical foundational element to a production ready federation deployment Note that: o This lab uses a single tier certificate hiera rchy Note that a two tier certificate hierarchy with an offline certificate authority (CA) responsible for the organization root certificate would be more secure but is outside the scope of this lab o Also this lab uses CA issued certificates (chained to an internal root CA certificate) for SSL server authentication This requires distribution of the root CA certificate to all clients that access those web servers to avoid SSL related errors In a production deployment it is preferable to use certificat es that chain to a third party root certificate (from Verisign RSA and so on ) that is already present in Windows operating systems since this alleviates the need to distribute root CA certificates • This lab also includes a fullyfunctional Domain Name Services (DNS) deployment using Microsoft DNS Server DNS is also a critical foundational element to a production ready federation deployment Note that: o This lab uses fictional DNS domains which internet name servers resolve to the microsoftcom website breaking the lab functionality Thus the lab simulates resolution of external DNS names by using DNS forwarding from domain DNS instances to a hypothetical “ internet DNS” server that you run on one of the EC2 hosted web serv ers While useful in the context of this lab DNS forwarding is not a requirement of a functional federation deployment • To varying degrees every scenario covered in this lab requires inbound internet connectivity to the corporate federation servers whic h will reside inside your organization’s firewall Before proceeding make sure you have access to an external/internet IP address with open ports 80 and 443 for Scenario 1 and port 443 only for Scenarios 2 through 5 Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premise s Windows Domain 3 • This lab will require a total of thr ee local computers In this lab Hyper V virtualization technology in Windows Server 2008 was used to keep physical machine requirements down • To simplify the recording of important values you must type during configuration please use the Important values worksheet on the next page Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 4 Important values worksheet Machine 0: Amazon EC2 Lab Management PC Name Value 1 External IP address Machine 1: Adatum Internal Server Name Value 2 Adatum Administrator password 3 Internal static IP address 4 Alan Shen’s password 5 External IP address Machine 2: Domain joined Client Name Value 6 Internal IP address 7 External IP address Machine 3: Adatum Web Server Name Value 8 Elastic (public) IP address 9 Administrator password Machine 4: Adatum FS Proxy Name Value 10 Elastic (public) IP address Machine 6: Trey Research Federation Server Name Value 11 Elastic (public) IP address Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 5 Name Value 12 Administrator password Machine 7: Trey Research Web Server Name Value 13 
Elastic (public) IP address Machine 8: Adatum Federation Server (AD FS 20) Name Value 14 External IP address Scenario 1: Corporate application accessed internally Alan Shen an employee for Adatum Corporation will use the Active Directory domain joined computer in his office to access an ASPNET web application hosted on Windows Server 2008 in Amazon EC2 Using AD FS provides Adatum users access to the application without any additional login requests and without requi ring that the web server be domain joined using Amazon’s Virtual Private Cloud (VPC) service This scenario requires three computers: 1 Adatum Internal Server This local machine will perform multiple server roles including that of a domain controller a root certificate authority and an AD FS federation server that creates security tokens with which users access the federation application Specifically this machine will run: a Active Directory Domain Services (domain controller) b Domain Name Services (Active Directory integrated DNS server) c Active Directory Certificate Services (root CA) d Internet Information Services (web server) e Microsoft ASPNET 20 Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 6 f Microsoft NET Framework 20 g Active Directory Federation Services (Adatum identity provider) The AD FS v1 federation server is available in Windows Server 2003 R2 Windows Server 2008 and Windows Server 2008 R2 (Enterprise Editions or above) This lab used a trial Windows Server 2008 R2 Enterprise Edition Hyper V image which is available for dow nload here Note : To run Hyper V images you will need to have a base install of Windows Server 2008 (64 bit edition) or Windows Server 2008 R2 running Hyper V For more information on obtaining and installing the latest version of Hyper V please visit the Hyper V Homepage 2 Domain joined Client This local domain joined Windows client will be the machine Alan Shen uses to access the federated application The only client requirement is Internet Explorer (version 5 and above) or another web browser with Jscript and cookies enable d This lab used Internet Explorer 8 in a trial Windows 7 Enterprise ISO file available here 3 Adatum Web Server This machine based in Amazon EC2 will host the AD FS web agent and the Adatum sample federated web application In addition it will act as our general purpose “Internet DNS” server Specifically this machine will run: a Internet Information Services (web server) b Microsoft ASPNET 20 c Microsoft NET Framework 20 d AD FS claims aware web agent (as opposed to the agent for NT token applications which is not used in this guide) e Sample application (you will create the application files by copying content from this guide) f Domain Name Services (DNS server serving intern et DNS zones) Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 7 The AD FS v1 web agent is available in Windows Server 2003 R2 Windows Server 2008 and Windows Server 2008 R2 (Standard Editions or above) Amazon EC2 currently offers Windows Server 2003 R2 and Windows Server 2008 (Datacenter Edition) as guest operating systems This lab used Windows Server 2008 Configuration Machine 1: Adatum internal server The following configuration steps are targeted to Windows Server 2008 R2 If using a different version of Windows Server use these step s as a guideline only Initial install /configuration 1 Install Windows Server 2008 R2 onto your server computer or virtual machine 2 Log in to Windows Server 
with the local machine Administrator account and password This password automatically becomes the Ad atum domain administrator password once Active Directory is installed 3 Record the Adatum administrator password on Line 2 of the Important values worksheet 4 In the Initial Configuration Tasks window choose Provide computer n ame and domain 5 Choose Change 6 In the computer name field enter fs1 7 Choose OK twice 8 Choose Close 9 Choose Restart Now 10 Log back in to the machine with the Adatum administrator account and password Configure networking This computer has the following networking requirements: • Inbound internet connectivity (ports 80 and 443) through a static external IP address Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Doma in 8 • A static internal IP address to ensure that clients can properly access the domain DNS server • A subnet mask that will allow the other local computers in this lab to see the domain controller • A default gateway address in the IP address range of the subnet mask to enable DNS forwarding Contact your network administrator to request a static IP address subnet mask default gateway and to open ports 80 and 443 on the external IP address of the default gateway 11 In the Initial Configuration Tasks window choose Configure networking 12 Rightclick on the Local Area Connection and choose Properties 13 Double click on the Internet Protocol Version 4 list item to open TCP/IPv4 Properties 14 On the General tab choose the radio button Use the following IP address 15 In the IP address Subnet mask and Default Gateway fields enter the static IPv4 address subnet mask and default gateway address provided by your network administrator 16 In the Preferred DNS server field enter 127001 (which points the local DNS client to the local DNS server) 17 Choose OK twice Record your Adatum Internal Server static IP address on Line 3 of the Important values worksheet Install /configure Active Directory Domain Services (AD DS) 1 Close the Initial Configuration Tasks window; this will automatically open Server Manager 2 In Server Manager right click on Roles and select Add Roles to start the Add Roles Wizard 3 On the Select Server Roles page check the box next to Active Directory Domain Services 4 Choose the Add Required Features button to allow Server Manager to add NET Framework 351 to the installation pr ocess Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 9 5 Choose Next twice 6 Choose Install 7 On the Installation Results page choose the link for the Active Directory Domain Services Installation Wizard ( dcpromoexe ) 8 On the Choose a Deployment Configuration page select Create a new domain in a new forest 9 On the Name the Forest Root Domain page enter corpadatumcom 10 On the Set Forest Functional Level and Set Domain Functional Level pages leave the default setting of Windows Server 2003 11 On the Additional Domain Controller Options page leave DNS Server checked 12 When prompted about not finding an authoritative DNS zone c hoose Yes to continue 13 Complete the wizard keeping all other default values 14 When prompted restart computer 15 Once you are logged back into the computer choose Start > Administrative Tools > Active Directory Users and Computers 16 Under corpadatumcom right click on Users and select New > Group 17 In the Group Name field enter Managers 18 Choose OK 19 Right click Users again and choose New > User 20 In the First name field enter Alan 21 In the 
Last name field enter Shen 22 In the User logon name field enter alansh 23 Choose Next 24 Provide a password 25 Choose Next 26 Choose Finish 27 Record Alan Shen’s password on Line 4 of the Important values worksheet Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 10 28 Choose Users then right click on Alan Shen and choose Properties 29 On the General tab in the Email field enter alansh@adatumcom 30 On the Member of tab c hoose Add 31 In the Select Groups box enter Managers 32 Choose Check Names 33 Once verified choose OK twice Identify external IP address • Identify your external IP address You can ask your network administrator or visit http://wwwwhatismyipcom/ • Record your Adatum Internal Server ex ternal IP address on Line 5 of the Important values worksheet Install /configure Active Directory Certificate Services (AD CS) 1 In Server Manager right click on Roles and choose Add Roles to start the Add Roles Wizard 2 On the Select Server Roles page check the box next to Active Directory Certificate Services 3 On the Select Role Services page select Certification Authority and Certification Authority Web Enrollment 4 Choose the Add Required Features button to allow Ser ver Manager to add IIS to the installation process 5 On the Specify Setup Type page select Enterprise 6 On the Specify CA Type page select Root CA 7 On the Setup Private Key page select Create a new private key and accept the default cryptography setting s 8 On the Configure CA Name page in the Common Name for this CA field enter Adatum Certificate Server 9 Complete the wizard keeping all other default values 10 Choose Start > Run 11 In the Run box enter mmc and choose OK to start the Microsoft Management Console Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 11 12 In the File menu choose Add/Remove Snap in 13 Highlight the Certificates snap in and choose the Add button 14 Choose computer account and local computer in the pages that follow 15 Highlight the Certificate Templat es snap in and choose Add 16 Highlight the Certification Authority snap in and choose Add 17 Choose local computer in the page that follows and choose OK 18 Choose File > Save and save the new MMC console ( Console 1 ) to the machine desktop for future use 19 In Console 1 expand Certification Authority 20 Right click on Adatum Certificate Server and choose Properties 21 On the Extensions tab for the CRL Distribution Point (CDP) extension highlight the http:// certificate revocation list location in the list 22 Below the list click the Include in CRLs and Include in the CDP extension of issued certificates options and choose OK 23 Choose Yes to restart AD CS 24 In Console 1 expand Adatum Certificate Server 25 Right click on the Revoked Certificates folder and choose All Tasks > Publish 26 Choose OK to publish a new CRL with the enhanced CDP extension Enable double escaping for CRL website in IIS (Windows Server 2008 only) Note: This task pertains only to Windows Server 2008 If you are using Windows Server 2008 R2 this issue is automatically addressed by the AD CS install process By default Active Directory Certificate Services in Windows S erver 2008 and above generates Delta CRL files which update on a more frequent schedule (daily) than standard CRL files (weekly) The default file name used by AD CS for a Delta CRL file includes a plus (“+”) sign and in this lab this file is accessed over the internet By default IIS 7 (Windows Server 2008) and IIS 
75 (Windows Server 2008 R2) reject URIs containing the plus character creating an incompatibility with AD CS Delta CRL files Step by Step: Single Sign on to Amazon EC2 Based NET Appli cations from an On Premises Windows Domain 12 To fix this the default request filter behavior of the website hosting the Delta CRL file must be modified AD CS in Windows Server 2008 R2 does this automatically If using Windows Server 2008 follow the procedure 1 Choose Start > Run 2 In the Run box enter cmd and choose OK to open a command prompt 3 Change the directory to c:\windows\system32\inetsrv At the command prompt enter the following and press Enter : appcmd set config “Default Web Site/CertEnroll” section:systemwebServe r/security/requestFiltering allowDoubleEscaping:true Configure AD CS certificate templates 1 In Console 1 choose Certificate Templates in the left navigation area 2 In the center pane right click on the Web Server certificate template and select Duplicat e Template 3 In the Duplicate Template dialog leave Windows Server 2003 Enterprise as the minimum CA for the new template and click OK 4 In Properties of New Template make the following changes: g On the General tab in the Template display name field enter Extranet Web Server h On the Request Handling tab check the box next to Allow private key to be exported 5 Choose OK to create the new template 6 In the center pane right click on the Web Server certificate template and choose Properties 7 In the Security tab c hoose Add 8 In the object names text box enter Domain Controllers and choose Check Names 9 Once verified c hoose OK 10 Back in the Security tab highlight the Domain Controllers list item 11 In the Allow column check the Read and Enroll permissions and click OK Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 13 12 Click Start > Administrative Tools > Services 13 Right click on Active Directory Certificate Services and choose Restart 14 In Console 1 in the left navigation area right click on Certificate Authority \Adatum Certificate Server \Certificate Templates and choose New > Certificate Template to Issue 15 Highlight Extranet Web Server from the list and c hoose OK Create server authentication certificate 1 In Console 1 right click on Certificates (Local Computer)/ Personal/Certificates and choose All Tasks > Request New Certificate 2 In the Certificate Enrollment Wizard choose Next twice 3 Choose the link under Web Server 4 In Certificate Properties make the following changes: a On the Subject tab in the Subject Name area choose the Type dropdown list and select Common name b In the Value field enter fs1corpadatumcom and choose Add c On the General tab in the Friendly name text box enter adatum fs ssl and choose OK 5 In the Certificate En rollment window check the box next to Web Server 6 Choose the Enroll button 7 Choose Finish 8 In Console 1 check for the new certificate with friendly name “ adatum fs ssl ” in Certificates (Local Computer)/Personal/Certificates Create AD FS token signing certificate While it is possible to use the same certificate for server authentication and token signing security best practice suggests using distinct certificates for each function In this example however you will use the same Web Server cert ificate template to issue the token signing certificate 1 In Console 1 right click on Certificates (Local Computer)/Personal/Certificates and choose All Tasks > Request New Certificate Step by Step: Single Sign on to Amazon EC2 Based NET 
Applications from an On Premises Windows Domain 14 2 In the Certificate Enrollment Wizard choose Next twice 3 Choose the link under Web Server 4 In Certificate Properties make the following changes: a On the Subject tab in the Subject Name area choose the Type dropdown list and select Common name b In the Value field enter Adatum Token Signing Cert1 c Choose Add d On th e General tab in the Friendly name text box enter adatum ts1 e Click OK 5 In the Certificate Enrollment window check the box next to Web Server 6 Choose the Enroll button 7 Choose Finish 8 In Console 1 check for the new certificate with friendly name adatum ts1 in Certificates (Local Computer)/Personal/Certificates Install Active Directory Federation Services (AD FS) 1 In Server Manager right click on Roles and select Add Roles to start the Add Roles W izard 2 On the Select Server Roles page check the box next to Active Directory Federation Services 3 On the Select Role Services page check the box next to Federation Service 4 Click the Add Required Role Services button to allow Server Manager to add IIS features to the installation process 5 Choose Next 6 On the Choose a Server Authentication Certificate page highlight the existing certificate issued to fs1corpadatumcom with the intended purpose Server Authentication 7 Choose Next 8 On the Choose a Token Signing Certificate page highlight the existing certificate issued to Adatum Token Signing Cert1 9 Choose Next Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 15 10 Accept all other defaults and choose Install Initial AD FS configuration 1 Choose Start > Administrative Tools > Active Directory Federation Services 2 Right click on Account Stores under Federation Service/Trust Policy/My Organization and select New > Account Store 3 In the Add Account Store Wizard leave AD DS as the store type and click through to add the local AD domain 4 Right click on My Organization/Organization Claims and choose New > Organization claim 5 In the Claim name field choose PriorityUsers 6 Choose OK 7 Right click on My Organization/Account Stores/Active Directory and choose New > Group Claim Extraction 8 Choose Add 9 Enter Managers into the text box 10 Choose Check Names 11 Once verified choose OK 12 In the Map to this Organization Claim dropdown list select PriorityUsers 13 Choose OK 14 Choose My Organization/Account Stores/Active Directory 15 In the right hand pane right click on the Email organization claim and choose Properties 16 In the Claim Extraction Properties dialog box check the box next to Enabled 17 In the LDAP attribute field type mail 18 Choose OK Ad Adatum internal server URL to in tranet zone in domain group policy This enables domain client browsers to access the federation server at https://fs1corpadatumcom using Integrated Windows Authentication 1 Choose Start > Administrative Tools > Group Policy Management Step by Step: Singl e Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 16 2 Right click on Forest:corpadatumcom/Domains/corpadatumcom/Default Domain Policy and choose Edit 3 Choose User Configuration/Policies/Windows Settings/Internet Explorer Maintenance/Security 4 In the left hand pane right click on Security Zones and Content Ratings and choose Properties 5 In the Security Zones and Privacy section choose the radio button next to Import the current security zones and privacy settings 6 Choose Continue 7 Choose Modify Settings 8 In the Internet Properties window on the Security tab 
highlight the Local Intranet zone and choose the Sites button 9 Choose Advanced 10 In the Add this website to the zone text box enter https://fs1corpadatumcom 11 Choose Add 12 Choose Close 13 Choose OK twice Machine 2: domain joined client Note : The following configuration steps are targeted to Windows 7 If using a different version of Windows use t hese steps as a guideline only Initial install /configuration 1 Install Windows 7 onto your client computer or virtual machine 2 Choose Start > Control Panel > Network and Internet > Network and Sharing Center 3 On the left side of the window choose Change A dapter Settings 4 Right click on Local Area Connection 5 Choose Status Step by Step: Single Sign on to Amazon EC2 Based NET Applications from a n On Premises Windows Domain 17 6 Choose the Details button Note the IPv4 address 7 Record your Domain joined Client internal IP address on Line 6 of the Important values worksheet 8 Choose Close 9 Choose Properties 10 Double click on the Internet Protocol Version 4 list item to open TCP/IPv4 Properties 11 On the General tab click the radio button to Use the following DNS server address 12 In the Preferred DNS server field enter the value from Line 3 of the Important values worksheet 13 Choose OK twice 14 Choose Start 15 Right click on Computer and choose Properties 16 In the Computer name domain and workgroup settings area choose the link to Change Settings 17 In the System Properties window on the Computer Name tab choose the Change button 18 In the Computer Name field enter client 19 In the Member of area choose the radio button for Member of Doma in 20 In the Domain text box enter CORP 21 Choose OK 22 Enter the Adatum domain administrator user name and password from Line 2 of the Important values worksheet 23 Choose OK 24 Follow the prompts to restart the computer 25 Log onto the computer as CORP \Administrator using the password from Line 2 of the Important values worksheet Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 18 Identify external IP address • Identify the client’s external IP address One way is to visit http://wwwwhatismyipcom • Record your Domain joined Client external IP address on Line 7 of the Important values worksheet Check certificate /group policy settings 1 Click Start 2 In the Search programs and files box enter mmc 3 Press Enter to start the Microsoft Management Console 4 In the File menu choose Add/Remove Snap in 5 Highlight the Certificates snap in and choose the Add button 6 Choose computer account and local computer in the pages that follow 7 Choose OK 8 Choose File > Save and save the new MMC console (Console 1) to the machine desktop for future use 9 In Console 1 check in Certificates (Local Computer)/Trusted Root Certificate Authorities/Certificates for the presen ce of the Adatum Certificate Server root certificate It should have been placed here automatically by the domain controller 10 Open Internet Explorer 11 On the Tools menu select Internet Options 12 On the Security tab choose the Local Intranet zone icon 13 Choose the Sites button 14 Choose the Advanced button and ensure that https://fs1corpadatumcom is listed as a website in this zone 15 Choose Start 16 Next to the Shutdown button choose the arrow and then choose Switch User 17 Log in as CORP \alansh using the password from Line 4 of the Important values worksheet Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 19 Machine 3: Adatum Web Server Create/ 
configure your Amazon EC2 account
You can access EC2 virtual machines and the EC2 Console management application on any computer with internet access. In this lab, the external IP address of the computer used to access EC2 is used in firewall settings on EC2 to limit inbound RDP access to just the lab administrator. You can determine this machine's external IP address by visiting a site like http://www.whatismyip.com. Record your EC2 management external IP address on Line 1 of the Important values worksheet.
1. Create an Amazon Web Services (AWS) account by visiting http://aws.amazon.com and choosing the Create an AWS Account button.
2. Visit https://aws.amazon.com/ec2/ and choose Get started with Amazon EC2.
Create a Windows Server instance in EC2
1. In the EC2 Console, choose the Launch Instance button to launch the Request Instances Wizard.
2. Choose the Community AMIs tab and, in the adjacent text box, enter amazon/Windows-Server2008.
3. Find the entry for amazon/Windows-Server2008-i386-Base-<version#> and choose the Select button.
4. On the Instance Details page, leave the defaults selected.
5. On the Advanced Instance Details page, accept the default settings.
6. On the Create Key Pair page, choose Create a new Key Pair.
7. Enter ADFSkey as your key pair name.
8. Choose the Create and download your key pair button.
9. Save the resulting ADFSkey.pem file to your desktop.
10. On the Configure Firewall page, choose Create a New Security Group.
11. Name the new group Adatum Web Server.
12. Choose the Select box and add the following allowed connections:

Application | Transport | Port | Source Network/CIDR*
RDP         | TCP       | 3389 | Lab management external IP/32**
HTTPS       | TCP       | 443  | Domain client external IP/32***
DNS         | UDP       | 53   | All internet****

*Classless Inter-Domain Routing (CIDR) addresses allow you to scope inbound access to an EC2 instance to a specific IP address or subnet range. In this scenario, we can limit inbound access to only the Adatum domain network or just the client computer. The CIDR suffix scopes the allowed incoming connections to your liking; for example, 1.2.3.4/32 allows only the specific IP address 1.2.3.4, while 1.2.3.4/24 allows access from any computer in the 1.2.3 subnet.
**This is the external IP address of the machine being used to access the Amazon EC2 images via Remote Desktop, recorded on Line 1 of the Important values worksheet.
***This is the external IP address of the domain-joined client, recorded on Line 7 of the Important values worksheet.
****This setting is the equivalent of the address 0.0.0.0/0 and allows access from any internet IP address. Because you are mimicking internet DNS, you use this setting.

13. Choose Continue.
14. On the Review page, choose Launch to start the instance.
15. Choose Close.
16. Choose Instances in the left navigation bar to see the status of your instance.
Associate an Elastic IP address
1. In the EC2 Console, choose the Elastic IPs link in the left navigation area.
2. Choose the Allocate New Address button.
3. Choose the Yes, Allocate button.
4. Once allocated, right-click on the address and choose Associate Address.
5. Choose the Adatum Web Server instance ID from the dropdown list and choose Associate.
6. Record the Adatum Web Server Elastic IP address on Line 8 of the Important values worksheet.
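The console steps above can also be scripted. The following is a minimal sketch, not part of the original lab, using the AWS Tools for Windows PowerShell; the AMI ID, CIDR ranges, and file paths are placeholder assumptions that mirror the worksheet values, and parameter shapes can vary between module versions and between EC2-Classic and VPC accounts.

# Minimal sketch of the console steps above. All IDs and CIDR values are placeholders.
Import-Module AWSPowerShell

# Key pair (steps 6-9): keep the private key for later password retrieval
$keyPair = New-EC2KeyPair -KeyName "ADFSkey"
$keyPair.KeyMaterial | Out-File "$env:USERPROFILE\Desktop\ADFSkey.pem" -Encoding ascii

# Security group and inbound rules (steps 10-12)
New-EC2SecurityGroup -GroupName "Adatum Web Server" -Description "Adatum Web Server access"
$rules = @(
    @{ IpProtocol = "tcp"; FromPort = 3389; ToPort = 3389; IpRanges = "198.51.100.10/32" }, # RDP, lab management IP
    @{ IpProtocol = "tcp"; FromPort = 443;  ToPort = 443;  IpRanges = "203.0.113.25/32" },  # HTTPS, domain client IP
    @{ IpProtocol = "udp"; FromPort = 53;   ToPort = 53;   IpRanges = "0.0.0.0/0" }         # DNS, all internet
)
Grant-EC2SecurityGroupIngress -GroupName "Adatum Web Server" -IpPermission $rules

# Launch the instance (placeholder AMI ID standing in for the base image located in step 3)
$reservation = New-EC2Instance -ImageId "ami-12345678" -MinCount 1 -MaxCount 1 `
    -KeyName "ADFSkey" -SecurityGroup "Adatum Web Server"
$instanceId = $reservation.Instances[0].InstanceId

# Allocate and associate an Elastic IP address (EC2-Classic form shown;
# VPC accounts would use New-EC2Address -Domain vpc and Register-EC2Address -AllocationId)
$address = New-EC2Address
Register-EC2Address -InstanceId $instanceId -PublicIp $address.PublicIp

# Equivalent of the Get Windows Password steps in the next section
Get-EC2PasswordData -InstanceId $instanceId -PemFile "$env:USERPROFILE\Desktop\ADFSkey.pem"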
Get Windows administrator password
1. In the EC2 Console, choose Instances in the left navigation area.
2. Once the Status shows as "running" and your Elastic IP address is listed in the Public DNS column, right-click the Adatum Web Server instance and choose Get Windows Password.
3. On your desktop, open the ADFSkey.pem file with Notepad and copy the entire contents of the file (including the Begin and End lines, such as "BEGIN RSA PRIVATE KEY").
4. In the EC2 Console, paste the text into the Retrieve Default Windows Administrator Password window.
5. Click inside the text box once to enable the Decrypt Password button.
6. Choose Decrypt Password.
7. Copy the Computer, User, and Decrypted Password information into a text file and save it to your desktop.
8. Choose Close in the Retrieve Password window.
Access instance using remote desktop connection
Note: The default RDP client in Windows XP does not support server authentication, which is required for access. To download a newer client, visit here.
1. Choose Start > All Programs > Accessories > Communication > Remote Desktop Connection.
2. In the Computer text box, copy/paste or enter the Computer Name from your text file (for example, ec2-123-45-678-910.compute-1.amazonaws.com).
3. Choose Connect.
4. In the login dialog box, enter Administrator for the user name and the Decrypted Password from your text file into the Password field, taking care to get the capitalization correct.
5. Choose OK.
6. In the Set Network Location window, choose Public Location.
7. Choose Close.
Optional
1. Once inside the instance, change the Administrator password by pressing CTRL+ALT+END and clicking the Change a password link.
2. Record the Adatum Web Server Administrator password on Line 9 of the Important values worksheet.
Optional
1. Turn off the Internet Explorer Enhanced Security Configuration for administrators.
2. In Server Manager, on the Server Summary page under Security Information, choose Configure IE ESC.
3. Under Administrators, choose the Off radio button.
4. Choose OK.
Adjust clock settings
Note: Federation depends on the accuracy of timestamps used in signed security tokens.
1. Right-click the Windows Taskbar and select Properties.
2. On the Notification Area tab, check the box to show the Clock.
3. Choose OK.
4. Right-click over the clock in the taskbar and choose Adjust Date/Time.
5. On the Date and Time tab, choose the Change time zone button and adjust to your time zone.
6. Choose OK twice.
Install web server role
1. In Server Manager, right-click on Roles in the left navigation area and select Add Roles to start the Add Roles Wizard.
2. On the Select Server Roles page, check the box next to Web Server (IIS).
3. Choose the Add Required Features button to allow Server Manager to add the Windows Process Activation Service to the install.
4. Choose Next twice.
5. On the Select Role Services page, check the box next to ASP.NET.
6. Choose the Add Required Role Services button.
7. Choose Next.
8. Choose Install.
9. Choose Close to complete the install.
Add record for Adatum Internal Server to hosts file
The web server needs to periodically access the federation server in order to download trust policy information, so the web server needs to resolve the federation server DNS name. Since the EC2-based web server is not a member of the Adatum corporate subnet, it needs to resolve the external IP address of the federation server. Here we handle this by using a hosts file entry. A second perimeter DNS server or a split DNS configuration for the corp.adatum.com
zone could also be used here 1 Open the c:\Windows\system32\drivers\etc directory folder 2 Right click on the hosts file and choose Open 3 Select Notepad as the program and choose OK 4 Add the name and external IP address of the Adatu m Internal Server from Line 5 of the Important values worksheet to the hosts file as shown in the following example: 12345678910 fs1corpadatumcom 5 Save and close the file 6 Create a shortcut to the hosts file on the de sktop for future use Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 24 Install Adatum root CA certificate Note : To successfully communicate with the federation server the web server has to trust the SSL server authentication certificate at fs1corpadatumcom issued by the Adatum CA 1 Open Internet Explorer and go to https://fs1corpadatumcom/certsrv/ 2 In the Certificate Error page c hoose the link to Continue to this website 3 At the login prompt log in as administrator with the password from Line 2 of the Important values worksheet to reach the Active Directory Certificate Services home page 4 At the bottom of the page click the link to Download a CA certificate certificate chain or CRL 5 On the next page click the link to Download CA certificate 6 Save the resulting certnewcer file to the desktop Leave the AD CS web application open for use in upcoming steps 7 Choose Start > Run 8 In the Run box enter mmc and choose OK to start the Microsoft Management Console 9 In the File menu select Add/Remove Snap in 10 Highlight the Certificates snap in and choose the Add button 11 Choose the computer account and local computer in the pages that follow 12 Highlight the Certificates snap in again and choose the Add button 13 Choose My user account in the page that follows and click OK 14 Choose File > Save and save the new MMC console (Console 1) to the machine desktop for future use 15 In Console 1 right click on Certificates (Local Computer)/Trusted Root Certification Authorities/Certificates and choose All Tasks > Import to launch the Certificate Import Wizard 16 On the File to Import page choose Browse 17 Find the certnewcer file on the desktop and choose Open 18 Choose Next twice Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 25 19 Choose Finish 20 Choose OK to complete the import process Save image To save some time later you will use an image of this server in this state as a starting point for a future server instance 1 In the EC2 Console choose Instances in the left navigation area 2 Right click on the instance for the Adatum Web Server and choose Create Image (EBS AMI) 3 In the Image Name field enter webserver and choose Create This Image 4 Choose the View Pending Image link to see the status of your saved image Add AD FS C laims aware application agent 1 In Server Manager right click on Role in the left navigation area and select Add Roles to start the Add Roles Wizard 2 On the Select Server Roles page check the box next to Active Directory Federation Services 3 On the Select Role Services page check the box next to Claims aware Agent 4 Choose Next 5 Choose Install 6 Choose Close to complete the install Create sample application You c an use the sample claims aware application provided in this document to test your federation scenarios The claims aware application is made up of three files: • defaultaspx • webconfig • defaultaspxcs 1 Choose Start > My Computer 2 Create a new folder in c:\inetpub called adfsv1app Save the files to the 
c:\inetpub\adfsv1app directory Step by Step: Single Sign on to Amazon EC2 Base d NET Applications from an On Premises Windows Domain 26 3 The sample application code and assembly steps can be found in Appendix A Create server authentication certificate 1 Back in Internet Explorer choose Home in the upper right corner of the Certificate Services web application 2 Choose the link to Request a certificate 3 Choose the link for advanced certificate request 4 Choose the link to Create and submit a request to th is CA 5 If prompted about the page requiring HTTPS choose OK 6 If prompted to run the Certificate Enrollment Control addon choose Run 7 On the Advanced Certificate Request page in the Certificate Template dropdown choose Extranet Web Server 8 In the Identifying Information section in the Name field enter adfsv1appadatumcom and leave the other fields blank 9 In the Additional Options section in the Friendly Name field enter adatum web ssl and choose Submit 10 Choose Yes to complete the req uest process; the certificate will be issued automatically 11 Choose the link to Install this certificate 12 Choose Yes on the warning dialog 13 In Console 1 choose Certificates (Current User)/Personal/Certificates 14 The certificate for adfsv1appadatumcom appears in the right hand pane Move server authentication certificate to local computer certificate store In Windows Server 2008 the option in the AD CS Web Enrollment pages to automatically save certificates to the Local Computer certificate store was removed AD FS requires that certificates be stored in the Local Computer certificate store This process moves the certificate to the proper location 1 In Console 1 right click on the adfsv1appadatumcom certificate and choose All Tasks > Export to launc h the Certificate Export Wizard 2 On the Export Private Key page choose Yes export the private key Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 27 3 On the Export File Format page leave the default setting 4 Provide a password 5 On the File to Export page choose Browse 6 Choose Desktop 7 In the File n ame field enter adatum web ssl 8 Choose Save > Next > Finish > OK to complete the export process 9 In Console 1 right click on Certificates (Local Computer)/Personal and choose All Tasks > Import to launch the Certificate Import Wizard 10 On the File to Im port page choose Browse and find adatum web sslpfx on the desktop 11 Choose Open 12 Choose Next 13 After entering the password choose Next > Next > Finish> OK to complete the import process Add sample application to IIS 1 Choose Start > Administrative Tools > Internet Information Services (IIS) Manager 2 Right click on the Sites folder in the left navigation area and choose Add Web Site 3 In the Site name field enter ADFSv1 app 4 In the Application Pool field choose Select 5 In the Appl ication pool dropdown list choose Classic NET AppPool 6 Choose OK 7 In the Content Directory section choose the button to the right of the Physical path field browse to c:\inetpub\adfsv1app 8 Choose OK 9 In the Binding section in the Type dropdown choose https 10 In the SSL certificate dropdown choose adatum web ssl 11 Choose OK Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 28 Save image 1 In the EC2 Console choose Instances in the left navigation area 2 Right click on the instance for the Adatum Web Server and choose Create Image (EBS AMI ) 3 In the Image Name field enter webserver2 4 Choose Create This Image 5 
Choose the View Pending Image link to see the status of your saved image Add DNS server role This web server will run a DNS Server that will serve the internet DNS zones 1 In Server Manager right click on Role in the left navigation area and choose Add Roles to start the Add Roles Wizard 2 On the Select Server Roles page check the box next to DNS Server 3 On the warning about static IP addresses choose Install DNS Server anyway (you have an EC2 Elastic IP address but Windows doesn’t know this) 4 Choose Next > Next > Install 5 Choose Close to complete the install 6 Choose Start > Administrative Tools > DNS 7 In the left navigation area right click on the Forward Lookup Zones folder and choose New Zone to start the New Zone Wizard 8 On the Zone Type page leave the default setting of Primary zone 9 On the Zone Name page enter adatumcom in the text box 10 Choose Next 11 Accept the defaults on the Zone File and Dynamic Updates pages 12 Choose Finish Add record for sample application in internet DNS 1 In DNS Manager right click on <Machine name>/Forward Lookup Zones/adatumcom and select New Host (A or AAAA) 2 In the New Host Name field enter adfsv1app Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 29 3 In the IP address field enter the Elastic IP address for the Adatum Web Server from Line 8 of the Important values worksheet 4 Choose Add Host > OK > Done Machine 1: Adatum internal server Add sample application to AD FS 1 Right click on My Organization/Applications and choose New > Application 2 Enter the following in the Add Application Wizard : a On the Application Type page leave Claims aware application as the application type b On the Application Details page in the Application display name field enter ADFSv1 app c In the Application URL field enter https://adfsv1appadatumcom/ d On the Accepted Identity Claims page check the box next to User principal name (UPN) e Choose Next twice f Choose Finish 3 Choose ADFSv1 app under Applications 4 In the right hand window right click on the PriorityUsers and Email claims and choose Enable Add DNS forwarder from Adatum domain DNS to internet DNS 1 Choose Start > Administrative Tools > DNS 2 Choose FS1 in the left navigation area 3 Rightclick on Forwarders in the right hand pane and choose Properties 4 On the Forwarders tab choose Edit 5 Enter the Adatum Web Server Elastic IP address from Line 8 of the Important values worksheet 6 Press Enter Watch for the word “validating” to change to “OK” in the Edit Forwarders window 7 Choose OK twice to complete the forwarder setup Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 30 Configure firewall settings The federation server must have inbound connectivity from the internet (port 443) in order to communicate with the EC2 based web server However the private keys a federation server uses to sign security tokens are sensitive items that s hould be protected as much as possible To reduce the security threat the open ports represent we use firewall rules to scope down the allowable inbound communications Here you do this with the Windows Server 2008 integrated firewall 1 Choose Start > Adm inistrative Tools > Windows Firewall with Advanced Security 2 Choose Inbound Rules in the left navigation area 3 In the right hand pan under Actions choose Filter by Group 4 Choose Filter by Secure World Wide Web Services (HTTPS) 5 In the center pane righ tclick on the World Wide Web Services (HTTPS Traffic In) rule and 
choose Properties 6 In the Properties dialog box choose the Scope tab 7 In the Remote IP address section click the radio button next to These IP addresses 8 Choose Add 9 In the IP Address window in the This IP address or subnet field enter the Elastic IP address of the Adatum Web Server from Line 8 of the Important values worksheet 10 Choose OK 11 Choose Add 12 In the same field enter the internal IP address of the domain joined client from Line 6 of the Important values worksheet 13 Choose OK twice 14 In the right hand pane under Actions choose Filter by Group and select Filter by World Wide Web Services (HTTP) 15 In the center pane right click on the World Wide Web Services (HTTP Traffic In) rule and choose Properties 16 In the Properties dialog box choose the Scope tab Step by Step: Single Sign on to Amazon EC2 Based NET Applica tions from an On Premises Windows Domain 31 17 In the Remote IP address section choose the radio button next to These IP addresses 18 Choose Add 19 In the IP Address window in the This IP address or subnet field enter the Elastic IP address of the Adatum Web Server from Line 8 of the Important values worksheet 20 Choose OK Note : Port 80 is required for web server access to the Adatum CA certificate revocation list (CRL); CRLs cannot be served over HTTPS Test To test the scenario : 1 Log in to the domain joined client as Alan Shen ( alansh ) using the password from Line 4 of the Important values worksheet 2 In Internet Explorer enter https://adfsv1appadatumcom into the address bar 3 Press Enter You should be presented with access to the Adatum claims aware a pplication hosted on EC2 without being asked for a password Scroll down to note the claims that were passed to the application including the PriorityUsers and Email claims based on Active Directory group membership and attributes If you run into errors it’s possible that you are having certificate verification issues See Appendix B for more information Scenario 2: Corporate application accessed from anywhere This case is similar to Scenario 1 in that the scenario invo lves a corporate user needing federated access to an ASPNET application hosted by their employer on Amazon EC2 However in Scenario 2 Alan Shen needs access from a computer that is not joined to the Adatum domain – maybe the user’s personal computer at h ome or laptop in a coffee shop The use of an AD FS federation server proxy (or FS proxy) which sits in a Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 32 perimeter network outside the domain enables Adatum to handle federation functions for users regardless of their physical location by proxying comm unication with the internal federation server Using an FS proxy also improves security by keeping the number of computers with inbound access to the federation server to just the web server(s) and the proxy Without the FS proxy all external clients woul d need inbound port 443 access to the federation server This scenario adds two additional computers to the lab • Adatum FS Proxy This machine runs in a perimeter network and is accessible from any device with internet connectivity It will route user requests from the internet to the corporate federation server In our case we will host this machine on Amazon EC2 Specifically thi s machine will run: o Internet Information Services (web server) o Microsoft ASPNET 20 o Microsoft NET Framework 20 o Active Directory Federation Services (Adatum federation server proxy) The AD FS v1 FS proxy is available 
in Windows Server 2003 R2 Windows Server 2008 and Windows Server 2008 R2 (Enterprise Edition or above) Amazon EC2 currently offers Windows Server 2003 R2 and Windows Server 2008 (Datacenter Edition) as guest operating systems This lab used Windows Server 2008 Also in an additio nal effort to reduce external access to internal servers we will host the Adatum certificate revocation list (CRL) files here on the Adatum FS Proxy This will allow us to close port 80 inbound on the internal server • External Client This client computer is used to access the federated application from outside the Adatum domain to simulate the user experience from a coffee shop internet kiosk or home based computer The only requirement is Internet Explorer (version 5 and above) or another web browser wi th Jscript and cookies enabled In this lab we used the computer hosting the Adatum domain Hyper V images which was running Windows Server 2008 Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 33 Configuration Machine 1: Adatum internal serve r Create FS proxy client auth certificate template An FS proxy uses a client authentication certificate to securely communicate with federation servers 1 In Console 1 choose Certificate Templates 2 In the center pane right click on the Computer certificate template and choose Duplicate Template 3 In the Duplicate Tem plate dialog leave Windows Server 2003 Enterprise as the minimum CA for the new template and choose OK 4 In Properties of New Template make the following changes: a On the General tab in the Template display name field enter Adatum Proxy Client Auth b On the Request Handling tab check the box next to Allow private key to be exported c On the Subject Name tab choose the radio button next to Supply in the request d Choose OK in the warning about allowing user defined subject names with automatic issuance 5 Choose OK to create the new template 6 In Console 1 right click on the Certificate Authority \Adatum Certificate Server \Certificate Templates folder and select New > Certificate Template to Issue 7 Highlight Adatum Proxy Client Auth from the list 8 Choose OK Add new location to CDP exten sion in Adatum CA Later you will create a new website for Adatum’s CRL files This new website location must be referenced in all certificates issued by Adatum’s CA This is done by modifying the CDP extension on the CA For performance reasons you’ll also remove other existing CDP locations Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 34 1 In Console 1 right click on Certification Authority/Adatum Certificate Server and choose Properties 2 On the Extensions tab in the Select extension dropdown ensure the CRL Distribution Point (CDP) extension is selected 3 Choose the Add button 4 In the Add Location window in the Location field enter http://crladatumcom/ making sure to include the forward slash at the end 5 Choose the Insert button which adds the <CaName> variable (shown in the Variable dropdown list) as the next element of the address 6 Choose the Variable dropdown and choose <CRLNameSuffix> 7 Choose Insert 8 Choose the Variable dropdown and choose <DeltaCRLAllowed> 9 Choose Insert 10 Back up in the Location field place the cursor at the end of the address and complete the URL by entering crl 11 Choose OK 12 The final address you added should be: http://crladatumcom/ <CaName><CRLNameSuffix><DeltaCRLAllowed> crl 13 Back on the Exte nsions tab highlight the new location 14 Check 
the boxes next to Include in CRLs and Include in the CDP extension of issued certificates 15 Highlight the existing http://<ServerDNSName> location and then uncheck the boxes next to Include in CRLs and Includ e in the CDP extension of issued certificates 16 Highlight the existing ldap:// location and then uncheck the boxes next to Include in CRLs and Include in the CDP extension of issued certificates 17 Choose OK 18 Choose Yes to restart AD CS Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 35 Reissue Adatum CRL file 1 In Console 1 right click on Certification Authority/Adatum Certificate Server/Revoked Certificates and choose All Tasks > Publish 2 Choose OK to publish a new CRL with the enhanced CDP extension Create a new AD FS token signing certificate 1 In Console 1 right click on Certificates (Local Computer)/Personal/Certificates and choose All Tasks > Request New Certificate 2 In the Certificate Enrollment Wizard choose Next twice 3 Choose the link under Web Server 4 In Certificate Properties make the following changes: a On the Subject tab in the Subject Name area choose the Type dropdown and choose Common name b In the Value field enter Adatum Token Signing Cert2 c Choose Add d On the General tab in the Friendly name text box enter adatum ts2 e Choose OK 5 In the Certificate Enrollment window check the box next to Web Server 6 Choose the Enroll button 7 Choose Finish 8 In Console 1 check for the new certificate with friendly name adatum ts2 in Certificates (Local Computer)/Personal/Certificates Replace token signing certificate in AD FS 1 Choose Start > Administrative Tools > Active Directory Federation Services 2 Right click on Federation Service and choose Properties 3 On the General tab in the Token signing certificate section choose Select 4 Choose the certificate listed as adatum ts2 5 Choose OK 6 Choose Yes to complete the process Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an OnPremises Windows Domain 36 7 Right click on Federation Service/Trust Policy and choose Properties 8 On the Verification Certificates tab choose the old Adatum Token Signing Cert1 9 Choose Remove 10 Choose OK Machine 4: Adatum FS proxy Create a new instance from webserver AMI 1 In the EC2 Console choose the AMIs link in the left navigation area 2 Right click on the webserver AMI shown and choose Launch Instance to start the Request Instances Wizard 3 On the Instance Details page leave the defaults selected 4 On the Advanced Ins tance Details page accept the default settings 5 On the Create Key Pair page leave the default to use your existing key pair 6 On the Configure Firewall page choose Create a New Security Group 7 Name the new group Adatum FS Proxy 8 Choose the Select dropdown and add the following allowed connections: Application Transport Port Source Network/CIDR RDP TCP 3389 Lab management external IP/32* HTTP TCP 80 All internet HTTPS TCP 443 All internet *This is the external IP address of the machine being used to access the Amazon EC2 images via Remote Desktop recorded on Line 1 of the Important values worksheet 9 Choose Continue 10 In the Review page choose Launch to start t he instance 11 Choose Close Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 37 12 Choose Instances in the left navigation bar to see the status of your instance Associate an Elastic IP address 1 In the EC2 Console choose the Elastic IPs link in the left navigation area 2 Choose the 
Add custom firewall permission
1. In the EC2 Console, choose Security Groups in the left navigation bar.
2. Choose the Adatum FS Proxy security group to display its current settings.
3. In the lower pane, add the following permission:

   Method  Protocol  From Port  To Port  Source (IP or Group)
   Custom  TCP       445        445      Internal Server external IP/32*

   *This connection enables SMB over TCP, used to copy CRL files from the Adatum Internal Server using the Administrator account. Use the Adatum Internal Server external IP address on Line 5 of the Important values worksheet.

Note: The EC2 Request Instances Wizard allows the creation of security groups with the most popular allowed connections. For custom permissions such as this one, use the Security Groups facility in the EC2 Console.

Machine 1: Adatum internal server

Modify firewall settings
You must allow the FS proxy to communicate with the federation server, and you can now close port 80.
1. Choose Start > Administrative Tools > Windows Firewall with Advanced Security.
2. Choose Inbound Rules in the left navigation area.
3. In the right-hand pane, under Actions, choose Filter by Group and select Filter by Secure World Wide Web Services (HTTPS).
4. In the center pane, right-click on the World Wide Web Services (HTTPS Traffic In) rule and choose Properties.
5. In the Properties dialog box, choose the Scope tab.
6. In the Remote IP address section, choose the radio button next to These IP addresses.
7. Choose Add.
8. In the IP Address window, in the This IP address or subnet field, enter the Elastic IP address of the Adatum FS Proxy from Line 10 of the Important values worksheet.
9. Choose OK.
10. In the right-hand pane, under Actions, choose Filter by Group and select Filter by World Wide Web Services (HTTP).
11. In the center pane, right-click on the World Wide Web Services (HTTP Traffic In) rule and choose Disable Rule. This blocks all HTTP traffic into this machine.

Machine 4: Adatum FS proxy

Access instance using remote desktop connection
1. Choose Start > All Programs > Accessories > Communication > Remote Desktop Connection.
2. In the Computer text box, enter the Public DNS name for the machine shown in the EC2 Console (for example, ec2-12-345-678-910.compute-1.amazonaws.com).
3. Choose Connect.
4. In the login dialog box that appears, enter Administrator for the user name and the password you set for the Adatum Web Server (recorded on Line 9 of the Important values worksheet).
5. Choose OK.

Create client authentication certificate
1. Open Internet Explorer and go to https://fs1.corp.adatum.com/certsrv/
2. At the login prompt, log in as administrator with the password from Line 2 of the Important values worksheet to reach the Active Directory Certificate Services home page.
3. Choose the link to Request a certificate.
4. Choose the link for advanced certificate request.
5. Choose the link to Create and submit a request to this CA.
6. On the Advanced Certificate Request page, in the Certificate Template dropdown, choose Adatum Proxy Client Auth.
7. In the Identifying Information section, in the Name field, enter Adatum Proxy Client Auth and leave the other fields blank.
8. In the Additional Options section, in the Friendly Name field, enter proxy client auth.
9. Choose Submit.
10. Choose Yes to complete the request process; the certificate will be issued automatically.
11. Choose the link to Install this certificate.
12. Choose Yes on the warning dialog.
13. Leave the AD CS web application open for upcoming steps.
14. In Console 1, choose Certificates (Current User)/Personal/Certificates. The certificate for Adatum Proxy Client Auth should appear in the right-hand pane.
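The web enrollment pages used above can also be replaced by certreq, which ships with Windows, if you prefer a scriptable request. The sketch below is only an illustration: it assumes the template's short name is AdatumProxyClientAuth (short names often differ from the "Adatum Proxy Client Auth" display name), and certreq -submit needs RPC/DCOM access to the CA, which this lab's security groups allow from the internal server but not from the EC2-hosted proxy, so from the proxy the web enrollment pages remain the practical path.

# Describe the request; MachineKeySet=TRUE puts the key straight into the local computer store.
@"
[NewRequest]
Subject = "CN=Adatum Proxy Client Auth"
Exportable = TRUE
MachineKeySet = TRUE

[RequestAttributes]
CertificateTemplate = AdatumProxyClientAuth
"@ | Set-Content proxy-client-auth.inf

certreq -new proxy-client-auth.inf proxy-client-auth.req                                              # build the request
certreq -submit -config "fs1.corp.adatum.com\Adatum Certificate Server" proxy-client-auth.req proxy-client-auth.cer   # submit to the CA
certreq -accept proxy-client-auth.cer                                                                 # install the issued certificate

Because MachineKeySet=TRUE enrolls directly into the local computer store, a request made this way on the machine that will use the certificate would not need the export and import steps that follow.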
Move client authentication certificate to local computer certificate store
1. In Console 1, right-click on the Adatum Proxy Client Auth certificate and choose All Tasks > Export to launch the Certificate Export Wizard.
2. On the Export Private Key page, choose Yes, export the private key.
3. On the Export File Format page, leave the default setting.
4. Provide a password.
5. On the File to Export page, choose Browse and then choose Desktop.
6. In the File name field, enter adatum proxy client auth.
7. Choose Save > Next > Finish > OK to complete the export process.
8. In Console 1, right-click on Certificates (Local Computer)/Personal and choose All Tasks > Import to launch the Certificate Import Wizard.
9. On the File to Import page, choose Browse and find adatum proxy client auth.pfx on the desktop.
10. Choose Open.
11. Choose Next.
12. Enter the password.
13. Choose Next > Next > Finish > OK to complete the import process.

Create server authentication certificate
Here you will request an SSL certificate with a name that exactly matches the internal corporate federation server. This is by design, and it allows the proxy server to receive requests on behalf of the federation server.
1. Back in Internet Explorer, choose Home in the upper right corner of the Certificate Services web application.
2. Choose the link to Request a certificate.
3. Choose the link for advanced certificate request.
4. Choose the link to Create and submit a request to this CA.
5. On the Advanced Certificate Request page, in the Certificate Template dropdown, choose Extranet Web Server.
6. In the Identifying Information section, in the Name field, enter fs1.corp.adatum.com and leave the other fields blank.
7. In the Additional Options section, in the Friendly Name field, enter adatum proxy web ssl.
8. Choose Submit.
9. Choose Yes to complete the request process; the certificate will be issued automatically.
10. Choose the link to Install this certificate.
11. Choose Yes on the warning dialog.
12. In Console 1, choose Certificates (Current User)/Personal/Certificates.
13. The certificate for fs1.corp.adatum.com should be in the right-hand pane; right-click and select Refresh if necessary.

Move server authentication certificate to local computer certificate store
1. In Console 1, right-click on the fs1.corp.adatum.com certificate and choose All Tasks > Export to launch the Certificate Export Wizard.
2. On the Export Private Key page, choose Yes, export the private key.
3. On the Export File Format page, leave the default setting.
4. Enter the password.
5. On the File to Export page, choose Browse and then choose Desktop.
6. In the File name field, enter adatum proxy web ssl.
7. Choose Save > Next > Finish > OK to complete the export process.
8. In Console 1, right-click on Certificates (Local Computer)/Personal and choose All Tasks > Import to launch the Certificate Import Wizard.
9. On the File to Import page, choose Browse and find adatum proxy web ssl.pfx on the desktop.
10. Choose Open.
11. Choose Next.
12. Enter the password.
13. Choose Next > Next > Finish > OK to complete the import process.
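The export and import steps above (for both the client authentication and server authentication certificates) can also be scripted with certutil. Treat this as a sketch only: the password is a placeholder, the certificates are identified here by their subject names, and the exact certutil switches available vary by Windows version, so check certutil -exportPFX -? and certutil -importPFX -? on your build before relying on it.

# Export both user-store certificates, including their private keys, to PFX files.
certutil -user -p "P@ssw0rd" -exportPFX My "Adatum Proxy Client Auth" adatum-proxy-client-auth.pfx
certutil -user -p "P@ssw0rd" -exportPFX My "fs1.corp.adatum.com" adatum-proxy-web-ssl.pfx

# Import the PFX files into the local computer Personal store (run from an elevated prompt).
certutil -p "P@ssw0rd" -importPFX adatum-proxy-client-auth.pfx
certutil -p "P@ssw0rd" -importPFX adatum-proxy-web-ssl.pfx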
Install AD FS Federation Server proxy
1. In Server Manager, right-click on Roles and choose Add Roles to start the Add Roles Wizard.
2. On the Select Server Roles page, check the box next to Active Directory Federation Services.
3. On the Select Role Services page, check the box next to Federation Service Proxy.
4. On the Choose a Server Authentication Certificate page, highlight the existing certificate issued to fs1.corp.adatum.com.
5. Choose Next.
6. On the Specify Federation Server page, enter fs1.corp.adatum.com.
7. Choose Validate to check accessibility.
8. Choose Next.
9. On the Choose a Client Authentication Certificate page, highlight the existing certificate issued to Adatum Proxy Client Auth.
10. Choose Next.
11. Choose Install.
12. Choose Close to complete the install.

Create Adatum CRL website
1. Choose Start > Administrative Tools > Internet Information Services (IIS) Manager.
2. Right-click on the Sites folder in the left navigation area and choose Add Web Site.
3. In the Site name field, enter CRL.
4. In the Content Directory section, choose the button to the right of the Physical path field.
5. Browse to c:\inetpub\
6. Choose the Make New Folder button.
7. Name the new folder CRL.
8. Choose OK.
9. In the Binding section, in the Host name field, enter crl.adatum.com.
10. Choose OK.

Enable double escaping for CRL website in IIS
This task pertains to both Windows Server 2008 and Windows Server 2008 R2. As in Scenario 1, the IIS default request filtering behavior must be modified to allow Adatum's delta CRL files to be properly served to clients. In this case we are creating the website ourselves, so you must take this step in either Windows Server 2008 or Windows Server 2008 R2. The steps used to make the modification vary by operating system. (A scriptable PowerShell alternative appears at the end of this section.)

Windows Server 2008 (either local or running on EC2)
1. Choose Start > Run.
2. In the Run box, enter cmd.
3. Choose OK to open a command prompt.
4. Change the directory to c:\windows\system32\inetsrv
5. At the command prompt, enter the following and then press Enter:
appcmd set config "CRL" /section:system.webServer/security/requestFiltering /allowDoubleEscaping:true
Note: In Windows Server 2008, this process adds a web.config file to the CRL physical folder (c:\inetpub\CRL). Take care not to accidentally delete this file, as CRL checking will fail without it.

Windows Server 2008 R2 (local only, not available in EC2)
1. Choose Start > Administrative Tools > Internet Information Services (IIS) Manager.
2. In the left navigation area, under Sites, choose the CRL web site.
3. In the center pane of the console, in the IIS section, double-click on Request Filtering in Features View.
4. In the right-hand pane, choose Edit Feature Settings.
5. In the General section of the Edit Request Filtering Settings dialog box, check the box next to Allow double escaping.
6. Choose OK.

Share access to CRL website folder
1. In IIS Manager, right-click on the CRL web site under Sites and choose Edit Permissions.
2. In the CRL Properties window, on the Sharing tab, choose the Share button.
3. In the File Sharing window, choose the Share button.
4. In the Network Discovery prompt, select No, do not turn on network discovery.
5. Choose Done.
6. Choose Close.
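The CRL website creation and the double-escaping change described earlier in this section can also be done with the IIS PowerShell cmdlets; this is the scriptable alternative referenced above. It is a sketch that assumes the WebAdministration cmdlets are available (built in on Windows Server 2008 R2 and later; on Windows Server 2008 the IIS PowerShell snap-in is a separate install).

Import-Module WebAdministration   # on Windows Server 2008, use Add-PSSnapin WebAdministration instead

# Create the content folder and a site bound to the crl.adatum.com host header on port 80.
New-Item -Path "C:\inetpub\CRL" -ItemType Directory -Force | Out-Null
New-Website -Name "CRL" -PhysicalPath "C:\inetpub\CRL" -Port 80 -HostHeader "crl.adatum.com" | Out-Null

# Allow double escaping so delta CRL files (whose names contain a '+') are served correctly.
Set-WebConfigurationProperty -PSPath "IIS:\Sites\CRL" `
    -Filter "system.webServer/security/requestFiltering" `
    -Name "allowDoubleEscaping" -Value $true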
Machine 3: Adatum web server

Create new corp.adatum.com DNS zone
The Adatum federation server endpoint URL is https://fs1.corp.adatum.com/adfs/ls/. The web server gets this URL from the federation server's trust policy at regular intervals and redirects client browsers to this URL to acquire security tokens. Domain-joined clients, who have access to the corp.adatum.com domain and DNS zone, have no trouble (a) resolving this address or (b) accessing this server. External clients, however, would not be able to resolve this name or access this server, since they cannot access the internal Adatum domain. The server access solution is to employ the FS proxy to handle external client requests and route them through to the internal federation server. However, this does not fix the DNS resolution problem. By creating a corp.adatum.com internet DNS zone, external clients can resolve the federation server endpoint URL. The zone includes only one host entry, resolving the endpoint URL to the IP address of the FS proxy sitting outside the firewall. Domain-joined clients will continue to use the corporate corp.adatum.com DNS zone to access the federation server directly.
1. Choose Start > Administrative Tools > DNS.
2. In the left navigation area, right-click on the Forward Lookup Zones folder and select New Zone to start the New Zone Wizard.
3. On the Zone Type page, leave the default setting of Primary zone.
4. On the Zone Name page, enter corp.adatum.com in the text box and choose Next.
5. Accept the defaults on the Zone File and Dynamic Updates pages.
6. Choose Finish.
7. Under Forward Lookup Zones, right-click on corp.adatum.com and choose New Host (A or AAAA).
8. In the New Host Name field, enter fs1.
9. In the IP address field, enter the Elastic IP address for the Adatum FS Proxy from Line 10 of the Important values worksheet.
10. Choose Add Host > OK > Done.

Add DNS record for CRL website
1. Under Forward Lookup Zones, right-click on adatum.com and choose New Host (A or AAAA).
2. In the New Host Name field, enter crl.
3. In the IP address field, enter the Elastic IP address for the Adatum FS Proxy from Line 10 of the Important values worksheet.
4. Choose Add Host > OK > Done.

Point DNS client to local DNS server
The web server will use DNS to resolve the IP address of crl.adatum.com. Note that the DNS entry for fs1.corp.adatum.com (which points to the FS proxy) will not be used by this machine. Instead, the hosts file entry (which points to the actual federation server) will take precedence.
1. Choose Start > Control Panel > Network and Sharing Center > Manage Network Connections.
2. Right-click on Local Area Connection and choose Properties.
3. Double-click on the Internet Protocol Version 4 list item to open TCP/IPv4 Properties.
4. On the General tab, choose the radio button to Use the following DNS server addresses.
5. In the Preferred DNS server field, enter 127.0.0.1.
6. Choose OK twice.

Modify firewall settings
1. In the EC2 Console, choose Security Groups in the left navigation area.
2. Choose the Adatum Web Server security group to display its current settings.
3. Choose the Remove button next to the current HTTPS settings. Add the following:

   Method  Protocol  From Port  To Port  Source (IP or Group)
   HTTPS   TCP       443        443      0.0.0.0/0

Machine 1: Adatum internal server

Add FS proxy client authentication
certificate to federation server policy The federation server needs to register the public key for the client authentication certificate being used by the FS proxy in order to verify the signature on proxy communications 1 Open Console 1 on the desktop 2 Choose Certification Authority/Adatum Certificate Server/Issued Certificates 3 In the center pane double click on the issued certificate that used the Adatum Proxy Client Auth certificate template to open it 4 On the Details tab choose the Copy to file button to start the Certificate Export Wizard 5 On the Export File Format page leave the default setting 6 On the File to Export page choose Browse 7 Choose Desktop 8 In the File name field enter adatum proxy client auth public 9 Choose Save > Next > Finish > OK > OK to save adatum proxy client auth publiccer to the desktop 10 Choose Start > Administrative Tools > Active Directory Federation Services 11 Right click on Trust Policy under Federation Service and choose Properties 12 On the FSP Certificates tab choose Add 13 Choose the adatum proxy client auth publiccer file from the desktop 14 Choose Open > OK Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 47 Create scheduled task for automatic CRL file synchronization 1 In Console 1 right click on Certification Authority/Adatum Certificate Server and choose Properties 2 On the Auditing tab in the Events to audit list check the box next to Revoke certificates and publish CRLs 3 Click OK 4 Click Start > Administrative Tools > Task Scheduler 5 On the Actions menu choose Create task 6 On the General tab in the Name field enter publishcrl 7 In the Security Options section choose Run whether user is logged on or not 8 On the Triggers tab choose New 9 In the New Trigger dialog box in the Begin the task dropdown choose On an event 10 In the Settings area in the Log dropdown choose Security 11 In the Source dropdown choose Microsoft Windows security auditing 12 In the Event ID field enter 4872 13 Choose OK 14 On the Actions tab choose New 15 In the New Action dialog box in the Action dropdown leave Start a program 16 In the Program/script text box enter robo copy 17 In the Add arguments text box enter the following: c:\windows\system32 \certsrv\certenroll \\fsproxyelasticIP \crl Note : For fsproxyelasticIP use the Elastic IP address for the Adatum FS Proxy from Line 10 of the Important values worksheet 18 Choose OK twice 19 Enter your domain administrator password Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 48 20 Choose OK to complete the task scheduling process 21 In Console 1 right click on Certification Authority/Adatum Certificate Server/Revoked Certificates and choose All Tasks > Publish 22 Choose OK to publish a new CRL 23 Check for success of the scheduled task by viewing the folder on the FS proxy for the CRL application ( c:\Inetpub\CRL\) looking for the files such as Adatum Certificate Servercrl and Adatum Certificate Server+crl Machine 5: external client Change preferred DNS server 1 Choose Start > Control Panel > Network and Sharing Center > Manage Network Connections 2 Right click on an adapter with internet connectivity and choose Properties 3 Double click on Internet Protocol Version 4 (TCP/IPv4) to open TCP/IPv4 properties 4 On the General tab choose the radio button to Use the following DNS server addresses 5 In the Preferred DNS server field enter the Elastic IP address for the Adatum Web Server from Line 8 of the Important 
values worksheet 6 Choose OK twice Test 1 To test the scenario open Internet Explorer on the External Client computer enter https://adfsv1appadatumcom into the address bar 2 Choose Enter Note that instead of silent authentication you are presented with forms based authentication asking for our domain credentials 3 Log in as alansh using the password f rom Line 4 of the Important values worksheet This allows the federation server and federation server proxy to create the required security token Step by S tep: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 49 4 Because you did not add the Adatum root CA certificate to this computer’s certificate store you must choose Continue to this website on each of the certificate related security alerts that appear in the browser Using server authentication certificates rooted at a 3rd party distributed in Windows operating systems would eliminate th ese errors If you are running into errors it’s possible that you are having certificate verification issues See Appendix B for more information Scenario 3: Service provider application In the next two scenarios Alan Shen will access an EC2 based federated claims aware application owned and operated by a partner organization called Trey Research Trey Research will use AD FS to provide access to Adatum employees leveraging their existing Adatum domain credentials In Scenario 3 Alan Shen will access the Trey Research federated application from both a domain joined client (contacting the Adatum federation server directly) and an external client (through the Adatum FS proxy) Trey Research will operate an AD FS federatio n server in EC2 giving it the ability to receive and interpret security tokens and grant access to multiple partners like Adatum simultaneously The scenario adds two additional computers to the lab 1 Trey Research Federation Server This EC2 based machine will consume incoming security tokens from Adatum users and generate outgoing security tokens for the Trey Research federated application’s web server Specifically this machine will run: a Active Directory Domain Services (domain controller) b Domai n Name Services (Active Directory integrated DNS server) c Active Directory Certificate Services (root CA) d Internet Information Services (web server) e Microsoft ASPNET 20 f Microsoft NET Framework 20 g Active Directory Federation Services (Trey Research resou rce partner) Step by Step: Single Sign on to Amazon EC2 Based NET Applicati ons from an On Premises Windows Domain 50 The AD FS v1 federation server is available in Windows Server 2003 R2 Windows Server 2008 and Windows Server 2008 R2 (Enterprise Edition or above) Amazon EC2 currently offers Windows Server 2003 R2 and Windows Server 2008 (Datacenter Editio n) as guest operating systems This lab used Windows Server 2008 2 Trey Research Web Server This EC2 based machine will host the AD FS web agent and the Trey Research federated web application Specifically this machine will run: a Internet Information Servi ces (web server) b Microsoft ASPNET 20 c Microsoft NET Framework 20 d AD FS v1 claims aware web agent e Sample application The AD FS v1 web agent is available in Windows Server 2003 R2 Windows Server 2008 and Windows Server 2008 R2 (Standard Edition or above) Amazon EC2 currently offers Windows Server 2003 R2 and Windows Server 2008 (Datacenter Edition) as guest operating systems This lab used Windows Server 2008 Configuration Machine 1: Adatum internal server Export Adatum AD FS policy file 3 Choose 
Start > Administrative Tools > Active Directory Federation Services 4 Right click on Federation Service/Trust Policy in the left navigation area and choose Export Basic Partner Policy 5 Choose Browse and save the file to the desktop with the name adatumpolicyxml 6 Choose OK 7 Load the file to a web based storage solution like Microsoft OneDrive Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 51 Machine 6: Trey Research Federation Server Windows Server instance in EC2 1 In the EC2 Console choose Instances in the left navigation area 2 Choose the Launch Instances button to launch the Request Instances Wizard 3 Choose the Community AMIs tab and in the adjacent text box enter amazon/Windows Server2008 4 Find the entry for amazon/Windows Server2008 i386Base<version#> and choose the Select button to its right 5 On the Instance Details page leave the defaults selected 6 On the Advanced Instance Details page accept the default settings 7 On the Create Key Pair page leave the default to use your existing key pair 8 On the Configure Firewall page choose Create a New Security Group 9 Name the new group Trey Federation Server 10 Choose the Select dropdown and add the following allowed connections: Applicatio n Transport Port Source Network/CIDR RDP TCP 3389 Lab management external IP/321 * HTTPS TCP 443 All internet *This is the external IP address of the machine being used to access the Amazon EC2 images via Remote Desktop recorded on Line 1 of the Important values worksheet 11 Choose Continue 12 In the Review page choose Launch to start the instance 13 Choose Close 14 Choose Instances in the left navigation bar to see the status of your instance Associate an Elastic IP address 1 In the EC2 Console choose on the Elastic IPs link in the left navigation area Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 52 2 Choose the Allocate New Add ress button 3 Choose the Yes Allocate button 4 Once allocated right click on the address and select Associate Address 5 Choose the Trey Federation Server instance ID from the dropdown 6 Choose Associate 7 Record the Trey Research Federation Server E 8 Elastic IP address on Line 11 of the Important values worksheet Get Windows administrator password 1 In the EC2 Console choose Instances in the left navigation area 2 Once the Status shows as “running” and your Elastic IP address is listed in the Public DNS column right click on the Trey Federation Server instance and choose Get Windows Password 3 On your desktop open the ADFDSkeyPEM file with Notepad and copy the entire contents of the file (including the Begin and End lines such as : "BEGIN RSA PRIVATE KEY ") 4 In the EC2 Console paste the text into the Retrieve Default Windows Administrator Password window 5 Click inside the text box once to enable the Decrypt Password button 6 Choose Decrypt Password 7 Copy the Computer User and Decrypted Password information into a text file and save to your desktop 8 Choose Close in the Retrieve Password window Access instance using remote desktop connection 1 Choose Start > All Programs > Accessories > Communication > Remote Desktop Connection 2 In the Computer text box copy/paste or type the Computer Name from your text file (for example ec2123 45678910compute 1amazonawscom ) 3 Choose Connect 4 In the login dialog box that appears enter Administrator for user name Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 53 5 Enter 
the Decrypted Password from your text file into the Password field taking care to get capitalization correct 6 Choose OK 7 In the Set Network Location window choose Public Location 8 Choose Close Optional 9 Once inside the instance change the Administrator password by choosing CTRL ALTEND and choosing the Change a password link 10 Record the Trey Research Federation Server administrator password on Line 12 of the Important values worksheet Optional 1 Turn off the Internet Explorer Enhanced Security Configuration for administrators 2 In Server Manager on the Server Summary page under Security Information choose Configure IE ESC 3 Under Administrators choose the Off radio button 4 Choose OK Initial configuration 1 Click Start > All Programs > Ec2ConfigService Settings 2 On the General tab uncheck the box next to Set Computer Name 3 Choose OK 4 In Server Manager on the Server Summary page under Computer Information choose Change System Properties 5 On the Computer Name tab choose the Change button 6 In the Computer Name field enter fs1 then choose OK 7 Choose OK twice 8 Choose Close 9 Choose Restart Now 10 Using Remote Desktop log back into the machine with the Administrator account and password from Line 12 of the Important values worksheet Step by Step: Single Si gnon to Amazon EC2 Based NET Applications from an On Premises Windows Domain 54 Adjust clock settings 1 Right click on the Windows Taskbar and choose Properties 2 On the Notification Area tab check the box to show the Clock 3 Choose OK 4 Right click over the clock in the taskbar and choose Adjust Date/Time 5 On the Date and Time tab choose the Change time zone button and adjust to your time zone 6 Choose OK twice Install /configure Active Directory Domain Services (AD DS) Although this federation server will not be authenticating users AD FS v1 federation server computers must be members of a domain Therefore this machine will run Active Directory Domain Services even t hough the directory will contain no users and the domain will have no other member machines 1 In Server Manager right click on Roles and choose Add Roles to start the Add Roles Wizard 2 On the Select Server Roles page check the box next to Active Directory Domain Services 3 Choose Next twice 4 Choose Install 5 On the Installation Results page choose the link for the Active Directory Domain Services Installation Wizard ( dcpromoexe ) 6 On the Choose a Deployment Con figuration page choose Create a new domain in a new forest 7 On the Name the Forest Root Domain page enter treyresearchnet 8 On the Set Forest Functional Level and Set Domain Functional Level pages leave the default setting of Windows 2000 9 On the Additional Domain Controller Options page leave DNS Server checked 10 On the warning about static IP addresses choose Yes the computer will use a dynamically assigned IP address Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 55 11 When prompted about not finding an authoritative DNS zone choose Yes to continue 12 Complete the wizard keeping all other default values 13 When prompted restart computer 14 Using Remote Desktop log back into the machine with the TREYRESEARCH \administrator account and the password from Line 12 of the Important values worksheet Add DNS forwarder from Trey research domain DNS to internet DNS This is required so that the federation server can resolve the Adatum CRL location DNS name 1 Choose Start > Administrative Tools > DNS 2 Choose FS1 in the le ft navigation area 3 Rightclick on 
Forwarders in the right hand pane and choose Properties 4 On the Forwarders tab c hoose Edit 5 In the Click here to add an IP address or DNS name field enter the Adatum Web Server Elastic IP address from Line 8 of the Important values worksheet 6 Press Enter 7 Highlight any other forwarders previously listed and c hoose Down to make your new forwarder is the first one listed 8 Choose OK twice Install /Configure Active Directory Certificate Services (AD CS) 1 In Server Manager right click on Roles and select Add Roles to start the Add Roles Wizard 2 On the Select Server Roles page check the box next to Active Directory Certificate Services 3 On the Select Role Services page choose Certification Authority and Certification Authority Web Enrollment 4 Choose the Add Required Features button to allow Server Manager to add IIS to the installation process 5 On the Specify Setup Type page choose Enterprise Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 56 6 On the Specify CA Type page choose Root CA 7 On the Setup Private Key page choose Create a new private key and accept the default cryptography settings 8 On the Configure CA Name page in the Common Name for this CA field enter Trey Certificate Server 9 Complet e the wizard keeping all other default values 10 Choose Close to finish the install 11 Choose Start > Run 12 In the Run box enter mmc 13 Choose OK to start the Microsoft Management Console 14 In the File menu choose Add/Remove Snap in 15 Highlight the Certificates snap in and c hoose the Add button 16 Choose computer account and local computer in the pages that follow 17 Highlight the Certificate Templates snap in and c hoose Add 18 Highlight the Certification Authority snap in and c hoose Add 19 Choose local c omputer in the page that follows 20 Choose OK 21 Choose File > Save and save the new MMC console (Console 1) to the machine desktop for future use Enable double escaping for CRL website in IIS 1 Choose Start > Run 2 In the Run box enter cmd 3 Choose OK to open a command prompt 4 Change the directory to c:\windows\system32\inetsrv 5 At the command prompt enter the following and press Enter : appcmd set config “Default Web Site/CertEnroll” section:systemwebServer/security/requestFiltering allowDouble Escaping:true Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 57 Configure AD CS certificate templates 1 In Console 1 choose Certificate Templates in the left navigation area 2 In the center pane right click on the Web Server certificate template and choose Duplicate Template 3 In the Duplicate Template dialog leave Windows Server 2003 Enterprise as the minimum CA for the new template 4 Choose OK 5 In Properties of New Template make the following changes: a On the General tab in the Template display name field enter Extranet Web Server b On the Request Handling tab check the box next to Allow private key to be exported 6 Choose OK to create the new template 7 In the center pane right click on the Web Server certificate template and choose Properties 8 In the Security tab c hoose Add 9 In the objec t names text box enter Domain Controllers 10 Choose Check Names 11 Once verified c hoose OK 12 Back in the Security tab highlight the Domain Controllers list item 13 In the Allow column check the Read and Enroll permissions 14 Choose OK 15 Choose Start > Adminis trative Tools > Services 16 Right click on Active Directory Certificate Services and select Restart 17 In Console 1 in the left 
navigation area right click on Certificate Authority \Trey Certificate Server \Certificate Templates and select New > Certificate Template to Issue 18 Highlight Extranet Web Server from the list 19 Choose OK Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 58 Create server authentication certificate 1 In Console 1 right click on Certificates (Local Computer)/Personal/Certificates and choose All Tasks > Request New Certificat e 2 In the Certificate Enrollment Wizard choose Next 3 Choose the link under Web Server 4 In Certificate Properties make the following changes: a On the Subject tab in the Subject Name area choose the Type dropdown and select Common name b In the Value field enter fs1treyresearchnet c Choose Add 5 On the General tab in the Friendly name text box enter trey fs ssl 6 Choose OK 7 In the Certificate Enrollment window check the box next to Web Server 8 Choose the Enroll button 9 Choose Finish 10 In Console 1 check for the new certificate with friendly name trey fs ssl in Certificates (Local Computer)/Personal/Certificates Create AD FS token signing certificate 1 In Console 1 right click on Certificates (Local Computer)/Personal/Certificates and sele ct All Tasks > Request New Certificate 2 In the Certificate Enrollment Wizard choose Next 3 Choose the link under Web Server 4 In Certificate Properties make the following changes: a On the Subject tab in the Subject Name area choose the Type dropdown and select Common name b In the Value field enter Trey Token Signing Cert1 c Choose Add Step by Step: Single Sign on to Amazo n EC2 Based NET Applications from an On Premises Windows Domain 59 5 On the General tab in the Friendly name text box enter trey ts1 6 Choose OK 7 In the Certificate Enrollment window c hoose the box next to Web Server 8 Choos e the Enroll button 9 Choose Finish 10 In Console 1 check for the new certificate with friendly name “ trey ts1 ” in Certificates (Local Computer)/Personal/Certificates Add Adatum Root CA certificate The Trey Research federation server needs the root CA certificate for Adatum in order to perform token signing certificate CRL verification 1 Open Internet Explorer and in the address bar enter http://crladatumcom/fs 1corpadatumcom_Adatum%20Certificate%20Servercrt 2 In the File Download – Security Warning box choose Save and save the file to the desktop 3 Choose Close 4 In Console 1 right click on Certificates (Local Computer)/Trusted Root Certification Aut horities/Certificates and choose All Tasks > Import to launch the Certificate Import Wizard 5 On the File to Import page c hoose Browse find the Adatum root CA certificate file on the desktop and c hoose Open 6 Choose Next > Next > Finish > OK to complete the import process Install Active Directory Federation Services (AD FS) 1 In Server Manager right click on Roles and select Add Roles to start the Add Roles Wizard 2 On the Select Server Roles page check the box next to Active Directory Federation Services 3 On the Select Role Services page check the box next to Federation Service 4 Click the Add Required Role Services button to allow Server Manager to add IIS features to the installation process Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Wind ows Domain 60 5 Choose Next 6 On the Choose a Server Authentication Certificate page highlight the existing certificate issued to fs1treyresearchnet with the intended purpose Server Authen tication 7 Choose Next 8 On the Choose a Token Signing 
Certificate page highlight the existing certificate issued to Trey Token Signing Cert1 9 Choose Next 10 Accept all other defaults and c hoose Install Initial AD FS configuration 1 Click Start > Administrative Tools > Active Directory Federation Services 2 Right click on Account Stores under Federation Service/Trust Policy/My Organization and choose New > Account Store 3 In the Add Account Store Wizard leave AD DS as the store type and click through to add the local AD domain 4 Right click on My Organization/Organization Claims and choose New > Organization claim 5 In the Claim name field enter GoldUsers 6 Choose OK Export Trey Research AD FS policy file 1 Choose Start > Administrative Tools > Activ e Directory Federation Services 2 Right click on Federation Service/Trust Policy in the left navigation area and choose Export Basic Partner Policy 3 Choose Browse and save the file to the desktop with the name adatumpolicyxml 4 Choose OK 5 Load the file to a web based storage solution like OneDrive Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 61 Machine 7: Trey Research Web Server Create new instance from Webserver2 AMI One could use the existi ng Adatum Web Server to host the Trey Research federated application However since each application requires SSL server authentication certificates with different DNS suffixes ( adatumcom treyresearchnet ) and EC2 does not offer multiple IP addresses pe r single machine instance using the same server would require either: • Using a multi domain certificate (which AD CS does not issue) or • Using a port other than 443 for SSL communication with one of the applications (which can cause trouble when clients are limited to 443 only for HTTPS) Therefore this lab uses dedicated web servers for each organization and port 443 exclusively 1 In the EC2 Console choose the AMIs link in the left navigation area 2 Right click on the webserver2 AMI and choose Launch Instance to start the Request Instances Wizard 3 On the Instance Details page leave the defaults selected 4 On the Advanced Instance Details page accept the default settings 5 On the Create Key Pair page leave the default to use your existing key p air 6 On the Configure Firewall page choose Create a New Security Group 7 Name the new group Trey Web Server 8 Choose the Select dropdown and add the following allowed connections: Application Transport Port Source Network/CIDR RDP TCP 3389 Lab management external IP/32* HTTPS TCP 443 All internet *This is the external IP address of the machine being used to access the Amazon EC2 images via Remote Desktop recorded on Line 1 of the Important values worksheet 9 Choose Continue Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 62 10 In the Review page choose Launch to start the instance 11 Choose Close 12 Choose Instances in the left navigation bar to see the status of your instance Associate an Elastic IP address 1 In the EC2 Console choose the Elastic IPs link in the left navigation area 2 Choose the Allocate New Address button 3 Choose the Yes Allocate button 4 Once allocated right click on the address and choose Associate Address 5 Choose the Trey Web Server instance ID from the dropdown and c hoose Associate 6 Record the Trey Research Web Server Elastic IP addres s on Line 13 of the Important values worksheet Access instance using remote desktop connection 1 Choose Start > All Programs > Accessories > Communication > Remote Desktop Connection 2 In the 
Computer text box enter the Public DNS name for the machine shown in the EC2 Console (for example ec212345678910compute 1amazonawscom ) 3 Choose Connect 4 In the login dialog box that appears enter Administrator for user name and the password you set for the Adatum Web Server (recorded on Line 9 of the Important values worksheet ) 5 Choose OK Add record for Trey Federation Server to hosts file 1 Double click the shortcut on the desktop for the hosts file 2 Choose Notepad 3 Choose OK 4 Add the name and external IP address of the Trey Federation Server from Line 11 of the Important values worksheet as shown in the following example: Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 63 12345678910 fs1treyresearchnet 5 Save and close the file Install Trey Research root CA certificate 1 Open Internet Explorer and go to https://fs1treyresearchnet/certsrv/ 2 In the Certificate Error page c hoose the link to Continue to this website 3 At the login prompt log in as administrator with the password from Line 12 of the Important values worksheet to reach the Active Directory Certificate Services home page 4 At the bottom of the page c hoose the link to Download a CA certificate certificate chain or CRL 5 On the next page c hoose the link to Download CA certificate 6 Save the resulting certnewcer file to the desktop 7 Choose Yes to overwrite the previous one there Leave the AD CS web application open for use in upcoming steps 8 In Console 1 right click on Certificates (Local Computer)/Trusted Root Certification Authorities/Certificates and choose All Tasks > Import to launch the Certificate Import Wizard 9 On the File to Import page c hoose Browse 10 Find the certnewcer file on the desktop and choose Open 11 Choose Next twice 12 Choose Finish 13 Choose OK to complete the import process Create server authentication certificate 1 Back in Internet Explorer choose Home in the upper right corner of the Certificate Services web application 2 Choose the link to Request a certificate 3 Choose the link for advanced certificate request 4 Choose the link to Create and submit a request to thi s CA 5 If prompted about the page requiring HTTPS choose OK Step by Step: Single Sign on to Amazon EC2 Based N ET Applications from an On Premises Windows Domain 64 6 If prompted to run the Certificate Enrollment Control addon choose Run 7 On the Advanced Certificate Request page in the Certificate Template dropdown select Extranet Web Server 8 In the Identifying Information section in the Name field enter adfsv1apptreyresearchnet and leave the other fields blank 9 In the Additional Options section in the Friendly Name field enter trey web ssl 10 Choose Submit 11 Choose Yes to complete the request process; the certificate will be issued automatically 12 Choose the link to Install this certificate 13 Choose Yes on the warning dialog 14 In Console 1 choose Certificates (Current User)/Personal/Certificates 15 The certificate for adfsv1apptreyresear chnet should appear in the right hand pane Move server authentication certificate to local computer certificate store 1 In Console 1 right click on the adfsv1appadatumcom certificate and choose All Tasks > Export to launch the Certificate Export Wizard 2 On the Export Private Key page choose Yes export the private key 3 On the Export File Format page leave the default setting 4 Provide the password 5 On the File to Export page choose Browse > Desktop and in the File name field enter trey web ssl 6 Choose Save > Next > Finish > OK to 
complete the export process 7 In Console 1 right click on Certificates (Local Computer)/Personal and choose All Tasks > Import to launch the Certificate Import Wizard 8 On the File to Import page c hoose Browse and find trey web sslpfx on the desktop 9 Choose Open 10 Choose Next Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 65 11 Enter the password 12 Choose Next > Next > Finish > OK to complete the import process Edit sample application The sample application (which is already on this machine from the original machine image) needs to be changed from belonging to Adatum to Trey Research 1 Choose Start > Administrative Tools > Internet Information Services (IIS) Manager 2 In the Sites folder right click on ADFSv1 app and choose Edit Bind ings 3 Highlight the HTTPS entry and then c hoose the Edit button 4 In the SSL Certificate dropdown choose trey web ssl 5 Choose OK 6 Choose Close 7 In the application properties window make the following changes: a Right click on the ADFSv1 app website and choose Explore b Right click on defaultaspx (not defaultaspxcs) and choose Edit c On the Edit menu choose Replace d Enter Adatum in the Find what field e Enter Trey Research in the Replace with field f Choose Replace All g Close the Replace tool h Save and close defaultaspx 8 Right click on webconfig and choose Edit 9 In the <websso> section replace the current <returnurl> entry with <returnurl>https://adfsv1apptreyresearchnet/</returnurl> 10 Replace the current <fs> entry with <fs>https://fs1treyresearchnet/adfs/fs/federationserverservi ceasmx</fs> 11 Save and close webconfig Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 66 Machine 3: Adatum web server Add Treyresearchnet zone and records to internet DNS 1 Click Start > Administrative Tools > DNS 2 In the left navigation area right click on the Forward Lookup Zones folder and choose New Zone to start the New Zone Wizard 3 On the Zone Type page leave the default setting of Primary zone 4 On the Zone Name page enter treyresearchnet in the text box 5 Choose Next 6 Accept the defaults on the Zone File and Dynamic Updates pages 7 Choose Finish 8 Under Forward Lookup Zones right click on treyresearchnet and choose New Host (A or AAAA) 9 In the New Host Name field enter fs1 10 In the IP address field enter the Elastic IP address for the Trey Research Federation Server from Line 11 of the Important values worksheet 11 Choose Add Host > OK 12 In the New Host Name field enter adfsv1app 13 In the IP address field enter the Elastic IP address for the Trey Research Web Server from Line 13 of the Important Values Worksheet 14 Choose Add Host > OK > Done Machine 1: Adatum internal server Add Trey Research as a resource partner 1 Download the treypolicyxml file you created on the Trey Research Federation Server earlier from your preferred internet based storage solution 2 Save treypolicyxml to your desktop 3 Choose Start > Administrative Tools > Active Directory Federation Services Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 67 4 Right click on Federation Service/Trust Policy/Partner Organizations/Resource Partners and choose New > Resource Partner to start the Add Resource Partner Wizard 5 On the Import Policy File page c hoose Yes 6 Browse to treypolicyxml and c hoose Open 7 Choose Next 8 On the Resource Partner Details page change the Display name to Trey Research 9 Choose Next 10 In the Federation 
Scenario page leave Federated Web SSO selected 11 In the Account Partner Identity Claims page leave the UPN and Email claims selected 12 In the Select UPN Suffix page leave the default pass through all UPN suffixes unchanged selected 13 In the Select E mail Suffix page leave the default pass through all E mail suffixes unchanged selected 14 Choose Next > Finish to complete the wizard 15 Right click on Partner Organizations/Resource Partners/Trey Research and select New > Outgoing Group Claim Mapping 16 Leave PriorityUsers as the Organization Group Claim 17 In the Outgoing group claim name field enter CliamInTransit 18 Choose OK Add Trey Research root CA certificate to end user desktops with group policy To avoid SSL certificate warnings client desktops need to trust the SSL certificates used by Trey Research at the application and federation serv er 1 Open Internet Explorer and in the address bar enter https://fs1treyresearchnet/certenroll/fs1treyresearchnet_Trey%20Certificate%2 0Servercrt 2 In the Certificate Error page c hoose the link to Continue to this website Step by Ste p: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 68 3 In the File Download – Security Warning box choose Save and save the file to the desktop 4 Choose Close 5 Choose Start > Administrative Tools > Group Policy Management 6 Right click on Forest:corpadatumcom/Domains/corpadatumcom/Default Domain Policy and choose Edit 7 Under Computer Configuration/Policies/Windows Settings/Security Settings/Public Key Policies right click on Trusted Root Certification Authorities and choose Import to start the Certificate Import Wizard 8 On the File to Import page c hoose Browse and select the Trey root CA certificate you just downloaded from the desktop 9 Choose Open 10 Choose Next > Next > Finish > OK to complete the import process In this lab domai nwide Group Policy updating results in the Adatum Internal Server also getting the Trey root CA installed However this isn’t a requirement Machine 6: Trey Research Federation server Add sample application to AD FS 1 Choose Start > Administrative Tools > Active Directory Federation Services 2 Right click on Applications under Federation Service/Trust Policy/My Organization and choose New > Application 3 Enter the following in the Add Application Wizard : a On the Application Type page leave Claims aware application as the application type b On the Application Details page in the Application display name field enter ADFSv1 app c In the Application URL field enter https://adfsv1apptreyresearchnet/ d On the Accepted Identity Claims page check the box next to User principal name (UPN) and Email Step by Step: Single Sign on to Amazon EC2 Based NET Application s from an On Premises Windows Domain 69 e Choose Next twice f Choose Finish 4 Choose ADFSv1 app under Applications 5 In the right hand window right click on the GoldUsers group claim and choose Enable Add Adatum as an account partner 1 Download the adatumpolicyxml file you created on the Adatum Internal Server from your preferred internet based storage solution 2 Save to your desktop 3 Right click on Federation Service/Trust Policy/Partner Organizations/Account Partners and choose New > Account Partner to start the Add Account Partner Wizard 4 On the Import Policy File page c hoose Yes 5 Browse to adatumpolicyxml and c hoose Open 6 Choose Next 7 On the Resource Partner Details page leave the default settings 8 On the Account Partner Verification Certificate page leave Use the verification 
certificate in the import policy file selected 9 On the Federation Scenario page leave Federated Web SSO selected 10 In the Account Partner Identity Claims page leave UPN and Email claims selected 11 In the Accepted UPN Suffixes page in the Add a new suffix field enter corpadatumcom 12 Choose Add 13 Choose Next 14 In the Accepted E mail Suffixes page in the Add a new suffix field enter adatumcom 15 Choose Add 16 Choose Next > Next > Finish to complete the wizard Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 70 17 Under Partner Organizations/Account Partners right click on Adatum and choose New > Incoming Group Claim Mapping 18 In the Incoming group claim name field enter ClaimInTransit 19 Leave GoldUsers as the Organization Group Claim 20 Choose OK Modify firewall settings The Trey Research web server needs to read CRL information from the Trey Research CA which is running on this machine Since CRLs cannot be accessed via HTTPS Port 80 must be opened (but can be scoped to only this web server) 1 In the Amazon EC2 Console choose Security Groups in the left navigation bar 2 Choose the Trey Federation Server row to display its current settings 3 In the lower pane add the fol lowing firewall permission and c hoose Save : Connection Method Protocol From Port To Port Source (IP or Group) HTTP TCP 80 80 Trey web server external IP/321 * *This is the Elastic IP address for the Trey Research Web Server from Line 13 of the Important values worksheet Machine 2: Domain joined client Update group policy settings 1 Choose Start 2 In the search field enter cmd and press Enter to open a command prompt 3 At the prompt enter gpupdate/force to ensure the Trey Research root CA certificate is installed on the client machine Test Before testing on either the domain joined or external client you should clear browser cookies to reiniti ate the complete federation process Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 71 1 In Internet Explorer c hoose Tools > Internet Options 2 On the General tab under Browsing history choose the Delete button 3 Make sure the box next to Cookies is checked and c hoose Delete 4 To test the scenario open Internet Explorer in the domain joined client enter https://adfsv1apptreyresearchnet/ in the address bar and choose Enter Note that the Trey Research federation server provides a home realm discovery service to redirect use rs without security tokens to the proper identity provider 5 In the dropdown choose Adatum because Alan Shen in an Adatum user Silent Integrated Windows Authentication ensures that the user is not asked for credentials when domain joined When the appli cation is shown scroll to the bottom of the page Note that the group claim “PriorityUsers” was transformed to “GoldUsers” by the federation servers Claim transformation allows for increased flexibility when sending claims to partner organizations 6 To fu rther test the scenario open Internet Explorer on the External Client computer enter https://adfsv1apptreyresearchnet/ into the address bar and press Enter Note that instead of silent authentication you are presented with forms based authentication asking for our domain credentials Log in as alansh using the password from Line 4 of the Important values worksheet If you are running into errors it’s possible that you are ha ving certificate verification issues See Appendix B for more information Scenario 4: Service provider application with added security This 
scenario is essentially the same as Scenario 3 with the difference being the additi on of an AD FS proxy to the Trey Research AD FS deployment in Amazon EC2 By using the AD FS proxy Trey Research can limit direct access to its federation server to only its web servers and the proxy server instead of allowing all inbound clients to acce ss the federation server Since the federation server issues security tokens used by the web servers it is a high value resource that should be protected Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 72 This scenario does not require any additional machines While earlier we used a separate machine for the Adatum FS proxy the proxy can be installed on the same machine as our Trey Research Web Server as long as the Default Web Site in IIS is available (which it is) However to enable hosting of multiple SSL websites on the same web server we will use a wildcard certificate and custom IIS configuration; this is discussed in detail below Configuration Machine 6: Trey Research federation server Create FS proxy client auth certificate template 1 In Console 1 choose Certificate Templates 2 In the center pane right click on the Computer certificate template and choose Duplicate Template 3 In the Duplicate Template dialog leave Windows Server 2003 Enterprise as the minimum CA for the new template 4 Choose OK 5 In Properties of New Template make the following changes: a On the General tab in the Template display name field enter Trey Proxy Client Auth b On the Request Handling tab check the box next to Allow private key to be exported c On the Subject Name tab c hoose the radio button next to S upply in the request Click OK in the warning about allowing user defined subject names with automatic issuance 6 Choose OK to create the new template 7 In Console 1 right click on the Certificate Authority \Trey Certificate Server\Certificate Templates folder and select New > Certificate Template to Issue 8 Highlight Trey Proxy Client Auth from the list 9 Choose OK Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 73 Machine 7: Trey Research web server Create wildcard server authentication certificate Both the Trey Research FS proxy and the Trey Research sample application require SSL server authentication certificates It is generally not possible to support multiple SSL applications on the same web server unless the applications use different ports (which has its issues) or different IP address es (which isn’t possible in EC2) To overcome this limitation it is possible to use a single SSL certificate for multiple applications simultaneously – if that certificate supports multiple domains or if the certificate is a wildcard SSL certificate A w ildcard SSL certificate would be issued for example to *treyresearchnet and thus be appropriate for any applications using that DNS suffix In this lab we will use wildcard certificates in conjunction with host headers to run the FS proxy (fs1treyre searchnet ) and sample application (adfsv1apptreyresearchnet ) on the same web server The special configuration steps are not supported in the IIS Manager interface; instead we will use command line scripts as described by Microsoft here 1 Open Internet Explorer and go to https:// fs1treyresearchnet/certsrv / 2 At the login prompt log in as administrator with the password from Line 12 of the Important values worksheet to reach the Active Directory Certificate Services home page 3 Choose the 
link to Request a certificate 4 Choose the link for advanced certificate request 5 Choose the link to Create and submit a request to this CA 6 On the Advanced Certificate Request page in the Certificate Template dropdown choose Extranet Web Server 7 In the Identifying Information section in the Name field enter *treyresearchnet and leave the other fields blank 8 In the Additional Options section in the Friendly Name field enter trey wild ssl 9 Choose Submit 10 Choose Yes to complete the request process; the certificate will be issued automatically Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 74 11 Choose the link to Install this certifica te 12 Choose Yes on the warning dialog 13 In Console 1 choose Certificates (Current User)/Personal/Certificates • The certificate for *treyresearchnet should be in the right hand pane • Leave the AD CS web application open for use in upcoming steps Move wildcard certificate to local computer certificate store 1 In Console 1 right click on the *treyresearchnet certificate and choose All Tasks > Export to launch the Certificate Export Wizard 2 On the Export Private Key page choose Yes export the private key 3 On the Export File Format page leave the default setting 4 Provide a password 5 On the File to Export page choose Browse 6 Choose Desktop 7 In the File name field enter trey wild ssl 8 Choose Save > Next > Finish > OK to complete the export process 9 In Console 1 right click on Certificates (Local Computer)/Personal and choose All Tasks > Import to launch the Certificate Import Wizard 10 On the File to Import page c hoose Browse and find trey wild sslpfx on the desktop 11 Choose Open 12 Choose Next 13 Enter the password 14 Choose Next > Next > Finish > OK to complete the import process Create client authentication certificate 1 Back in Internet Explorer c hoose Home in the upper right corner of the Certificate Services web application 2 Choose the link to Request a certificate 3 Choose the link for advanced certificate request Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 75 4 Choose the link to Create and submit a request to this CA 5 On the Advanced Certificate Request page in the Certificate Template dropdown choose Trey Proxy Client Auth If the template isn’t yet showing in the dropdown list you can speed the process by restarting the Active Directory Certificate Services service on the Trey Research Federation Server 6 In the Identifying Informati on section in the Name field enter Trey Proxy Client Auth and leave the other fields blank 7 In the Additional Options section in the Friendly Name field enter proxy client auth 8 Choose Submit 9 Choose Yes to complete the request process; the certific ate will be issued automatically 10 Choose the link to Install this certificate 11 Choose Yes on the warning dialog 12 In Console 1 choose Certificates (Current User)/Personal/Certificates 13 The certificate for Trey Proxy Client Auth should be in the right hand pane Move client authentication certificate to local computer certificate store 1 In Console 1 right click on the Trey Proxy Client Auth certificate and choose All Tasks > Export to launch the Certificate Export Wizard 2 On the Export Private Key page choose Yes export the private key 3 On the Export File Format page leave the default setting 4 Provide a password 5 On the File to Export page choose Browse 6 Choose Desktop 7 in the File name field enter trey proxy client auth 8 Choose Save 
> Next > Finish > OK to complete the export process 9 In Console 1 right click on Certificates (Local Computer)/Personal and choose All Tasks > Import to launch the Certificate Import Wizard Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 76 10 On the File to Import page c hoose Browse and find trey proxy client authpfx on the desktop 11 Choose Open 12 Choose Next 13 Enter the password 14 Choose Next > Next > Finish > OK to complete the import process Install AD FS F ederation Server proxy 1 In Server Manager choose Roles in the left navigat ion area 2 In the right hand pane under Active Directory Federation Services choose the link to Add Role Services 3 On the Select Role Services page check the box next to Federation Service Proxy 4 On the Choose a Server Authentication Certificate page highlight the existing certificate issued to *treyresearchnet 5 Choose Next 6 On the Specify Federation Server page enter fs1treyresearchnet 7 Choose Validate to check accessibility 8 Choose Next 9 On the Choose a Client Authentication Certificate page highlight the existing certificate issued to Trey Proxy Client Auth and c hoose Next 10 Choose Install 11 Choose Close to complete the install Apply wildcard certificate to sample application 1 Choose Start > Administrative Tools > Internet I nformation Services (IIS) Manager 2 In the Sites folder right click on ADFSv1 app and choose Edit Bindings 3 Highlight the HTTPS entry and then c hoose the Edit button 4 In the SSL Certificate dropdown choose trey wild ssl 5 Choose OK Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 77 6 Choose Close Configure server bindings for SSL host headers These are the steps to set up multiple applications to use the wildcard certificate with host headers which isn’t possible through the IIS Manager interface The steps add a new HTTPS binding with a host heade r to each website and then delete the previous HTTPS binding that doesn’t include a host header 1 Choose Start > Run 2 In the Run box enter cmd and c hoose OK to open a command prompt 3 Change the directory to c:\windows\system32\inetsrv 4 At the command p rompt enter the following and press Enter : appcmd set site /sitename:“Default Web Site” /+bindings[protocol=’https’bindingInformation=’*:443:fs1treyre searchnet’] You should see the following response: SITE object “Default Web Site” changed 5 Enter the following and press Enter : appcmd set site /sitename:“Default Web Site” / bindings[protocol=’https’bindingInformation=’*:443:’] 6 Enter the following and press Enter : appcmd set site /sitename:“ADFSv1 app” /+bindings[protocol=’https’bindingI nformation=’*:443:adfsv1app treyresearchnet’] 7 Enter the following and press Enter : appcmd set site /sitename:“ADFSv1 app” / bindings[protocol=’https’bindingInformation=’*:443:’] Step by Step: Single Sign on to Amazon EC2Based NET Applications from an On Premises Windows Domain 78 8 In Internet Explorer in the Sites folder right click on Default Web Si te and select Manage Web Site > Start Machine 6: Trey Research Federation Server Add FS proxy client authentication certificate to Federation Server policy 1 Open Console 1 on the desktop 2 Choose Certification Authority/Trey Certificate Server/Issued Certificates 3 In the center pane double click on the issued certificate that used the Trey Proxy Client Auth certificate template to open it 4 On the Details tab c hoose the Copy to file button to start the Certificate Export 
Wizard 5 On the Export File Format page leave the default setting 6 On the File to Export page choose Browse 7 Choose Desktop 8 In the File name field enter trey proxy client auth public 9 Choose Save > Next > Finish > OK > OK to save trey proxy client auth public.cer to the desktop 10 Choose Start > Administrative Tools > Active Directory Federation Services 11 Right click on Trust Policy under Federation Service and choose Properties 12 On the FSP Certificates tab choose Add 13 Choose the trey proxy client auth public.cer file from the desktop 14 Choose Open 15 Choose OK

Modify firewall settings

You can now reduce the scope of allowed inbound connections to the federation server to just the web server and FS proxy, which in this case happens to be the same machine. Other client requests will be handled by the proxy. 1 In the Amazon EC2 Console choose Security Groups in the left navigation bar 2 Choose the Trey Federation Server row to display its current settings 3 In the lower pane choose the Remove button next to the current HTTPS setting 4 Add the following setting and choose Save: Connection Method HTTPS, Protocol TCP, From Port 443, To Port 443, Source (IP or Group) Trey web server external IP/32* (*This is the Elastic IP address for the Trey Research Web Server from Line 13 of the Important values worksheet)

Machine 3: Adatum web server

Edit DNS address for Trey Research Federation Server in internet DNS 1 Choose Start > Administrative Tools > DNS 2 Under Forward Lookup Zones choose treyresearch.net 3 In the right hand pane right click on the record for fs1 and choose Properties 4 In the IP address field enter the Elastic IP address for the Trey Research Web Server from Line 13 of the Important values worksheet 5 Choose OK This redirects all inbound client traffic to the proxy instead of the federation server

Machine 1: Adatum internal server

Clear DNS cache 1 Choose Start > Administrative Tools > DNS 2 Choose FS1 in the left navigation area 3 In the Action menu select Clear Cache to ensure that the new DNS record for fs1.treyresearch.net (pointing to the FS proxy) is used instead of the previous entry

Machine 2: Domain joined client

Clear Internet Explorer DNS cache 1 Choose Start 2 In the search field enter cmd and press Enter to open a command prompt 3 At the prompt enter ipconfig /flushdns to make sure Internet Explorer uses the new DNS listing for fs1.treyresearch.net

Test

1 Before testing on either the domain joined or external client you should clear browser cookies to reinitiate the complete federation process 2 In Internet Explorer choose Tools > Internet Options 3 On the General tab under Browsing history choose the Delete button 4 Make sure the box next to Cookies is checked 5 Choose Delete

To test, open Internet Explorer on the domain joined client, enter https://adfsv1app.treyresearch.net in the address bar, and press Enter. The home realm discovery page and all security token requests and responses will be handled in this scenario by the Trey Research FS proxy, which allows the federation server to scope down its inbound access to just communication from the proxy and web servers. You can also test with the External Client; run ipconfig /flushdns to make sure IE uses DNS properly.

Scenario 5: corporate application accessed internally (AD FS 2.0)

This scenario is the same
as Scenario 1 but using different software We will install the beta release of AD FS 20 (formerly known as “Geneva” Server) and use it as our security token issuer On the application side we will use the recently released Windows Identity Foundation (formerly known as “Geneva” Framework) on the web server and use it to support our claims aware application Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 81 These updated components of Microsoft’s claims based application access model represent a substantial upgrad e in capability and flexibility over AD FS v1 To learn more about these improvements visit the “Geneva” site on Microsoft Connect The scenario adds one additional computer to the lab 1 Adatum Federation Server (AD FS 20) This local machine will create security tokens for users to give the federation application Since Adatum already has a domain controller we will leverage that existing deployment In total this machine will run: a Internet Information Services 7 (web server) b Microsoft NET Framework 35 c Active Directory Federation Services 20 (Adatum identity provider) The AD FS 20 federation server (currently in beta) is available as a download from Microsoft here Supported operating systems are Windows Server 2008 Service Pack 2 and Windows Server 2008 R2 This lab used the trial Windows Server 2008 R2 Ente rprise Edition Hyper V image which is available for download here In addition this scenario installs the Windows Identity Foun dation (WIF) onto the Adatum Web Server or Machine 3 The following components are added: a Windows Identity Foundation (NET libraries for claims aware applications) b WIF SDK with sample applications The NET Framework 35 a required component is already installed on the EC2 base Windows machine images Windows Identity Foundation (released November 2009) is available as a download from Microsoft Supported operating systems are Windows Server 2003 Service Pack 2 Windows Server 2008 Service Pack 2 Window s Server 2008 R2 Windows Vista and Windows 7 Amazon EC2 currently offers Windows Server 2003 R2 Service Pack 2 and Windows Server 2008 Service Pack 2 as guest operating systems This lab uses our existing Adatum Web Server which is running Windows Serve r 2008 Service Pack 2 Therefore our download locations are here for the runtime and here for the SDK Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 82 Configuration Machine 1: Adatum Internal Server Modify AD CS certificate template permissions 1 Open Console 1 from the desktop 2 Choose Certificate Templates in the left navigation area 3 In the center pane right click on the Web Server certificate template and choose Properties 4 On the Security tab c hoose Add 5 In the object names text box enter Domain Computers 6 Choose Check Names 7 Once verified c hoose OK 8 Back in the Security tab highlight the Domain Computers list item 9 In the Allow column check the Read and Enroll permissions 10 Choose OK Machine 8: Adatum Federation Server (AD FS 20) The configuration steps listed below are targeted to Windows Server 2008 R2 If using a different version of Windows Server use these steps as a guideline only Initial install Install Windows Server 2008 R2 on your server computer or virtual machine If you use the Windows Server 2008 R2 trial VHD for both the domain controller and a member server on the same network those machines will have the same security identifier (SID) potentially causing domain related 
issues later To defend against this run Sysprep on the second VHD instance as fol lows: 1 Navigate to the c:\Windows\System32\sysprep folder and double click on sysprepexe to open the System Preparation Tool 2 In the System Cleanup Action dropdown leave Enter System Out ofBox Experience selected 3 In the Shutdown Options dropdown box select Reboot Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 83 4 Choose OK 5 Accept the defaults through the rest of the process Configure networking This computer requires inbound internet connectivity through a static external IP address through port 443 to allow the EC2 based web ser ver to communicate with the AD FS federation server Contact your network administrator to request a static IP address and to open port 443 on the external IP address 1 In the Initial Configuration Tasks window c hoose Configure networking 2 Rightclick on the Local Area Connection and choose Properties 3 Double click on the Internet Protocol Version 4 list item to open TCP/IPv4 Properties 4 On the General tab c hoose the radio button to Use the following DNS server address 5 In the Preferred DNS server field enter the static domain IP address of the Adatum Internal Server from Line 3 of the Important values worksheet 6 Choose OK twice 7 In Initial Configuration Tasks choose Provide computer name and dom ain 8 Choose Change 9 Enter fs2 in the computer name field 10 In the Member of area c hoose the radio button for Member of Domain 11 In the Domain text box enter CORP 12 Choose OK 13 Enter the Adatum domain administrator user name and password from Line 2 of the Important values worksheet 14 Choose OK 15 Follow prompts to restart computer 16 Log back in to the machine with the CORP \administrator account using the password from Line 2 of the Import ant values worksheet Optional Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 84 17 Turn off the Internet Explorer Enhanced Security Configuration for administrators 18 In Server Manager on the Server Summary page under Security Information choose Configure IE ESC 19 Under Administrators choose the Off radio button 20 Choose OK Identify external IP addresses Identify your external IP address You can ask your network administrator or an alternative is to visit http://wwwwhatismyipcom Record your Adatum Federation Server (AD FS 20) external IP address on Line 14 of the Important values worksheet Create server authentication certificate 1 Choose Start > Run 2 In the Run box enter mmc and c hoose OK to start the Microsoft Management Console 3 In the File menu choose Add/Remove Snap in 4 Highlight the Certificates snap in and c hoose the Add button 5 Choose computer account and local computer in the pages that follow 6 Choose OK 7 Choose File > Save and save the new MMC console (Console 1) to the machine desktop for future use 8 In Console 1 right click on Certificates (Local Computer)/Personal and choose All Tasks > Request New Certificate 9 In the Certificate Enrollment Wizard choose Next twice 10 Choose the link under Web Server If the Web Server template isn’t yet showing you can speed the process by restarting the Active Directory Certificate Services service on the Adatum Internal Server 11 In Certificate Properties make the following changes: a On the Subjec t tab in the Subject Name area choose on the Type dropdown and choose Common name Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises 
Windows Domain 85 b In the Value field enter fs2corpadatumcom c Choose Add d On the General tab in the Friendly name text box enter adatum fs2 ssl e Choose OK 12 In the Certificate Enrollment window check the box next to Web Server 13 Choos e the Enroll button 14 Choose Finish 15 In Console 1 check for the new certificate with friendly name “ adatum fs2 ssl ” in Certificates (Local Computer)/Personal/Certificates Create AD FS token signing certificate 1 In Console 1 right click on Certificates (Local Computer)/Personal and choose All Tasks > Request New Certificate 2 In the Certificate Enrollment Wizard choose Next twice 3 Choose the link under Web Server 4 In Certificate Properties make the following changes: a On the Subject tab in the Subject Name area choose the Type dropdown list and choose Common name b In the Value field enter Adatum Token Signing Cert3 c Choose Add d On the General tab in the Friendly name text box enter adatum ts3 e Choose OK 5 In the Certificate Enrollment window check the box next to Web Server 6 Choose the Enroll button 7 Choose Finish 8 In Console 1 check for the new certificate with friendly name adatum ts3 in Certificates (Local Computer)/Personal/Certificates Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 86 Modify read permission to token signing private key AD FS 20 runs using the Network Service account which needs access to the token signing certificate private key in order to use it for signing security tokens and federa tion metadata 1 Go to Certificates (Local Computer)/Personal/Certificates and select All Tasks > Manage Private Keys 2 Choose Add 3 In the object text box enter Network Service 4 Choose Check Names 5 Once verified c hoose OK twice Install AD FS 20 1 Downlo ad the AD FS 20 installation media from here and save to your machine 2 Run the saved file to start the AD FS 20 Installation Wizard The AD FS 20 installer automatically installs NET Framework 35 and IIS 75 in Windows Server 2008 R2 3 When the wizard completes c hoose Finish to automatically start the AD FS 20 Management Console 4 In the AD FS 20 Management Console choose the link in the center pane to launch the AD FS 20 Federation Server Configuration Wizard 5 On the Welcome page leave the default to Create a new Federat ion Service 6 On the Select Stand Alone or Farm Deployment page choose Stand alone federation server 7 In the Specify the Federation Service Name page in the SSL certificate dropdown choose adatum fs2 ssl 8 Choose Next twice to begin the configuration p rocess 9 Choose Close Add token signing certificate in AD FS 1 Choose Start > Administrative Tools > Windows PowerShell Modules 2 At the PowerShell command prompt enter the following and press Enter : Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 87 SetADFSProperties AutoCertificateRollover $false This will disable the automatic certificate rollover feature in AD FS a prerequisite to adding a token signing certificate Leave PowerShell open for later use 3 In the AD FS 20 Management Console choose AD FS 20/Service/C ertificates in the left navigation area 4 In the right hand pane under Actions choose the link to Add Token Signing Certificate 5 In the new window select the adatum ts3 certificate 6 Choose OK 7 Back in the center pane of AD FS 20 Management in the Tokensigning section right click on Adatum Token Signing Cert3 and choose Set as Primary 8 Choose Yes 9 Right click on the other listed token signing 
certificate ( CN=ADFS Signing… ) and choose Delete 10 In the PowerShell command window at the command prompt enter the following and press Enter : SetADFSProperties AutoCertificateRollover $true Machine 3: Adatum web server Add record for Adatum federation server (AD FS 20) to hosts file This web server will access the Adatum Federation Server (AD FS 20) to automatically get federation trust policy data This data could be manually exchanged thus eliminating the need for the web server and federation server to communicate directly and eliminating the need for inbound HTTPS connectivity to the federation server However the approach used here allows for automated periodic updating of trust policy information 1 Double click the shortcut on the desktop for the hosts file 2 Choose Notepad as the program Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 88 3 Choose OK 4 Add th e name and external IP address of the Adatum Federation Server (AD FS 20) from Line 14 of the Important values worksheet as shown in the following example: 12345678910 fs2corpadatumcom 5 Save and close the file Create wildcard server authentication certificate As in Scenario 4 this web server will now use a wildcard SSL server authentication certificate and host headers to allow secure access to the Adatum AD FS v1 and AD FS 20 apps simultaneously 1 Open Intern et Explorer and go to https://fs1corpadatumcom/certsrv/ 2 At the login prompt log in as administrator with the password from on Line 2 of the Important values worksheet to reach the Active Directory Certificate Services home pa ge 3 Choose the link to Request a certificate 4 Choose the link for advanced certificate request 5 Choose the link to Create and submit a request to this CA 6 On the Advanced Certificate Request page in the Certificate Template dropdown choose Extranet Web Server 7 In the Identifying Information section in the Name field enter *adatumcom and leave the other fields blank 8 In the Additional Options section in the Friendly Name field enter adatum wild ssl 9 Choose Submit 10 Choose Yes to comp lete the request process; the certificate will be issued automatically 11 Choose the link to Install this certificate 12 Choose Yes on the warning dialog 13 In Console 1 choose Certificates (Current User)/Personal/Certificates Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 89 • The certificate for *adatumcom should be in the right hand pane • Leave the AD CS web application open for use in upcoming steps Move wildcard certificate to local computer certificate store 1 In Console 1 right click on the *adatumcom certificate and choose All Tasks > Export to launch the Certificate Export Wizard 2 On the Export Private Key page choose Yes export the private key 3 On the Export File Format page leave the default setting 4 Provide a password 5 On the File to Export page choose Browse 6 Choose Desktop 7 In the File name field enter adatum wild ssl 8 Choose Save > Next > Finish > OK to complete the export process 9 In Console 1 right click on Certificates (Local Computer)/Personal and choose All Tasks > Import to launch the Certificate Import W izard 10 On the File to Import page c hoose Browse and find adatum wild sslpfx on the desktop 11 Choose Open 12 Choose Next 13 Enter the password 14 Choose Next > Next > Finish > OK to complete the import process Install Windows Identity Foundation runtime and SDK 1 Download the Windows Identity Foundation runtime here 2 
Make sure to pick the media with the words Windows60 in the title 3 In the Download Complete window choose Open to start the installation 4 When the wizard completes c hoose Close 5 Download the Windows Identity Foundation SDK here 6 In the Download Complete window choose Run twice to start the Setup Wizard Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 90 7 Accept all of the defaults in the wizard 8 Choose Finish Add AD FS 20 S ample application to IIS You will use a sample application installed on the machine with the WIF SDK 1 Choose Start > Administrative Tools > Internet Information Services (IIS) Manager 2 Right click on the Sites folder in the left navigation area and choose Add Web Site 3 In the Site name field enter ADFSv2 app 4 In the Content Directory section c hoose the button to the right of the Physical path field and browse to c:\Program Files \Windows Identity Foundation SDK \v35\Samples\Quick Start \Using ManagedSTS \ClaimsAwareWebAppWithManagedSTS 5 Choose OK 6 In the Binding section in the Type dropdown choose https 7 In the SSL certificate dropdow n choose adatum wild ssl 8 Choose OK 9 Choose Yes This will automatically assign adatum wild ssl to both the ADVSv1 and ADFSv2 applications Configure server bindings for SSL host headers 1 Choose Start > Run 2 In the Run box enter cmd and c hoose OK to open a command prompt 3 Change the directory to c:\windows\system32\inetsrv 4 At the command prompt enter the following and press Enter : appcmd set site /sitena me:“ADFSv1 app” /+bindings[protocol=’https’bindingInformation=’*:443:adfsv1app adatumcom’] You should see the following response: Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 91 SITE object “Default Web Site” changed 5 Enter the following and press Enter : appcmd set site /sitename:“ADFSv1 app” / bindings[protocol=’https’bindingInformation=’*:443:’] 6 Enter the following and press Enter : appcmd set site /sitename:“ADFSv2 app” /+bindings[protocol=’https’bindingInformation=’*:443:adfsv2app adatumcom’] 7 Enter the following and press Enter : appcmd set site /sitename:“ADFSv2 app” / bindings[protocol=’https’bindingInformation=’*:443:’] 8 In Internet Explorer in the Sites folder right click on ADFSv2 app and choose Manage Web Site > Start Add record for AD FS 20 sample application in internet DNS 1 Choose Start > Administrative Tools > DNS 2 Right click on <Machine name>/Forward Lookup Zones/adatum com and choose New Host (A or AAAA) 3 In the New Host Name field enter adfsv2app 4 In the IP address field enter the Elastic IP address for the Adatum Web Server from Line 8 of the Important values worksheet 5 Choose Add Host > OK > Done Run Windows Identity Foundation Federation utility This tool automatically modifies an application’s webconfig file to support claims It can be run standalone (as we’re doing here) or launched from inside Visual Studio 1 Choose Start > Administrative Tools > Windows Identity Foundation Federation Utility to launch the Federation Utility Wizard Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 92 2 On the Welcome page in the Application configuration location section choose Browse and navigate to c:\Program Files \Windows Identity Foundation SDK \v35\Samples\Quick Start \Using Managed STS\ClaimsAwareWebAppWithManagedSTS/webconfig 3 Choose Open 4 In the Application URI field enter https://adfsv2appadatumcom/ 5 Choose Next 6 On the Security 
Token Service page choose Use an existing STS 7 In the STS WS Federation metadata document location field enter https://fs2corpadatumcom/FederationMetadata/2007 06/FederationMetadataxml 8 Choose Test Location 9 Once you see the xml file choose Next 10 On the Security Token Encryption page leave the default No encryption setting 11 Choose Next > Next > Finish > OK Machine 8: Adatum Federation Server (AD FS 20) Add sample application as a relying party trust 1 Click Start > Administrative Tools > AD FS 20 Management 2 In the center pane c hoose the link to Add a trusted relying party to start the Add Rel ying Party Trust Wizard 3 On the Select Data Source page in the federation metadata address field enter https://adfsv2appadatumcom/FederationMetadata/2007 06/FederationMetadataxml 4 Choose Next 5 Choose Next > Next > Next > Close to complete the wizard and automatically open the Edit Claim Rules window 6 On the Issuance Transform Rules tab c hoose Add Rule to start the Add Transform Claim Rule Wizard Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 93 7 On the Choose Rule Type page leave the default Send LDAP Attributes as Claims selected and c hoose Next 8 On the Configure Claim Rule page in the Claim rule name field enter Rule1 9 In the Attribute store dropdown choose Active Directory 10 In the LDAP Attribute dropdown choose Display Name 11 In the adjoining Outgoing Claim Type dropdown choose Name 12 Choose Finish 13 On the Issuance Transform Rules tab c hoose Add Rule again 14 On the Choose Rule Type page choose Send Group Membership as a Claim 15 Choose Next 16 On the Configure Claim Rule page in the Claim rule name field enter Rule2 17 Choose the Browse button 18 In the object name text box enter Managers 19 Choose Check Names 20 Once verified c hoose OK 21 In the Outgoing Claim Type dropdown choose Role 22 In the Outgoing claim value field enter PriorityUsers 23 Choose Finish 24 Choose OK Configure firewall settings 1 Choose Start > Administrative Tools > Windows Firewall with Advanced Security 2 Choose Inbound Rules in the left navigation area 3 In the right hand pan e under Actions choose Filter by Group and select Filter by Secure World Wide Web Services (HTTPS) 4 In the center pane right click on the World Wide Web Services (HTTPS Traffic In) rule and choose Properties Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 94 5 In the Properties dialog box c hoose the Scope tab 6 In the Remote IP address section c hoose the radio button next to These IP addresses 7 Choose Add 8 In the IP Address window in the This IP address or subnet field enter the Elastic IP address of the Adatum Web Server from Line 8 of the Important values worksheet 9 Choose OK 10 Choose Add again 11 Enter the internal IP address of the domain joined client from Line 6 of the Important values worksheet 12 Choose OK twice In AD FS 20 the FS proxy se rver (which is not being used here) handles more functionality than in AD FS v1 In addition to the prior capability of handling external client token requests the server can now also be a proxy for web servers requesting trust policy information This al lows administrators to scope down internet traffic inbound to the federation server to only the FS proxy and not include individual web servers (as we have done above) Machine 2: Domain joined client Add Adatum Federation Server (AD FS 20) URL to intran et zone in group policy 1 Click Start > Administrative Tools > 
Group Policy Management 2 Right click on Forest:corpadatumcom/Domains/corpadatumcom/Default Domain Policy and choose Edit 3 Choose User Configuration/Policies/Windows Settings/Internet Explorer Maintenance/Security 4 In the left hand pane right click on Security Zones and Content Ratings and choose Properties 5 In the Security Zones and Privacy section c hoose the radio button next to Import the current security zones and privacy settings 6 Choose Continue Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 95 7 Choose Modify Settings 8 In the Internet Properties window on the Security tab highlight the Local Intranet zone and c hoose the Sites button 9 Choose Advanced 10 in the Add this webs ite to the zone text box enter https://fs2corpadatumcom 11 Choose Add 12 Choose Close 13 Choose OK twice Update group policy settings 1 Choose Start 2 In the search field enter cmd and press Enter to open a command prompt 3 At the prompt enter gpupdate /force to ensure the IE Intranet Zone is updated on the client machine Test • To test the scenario open Internet Explorer in the domain joined client enter https://adfsv2appadatumcom in the address bar and press Enter You should be presented with access to the WIF sample claims aware application hosted on EC2 without being asked for a password Note the claims that were passed to the application including the PriorityUsers claim that was based on Active Directory group membership If you are running into errors it’s possible that you are having certificate verification issues See Appendix B for more information Appendix A: Sample federated application files 1 Start Notepad 2 Copy/paste th is entire Appendix into a new text file 3 Download the text file to the desktop of your EC2 based Adatum Web Server A webbased storage service such as OneDrive can be useful here Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 96 4 On the Adatum Web Server open the text file in Notepad **DEFAULTASPX** 1 Copy the following section to the clipboard: <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Defaultaspxcs" Inherits="_Default" %> <%@ OutputCache Location="None" %> <!DOCTYPE html PUBLIC "//W3C//DTD XHTML 11//EN" "http://wwww3org/TR/xhtml11/DTD/xhtml11dtd"> <html xmlns="ht tp://wwww3org/1999/xhtml" > <head> <meta httpequiv="Content Language" content="en us"> <meta httpequiv="Content Type" content="text/html; charset=windows 1252"> <title> Claimsaware Sample Application</title> <style> <! 
pagetitle { fontfamily: Verdana; fontsize: 18pt; fontweight: bold;} propertyTable td { border: 1px solid; padding: 0px 4px 0px 4px} propertyTable th { border: 1px solid; padding: 0px 4px 0px 4px; fontweight: bold; background color: #cccccc ; textalign: left } propertyTable { border collapse: collapse;} tdl{ width: 200px } trs{ background color: #eeeeee } banner { margin bottom: 18px } propertyH ead { margin top: 18px; font size: 12pt; font family: Arial; font weight: bold; margintop: 18} abbrev { color: #0066FF; fontstyle: italic } </style> </head> <body> <form ID="Form1" runat=server> <div class=banner> <div class=pagetitle>Adatum SSO Sample (ADFSv1)</div> [ <asp:HyperLink ID=SignOutUrl runat=server>Sign Out</asp:HyperLink> | <a Step by Step: Single Sign on to Amazon EC 2Based NET Applications from an On Premises Windows Domain 97 href="<%=ContextRequestUrlGetLeftPart(UriPartialPath)%>">Refres h without viewstate data</a>] </div> <div class=propertyHead>Page Information</div> <div style="padding left: 10px; paddingtop: 10px"> <asp:Table runat=server ID=PageTable CssClass=propertyTable> <asp:TableHeaderRow> <asp:TableHeaderCell>Name</asp:TableHeaderCell> <asp:TableHeaderCell>Value</asp:TableHeaderCell> <asp:TableHeaderCell>Type</asp:TableHeaderCell> </asp:TableHeaderRow> </asp:Table> </div> <div class=propertyHead>UserIdentity</div> <div style="padding left: 10px; paddingtop: 10px"> <asp:Table CssClass="propertyTable" ID=IdentityTable runat=server> <asp:Tabl eHeaderRow> <asp:TableHeaderCell>Name</asp:TableHeaderCell> <asp:TableHeaderCell>Value</asp:TableHeaderCell> <asp:TableHeaderCell>Type</asp:TableHeaderCell> </asp:TableHeaderRow> </asp:Table> </div> <div class=propertyHead>(IIdentity) UserIdentity</div> <div style="padding left: 10px; paddingtop: 10px"> <asp:Table CssClass="propertyTable" ID=BaseIdentityTable runat=server> <asp:TableHeaderRow> <asp:TableHeaderCell>Name</asp:TableHeaderCell> <asp:TableHeaderCell>Value</asp:TableHeaderC ell> <asp:TableHeaderCell>Type</asp:TableHeaderCell> </asp:TableHeaderRow> </asp:Table> </div> <div class=propertyHead>(SingleSignOnIdentity)UserIdentity</div> <div style="padding left: 10px; paddingtop: 10px"> <asp:Table CssClass=" propertyTable" ID=SSOIdentityTable runat=server> <asp:TableHeaderRow> <asp:TableHeaderCell>Name</asp:TableHeaderCell> <asp:TableHeaderCell>Value</asp:TableHeaderCell> Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 98 <asp:TableHeaderCell>Type</asp:TableHeaderCell> </asp:TableHeaderRow> </asp:Table> </div> <div class=propertyHead>SingleSignOnIdentitySecurityPropertyCollection< /div> <div style="padding left: 10px; paddingtop: 10px"> <asp:Table CssClass="propertyTable" ID=SecurityPropertyTable runat=server> <asp:TableHeaderRow> <asp:TableHeaderCell>Uri</as p:TableHeaderCell> <asp:TableHeaderCell>Claim Type</asp:TableHeaderCell> <asp:TableHeaderCell>Claim Value</asp:TableHeaderCell> </asp:TableHeaderRow> </asp:Table> </div> <div class=propertyHead>(IPrincipal)UserIsInRole()</div> <div style="padding left: 10px; paddingtop: 10px"> <asp:Table CssClass="propertyTable" ID=RolesTable runat=server> </asp:Table> <div style="padding top: 10px"> <table> <tr><td>Roles to check (semicolon separated):</td></tr> <tr><td><asp:TextBox ID=Roles Columns=55 runat=server/></td><td align=right><asp:Button UseSubmitBehavior=true ID=GetRoles runat=server Text="Check Roles" OnClick="GoGetRoles"/></td></tr> </table> </div> </div> </form> </body> </html> 2 On the desktop right click and choose New > 
Text Document 3 Double click on the file to open it then paste the clipboard contents into the file 4 Choose File > Save As Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 99 5 In the Save as Type dropdow n choose All Files and save the file as defaultaspx in the c:\inetpub\adfsv1app directory Saving directly into this folder (as opposed to drag anddrop from the desktop for example) will ensure that web friendly ACLs are set on the files **WEBCONFIG** 1 Copy the following section to the clipboard: <?xml version="10" encoding="utf 8" ?> <configuration> <configSections> <sectionGroup name="systemweb"> <section name="websso" type="SystemWebSecuritySingleSignOnWebSsoConfigurat ionHandler SystemWebSecuritySingleSignOn Version=1000 Culture=neutral PublicKeyTok en=31bf3856ad364e35 Custom=null" /> </sectionGroup> </configSections> <systemweb> <sessionState mode="Off" /> <compilation defaultLanguage="c#" debug="true"> <assemblies> <add assembly="SystemWebSecuritySingleSignOn Version=1000 Culture=neutral PublicKeyToken=31bf3856ad364e35 Custom=null"/> <add assembly="SystemWebSecuritySingleSignOnClaimTransforms Version=1000 Culture=neutral PublicKeyToken=31bf3856ad364e35 Custom=null"/> </assemblies> </compilation > <customErrors mode="Off"/> <authentication mode="None" /> <httpModules> <add Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 100 name="Identity Federation Services Application Authentication Module" type="SystemWebSecuritySingleSignOnWebSsoAuthenticatio nModule SystemWebSecuritySingleSignOn Version=1000 Culture=neutral PublicKeyToken=31bf3856ad364e35 Custom=null" /> </httpModules> <websso> <authenticationrequired /> <eventloglevel>55</eventloglevel> <auditsuccess>2</auditsuccess> <urls> <returnurl>https://adfsv1appadatumcom/</returnurl> </urls> <cookies writecookies="true"> <path>/</path> <lifetime>240</lifetime> </cookies> <fs>https://fs1corpadatumcom/adfs/fs/federationserverserviceasm x</fs> </websso> </systemweb> <systemdiagnosti cs> <switches> <add name="WebSsoDebugLevel" value="255" /> <! 
Change to 255 to enable full debug logging > </switches> <trace autoflush="true" indentsize="3"> <listeners> <add name="LSLogListener" type="SystemWebSecuritySingleSignOnBoundedSizeLogFileTraceListe ner SystemWebSecuritySingleSignOn Version=1000 Culture=neutral PublicKeyToken=31bf3856ad364e35 Custom=null" initializeData="c: \ADFS_app_logs \adfsv1applog" /> </listeners> </trace> </systemdiagnostics> </configuration> 1 On the desktop double click on the New Text Document then paste the clipboard contents into the file Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 101 2 Choose File > Save As 3 In the Save as Type dropdown choose All Files and save the file as webconfig in the c:\inetpub\adfsv1app directory **DEFAULTASPXCS** 1 Copy the following section to the clipboard: using System; using SystemData; using SystemCollectionsGeneric; using SystemConfiguration; using SystemReflection; using SystemWeb; using SystemWebSecurity; using SystemWebUI; using SystemWebUIWebControls; using SystemWebUIWebControlsWebParts; using SystemWebUIHtmlControls; using SystemSecurity; using SystemSecurityPrincipal; using SystemWebSecuritySingleSignOn; using SystemWebSecuritySingleSignOnAuthorization; public partial class _Default : SystemWebUIPage { const string NullValue = "<span class= \"abbrev\" title= \"Null Reference or not applicable \"><b>null</b></span>" static Dictionary<string string> s_abbreviationMap; static _Default() { s_abbreviationMap = new Dictionary<string string>(); // // Add any abbreviations here Make sure that prefixes of // replacements occur *after* the longer replacement key // s_abbreviationMapAdd("SystemWebSecuritySingleSignOnAutho rization" "SSOAuth"); s_abbreviationMapAdd("SystemWebSecuritySingleSignOn" "SSO"); s_abbreviationMapAdd("System" "S"); } protected void Page_Load(object sender EventArgs e) { SingleSignOnIdentity ssoId = UserIdentity as Step by Step: Single Sign on to Amazon EC2 Based NET A pplications from an On Premises Windows Domain 102 SingleSignOnIdentity; // // Get some property tables initialized // PagePropertyLoad(); IdentityLoad(); BaseIdentityLoad(); SSOIdentityLoad(ssoId); SecurityPropertyTableLoad(ssoId); // // Filling in the roles table // requires a peek at the viewstate // since we have a text box driving this // if (!IsPostBack) { UpdateRolesTable(new string[] { }); } else { GoGetRoles(null null); } // // Get the right links for SSO // if (ssoId == null) { SignOutUrlText = "Single Sign On isn't installed"; SignOutUrlEnabled = false; } else { if (ssoIdIsAuthenticated == false) { SignOutUrlText = "Sign In (you aren't authenticated)"; SignOutUrlNavigateUrl = ssoIdSignInUrl; } else SignOutUrlNavigateUrl = ssoIdSignOutUrl; } Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 103 } void SecurityPropertyTableLoad(SingleSignOnIdentity ssoId) { Table t = SecurityPropertyTable; if (ssoId == null) { AddNullValueRow(t); return; } // // Go through each of the security properties provided // bool alternating = false; foreach (SecurityProperty securityProperty in ssoIdSecurityPropertyCollection) { tRowsAdd(CreateRow(securityPropertyUri securityPropertyName securityPropertyValue alternating)); alternating = !alternating; } } void UpdateRolesTable(string[] roles) { Table t = RolesTable; tRowsClear(); bool alternating = false; foreach (string s in roles) { string role = sTrim(); tRowsAdd(CreatePropertyRow(role UserIsInRole(role) alternating)); alternating = !alternating; } } 
void IdentityLoad() { Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 104 Table propertyTable = IdentityTable; if (UserIdentity == null) { AddNullValueRow(propertyTable); } else { propertyTableRowsAdd(CreatePropertyRow("Type name" UserIdentityGetType()FullName)); } } void SSOIdentityLoad(SingleSignOnIdentity ssoId) { Table propertyTable = SSOIdentityTable; if (ssoId != null) { PropertyInfo[] props = ssoIdGetType()GetProperties(BindingFlagsInstance | BindingFlagsPublic | BindingFlagsDeclaredOnly); AddPropertyRows(propertyTable ssoId props); } else { AddNullValueRow(propertyTable); } } void PagePropertyLoad() { Table propertyTable = PageTable; string leftSidePath = RequestUrlGetLeftPart(UriPartialPath); propertyTableRowsAdd(CreatePropertyRow("Simplified Path" leftSidePath)); } void BaseIdentityLoad() { Table propertyTable = BaseIdentityTable; Step by Step: Single Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 105 IIdentity identity = UserIdentity; if (identity != null) { PropertyInfo[] props = typeof(IIdentity)GetProperties(BindingFlagsInstance | BindingFlagsPublic | BindingFlagsDeclaredOnly); AddPropertyRows(propertyTable identity props); } else { AddNullValueRow(propertyTable); } } void AddNullValueRow(Table table) { TableCell cell = new TableCell(); cellText = NullValue; TableRow row = new TableRow(); rowCssClass = "s"; rowCellsAdd(cell); tableRowsClear(); tableRowsAdd(row); } void AddPropertyRows(Table propertyTable object obj PropertyInfo[] props) { bool alternating = false; foreach (PropertyInfo p in props) { string name = pName; object val = pGetValue(obj null); propertyTableRowsAdd(CreatePropertyRow(name val alternating)); alternating = !alternating; } Step by Step: S ingle Sign on to Amazon EC2 Based NET Applications from an On Premises Windows Domain 106 } TableRow CreatePropertyRow(string propertyName object propertyValue) { return CreatePropertyRow(propertyName propertyValue false); } TableRow CreatePropertyRow(string propertyName object value bool alternating) { if (value == null) return CreateRow(propertyName null null alternating); else return CreateRow(propertyName valueToString() valueGetType()FullName alternating); } TableRow CreateRow(string s1 string s2 string s3 bool alternating) { TableCell first = new TableCell(); firstCssClass = "l"; firstText = Abbreviate(s1); TableCell second = new TableCell(); secondText = Abbreviate(s2); TableCell third = new TableCell(); thirdText = Abbreviate(s3); TableRow row = new TableRow(); if (alternating) rowCssClass = "s"; rowCellsAdd(first); rowCellsAdd(second); rowCellsAdd(third); return row; } private string Abbreviate(string s) { if (s == null) return NullValue; Step by Step: Single Sign on to Amazon EC2 Based NET Applications fr om an On Premises Windows Domain 107 string retVal = s; foreach (KeyValuePair<string string> pair in s_abbreviationMap) { // // We only get one replacement per abbreviation call // First one wins // if (retValIndexOf(pairKey) != 1) { string replacedValue = stringFormat("<span class=\"abbrev\" title=\"{0}\">{1}</span>" pairKey pairValue); retVal = retValReplace(pairKey replacedValue); break; } } return retVal; } // // ASPNET server side callback // protected void GoGetRoles(object sender EventArgs ea) { string[] roles = RolesTextSplit(';'); UpdateRolesTable(roles); } } 2 On the desktop double click on the New Text Document then paste the clipboard contents into the file 3 Choose File > Save As 4 In the Save as Type dropdown 
choose All Files and save the file as default.aspx.cs in the c:\inetpub\adfsv1app directory

Appendix B: Certificate verification troubleshooting

In this lab the most common reasons for errors have to do with checking the certificate revocation list (CRL) for the Adatum certificate authority (CA) to verify that the AD FS token signing certificate has not been revoked. There are a number of ways that CRL checking can break, leading to testing errors:

• If the Adatum Internal Server (which hosts our Adatum CA) is a Hyper-V image and in a Saved state at the time it is supposed to issue a CRL or Delta CRL, it will not automatically issue the skipped CRL file upon being restored to a Running state. The old, expired CRL file will not be replaced and CRL checking will fail. This can be fixed by going to Start > Administrative Tools > Services and restarting the Active Directory Certificate Services service.

• If the Adatum FS Proxy (which hosts our Adatum CRL files starting in Scenario 2) is in a Stopped (Amazon EC2) or Saved (Hyper-V) state when a new CRL file is issued by the Adatum CA, it will not receive the new CRL file. If a web server accesses the CRL website before it has been updated with the fresh CRL files, it will retrieve old CRL files that will break the test. However, the robocopy command used to copy the files reruns continuously every 30 seconds until it succeeds in transferring the files, meaning the fresh CRL files should be in place approximately two minutes after the Adatum FS Proxy is restored to a Running state.

• CRL files are cached on the web server(s) until they expire. If you cannot get the web server to properly perform the CRL check and the solutions above have not solved the problem, a way to "start over" is to delete the CRL cache on the web server. Do the following: a Log in to the Adatum Web Server or the Trey Research Web Server, whichever is the destination of your testing b Choose Start > Computer and click through to c:\Windows\ServiceProfiles\NetworkService c Choose the Organize dropdown and choose Folder and Search Options d On the View tab select the radio button next to Show Hidden Files and Folders and choose OK e Continue clicking through to c:\Windows\ServiceProfiles\NetworkService\AppData\LocalLow\Microsoft\CryptUrlCache f Delete all the content of both the Content and Metadata subfolders in the CryptUrlCache folder g Empty the Recycle Bin on the desktop h In IIS Manager in the left pane choose the connection to the local web server (IP a.b.c.d…) i In the right hand pane under Actions choose Restart

The easiest way to avoid these issues is to not put any of the machines in this lab into a Saved or Stopped state during your testing
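The CRL cache cleanup described above can also be scripted. The following PowerShell sketch automates steps b through i on the web server; it assumes the default Network Service profile path referenced in this appendix and uses only built-in cmdlets plus iisreset. Treat it as an illustration rather than a tested part of the lab, and run it from an elevated PowerShell prompt on the web server you are testing against.

# Clear the CRL files cached for the Network Service account, then restart IIS
# Path assumption: the default profile location referenced in Appendix B
$crlCache = 'C:\Windows\ServiceProfiles\NetworkService\AppData\LocalLow\Microsoft\CryptUrlCache'
foreach ($subfolder in 'Content', 'MetaData') {
    $path = Join-Path $crlCache $subfolder
    if (Test-Path $path) {
        # Delete the cached CRL and metadata files so the next validation fetches fresh copies
        Remove-Item -Path (Join-Path $path '*') -Force -ErrorAction SilentlyContinue
    }
}
# Restart the local web server so it rebuilds its certificate validation state
iisreset /restart

After running the script, retry the test; if validation still fails, confirm that the Adatum CA has issued a current CRL as described in the first two bullets above.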
|
General
|
consultant
|
Best Practices
|
Web_Application_Hosting_in_the_AWS_Cloud_Best_Practices
|
Web Application Hosting in the AWS Cloud First Published May 2010 Updated August 20 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents An overview of traditional web hosting 1 Web application hosting in the cloud using AWS 2 How AWS can solve common web application hosting issues 2 An AWS Cloud architecture for web hosting 4 Key components of an AWS web hosting architecture 6 Key considerations when using AWS for web hosting 16 Conclusion 18 Contributors 19 Further reading 19 Document versions 19 Abstract Traditional on premises web architectures require complex solutions and accurate reserved capacity forecast in order to ensure reliability Dense peak traffic periods and wild swings in traffic patterns result in low utilization rates of expensive hardware This yields high operating costs to maintain idle hardware and an inefficient use of capital for underused hardware Amazon Web Services (AWS) provides a reliable scalable secure and highly performing infrastructure for the most demanding web applic ations This infrastructure matches IT costs with customer traffic patterns in near real time This whitepaper is meant for IT Managers and System Architects who want to understand how to run traditional web architectures in the clou d to achieve elasticity scalability and reliabilityAmazon Web Services Web Appli cation Hosting in the AWS Cloud Page 1 An overview of traditional web hosting Scalable web hosting is a well known problem space The following image depicts a traditional web hosting architecture that implements a common three tier web application model In this model the architecture is separated into presentation application and persistence layers Scalability is provided by adding hosts at these layers The architecture also has built in performance failover and availability feature s The traditional web hosting architecture is easily ported to the AWS Cloud with only a few modifications A traditional web hosting architecture Amazon Web Services Web Application Hosting in the AWS Cloud Page 2 The following sections look at why and how such an architecture should be and could be deployed in the AW S Cloud Web application hosting in the cloud using AWS The first question you should ask concerns the value of moving a classic web application hosting solution into the AWS Cloud If you decide that the cloud is right for you you’ll need a suitable architecture This section helps you evaluate an AWS Cloud solution It compares deploying your web application in the cloud to an on premises deployment presents an AWS Cloud architecture for hosting your application and discusses the key components of the AWS Cloud Architecture solution How AWS can solve commo n web application hosting issues If you’re responsible for running a web application you could face a variety of 
infrastructure and architectural issues for which AWS can provide seamless and cost-effective solutions. The following are some of the benefits of using AWS over a traditional hosting model.

A cost-effective alternative to oversized fleets needed to handle peaks

In the traditional hosting model, you have to provision servers to handle peak capacity, and unused cycles are wasted outside of peak periods. Web applications hosted on AWS can leverage on-demand provisioning of additional servers, so you can constantly adjust capacity and costs to actual traffic patterns. For example, the following graph shows a web application with a usage peak from 9AM to 3PM and less usage for the remainder of the day. An automatic scaling approach based on actual traffic trends, which provisions resources only when needed, would result in less wasted capacity and a greater than 50 percent reduction in cost.

An example of wasted capacity in a classic hosting model

A scalable solution to handling unexpected traffic peaks

A more dire consequence of the slow provisioning associated with a traditional hosting model is the inability to respond in time to unexpected traffic spikes. There are a number of stories about web applications becoming unavailable because of an unexpected spike in traffic after the site is mentioned in popular media. In the AWS Cloud, the same on-demand capability that helps web applications scale to match regular traffic spikes can also handle an unexpected load. New hosts can be launched and ready in a matter of minutes, and they can be taken offline just as quickly when traffic returns to normal.

An on-demand solution for test, load, beta, and pre-production environments

The hardware costs of building and maintaining a traditional hosting environment for a production web application don't stop with the production fleet. Often you need to create pre-production, beta, and testing fleets to ensure the quality of the web application at each stage of the development lifecycle. While you can make various optimizations to ensure the highest possible use of this testing hardware, these parallel fleets are not always used optimally, and a lot of expensive hardware sits unused for long periods of time.

In the AWS Cloud you can provision testing fleets as and when you need them. This not only eliminates the need for pre-provisioning resources days or months before actual usage, but also gives you the flexibility to tear down the infrastructure components when you do not need them. Additionally, you can simulate user traffic on the AWS Cloud during load testing. You can also use these parallel fleets as a staging environment for a new production release, which enables a quick switchover from current production to a new application version with little or no service outage.

An AWS Cloud architecture for web hosting

The following figure provides another look at the classic web application architecture and how it can leverage the AWS Cloud computing infrastructure.

An example of a web hosting architecture on AWS

1 DNS services with Amazon Route 53 – Provides DNS services to simplify domain management
2 Edge caching with Amazon CloudFront – Edge caches high-volume content to decrease the latency to customers
3 Edge security for Amazon CloudFront with AWS WAF – Filters malicious traffic, including cross-site scripting (XSS) and SQL injection, via customer-defined rules
4 Load balancing with Elastic Load Balancing (ELB) – Enables you to spread load across multiple Availability Zones and AWS Auto Scaling groups for redundancy and decoupling of services
5 DDoS protection with AWS Shield – Safeguards your infrastructure against the most common network and transport layer DDoS attacks automatically
6 Firewalls with security groups – Moves security to the instance to provide a stateful, host-level firewall for both web and application servers
7 Caching with Amazon ElastiCache – Provides caching services with Redis or Memcached to remove load from the app and database and lower latency for frequent requests
8 Managed database with Amazon Relational Database Service (Amazon RDS) – Creates a highly available, multi-AZ database architecture with six possible DB engines
9 Static storage and backups with Amazon Simple Storage Service (Amazon S3) – Enables simple HTTP-based object storage for backups and static assets like images and video

Key components of an AWS web hosting architecture

The following sections outline some of the key components of a web hosting architecture deployed in the AWS Cloud and explain how they differ from a traditional web hosting architecture.

Network management

In the AWS Cloud, the ability to segment your network from that of other customers enables a more secure and scalable architecture. While security groups provide host-level security (see the Host security section), Amazon Virtual Private Cloud (Amazon VPC) enables you to launch resources in a logically isolated, virtual network that you define. Amazon VPC is a service that gives you full control over the details of your networking setup in AWS. Examples of this control include creating internet-facing subnets for web servers and private subnets with no internet access for your databases. Amazon VPC enables you to create hybrid architectures by using hardware virtual private networks (VPNs) and to use the AWS Cloud as an extension of your own data center. Amazon VPC also includes IPv6 support in addition to traditional IPv4 support for your network.

Content delivery

When your web traffic is geo-dispersed, it's not always feasible, and certainly not cost-effective, to replicate your entire infrastructure across the globe. A Content Delivery Network (CDN) provides the ability to use its global network of edge locations to deliver a cached copy of web content such as videos, webpages, and images to your customers. To reduce response time, the CDN uses the edge location nearest to the customer or the originating request location. Throughput is dramatically increased because the web assets are delivered from cache. For dynamic data, many CDNs can be configured to retrieve data from the origin servers.

You can use CloudFront to deliver your website, including dynamic, static, and streaming content, using a global network of edge locations. CloudFront automatically routes requests for your content to the nearest edge location, so content is delivered with the best possible performance. CloudFront is optimized to work with other AWS services like Amazon S3 and Amazon Elastic Compute Cloud (Amazon EC2). CloudFront also works seamlessly with any non-AWS origin server that stores the original, definitive versions of your files. Like other AWS services, there are no contracts or monthly commitments for using CloudFront – you pay only for as much or as little content as you actually deliver through the service. Additionally, any existing solutions for edge caching in your web application infrastructure should work well in the AWS Cloud.

Managing public DNS

Moving a web application to the AWS Cloud requires some Domain Name System (DNS) changes. To help you manage DNS routing, AWS provides Amazon Route 53, a highly available and scalable cloud DNS web service. Route 53 is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to internet applications by translating names such as "www.example.com" into numeric IP addresses such as 192.0.2.1 that computers use to connect to each other. Route 53 is fully compliant with IPv6 as well.

Host security

In addition to inbound network traffic filtering at the edge, AWS also recommends that web applications apply network traffic filtering at the host level. Amazon EC2 provides a feature named security groups. A security group is analogous to an inbound network firewall, for which you can specify the protocols, ports, and source IP ranges that are allowed to reach your EC2 instances. You can assign one or more security groups to each EC2 instance, and each security group allows appropriate traffic in to each instance. Security groups can be configured so that only specific subnets, IP addresses, and resources have access to an EC2 instance. Alternatively, they can reference other security groups to limit access to EC2 instances that are in specific groups. In the AWS web hosting architecture in Figure 3, the security group for the web server cluster might allow access only from the web-layer load balancer, and only over TCP on ports 80 and 443 (HTTP and HTTPS). The application server security group, on the other hand, might allow access only from the application-layer load balancer. In this model, your support engineers would also need to access the EC2 instances, which can be achieved with AWS Systems Manager Session Manager. For a deeper discussion on security, see AWS Cloud Security, which contains security bulletins, certification information, and security whitepapers that explain the security capabilities of AWS.
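As an illustration of the web-layer rule described above, the following AWS Tools for PowerShell sketch creates a security group and allows inbound TCP 80 and 443 only from a load balancer's security group. The VPC ID, group names, and the referenced source group ID are placeholders to replace with your own values, and the cmdlets assume the AWS Tools for Windows PowerShell module is installed and credentials are configured.

# Sketch: a web-tier security group that accepts HTTP/HTTPS only from the load balancer's security group
# Placeholder values - substitute your own VPC ID and load balancer security group ID
$webSgId = New-EC2SecurityGroup -VpcId 'vpc-0example' -GroupName 'web-tier-sg' -Description 'Web tier - HTTP/HTTPS from ELB only'

foreach ($port in 80, 443) {
    $permission = New-Object Amazon.EC2.Model.IpPermission
    $permission.IpProtocol = 'tcp'
    $permission.FromPort   = $port
    $permission.ToPort     = $port
    # Reference the load balancer's security group instead of an IP range
    $sourceGroup = New-Object Amazon.EC2.Model.UserIdGroupPair
    $sourceGroup.GroupId = 'sg-0elbexample'
    $permission.UserIdGroupPairs.Add($sourceGroup)
    Grant-EC2SecurityGroupIngress -GroupId $webSgId -IpPermission $permission
}

An application-tier security group would follow the same pattern, referencing the application-layer load balancer's security group as its source instead.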
monthly co mmitments for using CloudFront – you pay only for as much or as little content as you actually deliver through the service Amazon Web Services Web Application Hosting in the AWS Cloud Page 8 Additionally any existing solutions for edge caching in your web application infrastructure should work well in the AWS Cloud Mana ging public DNS Moving a web application to the AWS Cloud requires some Domain Name System (DNS) changes To help you manage DNS routing AWS provides Amazon Route 53 a highly available and scalable cloud DNS web service Route 53 is designed to give developers and businesses an extremely reliable and cost effective way to route end users to internet applications by translating names such as “wwwexamplecom ” into numeric IP addresses such as 192021 that computers use to connect to each other Route 53 is fully compliant with IPv6 as well Host security In addition to inbound network traffi c filtering at the edge AWS also recommends web applications apply network traffic filtering at the host level Amazon EC2 provides a feature named security groups A security group is analogous to an inbound ne twork firewall for which you can specify the protocols ports and source IP ranges that are allowed to reach your EC2 instances You can assign one or more security groups to each EC2 instance Each security group allows appropriate traffic in to each i nstance Security groups can be configured so that only specific subnets IP addresses and resources have access to an EC2 instance Alternatively they can reference other security groups to limit access to EC2 instances that are in specific groups In the AWS web hosting architecture in Figure 3 the security group for the web server cluster might allow access only from the web layer Load Balancer and only over TCP on ports 80 and 443 (HTTP and HTTPS) The application server security group on the other hand might allow access only from the application layer Load Balancer In this model your support engineers would also need to access the EC2 instances what can be achieved with AWS Systems Manager Session Manager For a deeper discussion on security the AWS Cloud Security which contains security bulletins certification information and security whitepapers that explain the security capabilities of AWS Amazon Web Services Web Application Hosting in the AWS Cloud Page 9 Load balancing across clusters Hardware load balancers are a common network appliance used in traditional web application architectures AWS provides this capability through the Elastic Load Balancing (ELB) service ELB automa tically distributes incoming application traffic across multiple targets such as Amazon EC2 instances containers IP addresses AWS Lambda functions and virtual appliances It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones Elastic Load Balancing offers four types of load balancers that all feature the high availability automatic scaling and robust security necessary to make your applications fault tolerant Finding other hosts and services In the traditional web hosting architecture most of your hosts have static IP addresses In the AWS Cloud most of your hosts have dynamic IP addresses Although every EC2 instance can have bot h public and private DNS entries and will be addressable over the internet the DNS entries and the IP addresses are assigned dynamically when you launch the instance They cannot be manually assigned Static IP addresses (Elastic IP addresses in AWS termi nology) 
can be assigned to running instances after they are launched You should use Elastic IP addresses for instances and services that require consistent endpoints such as primary databases central file servers and EC2 hosted load balancers Caching within the web application Inmemory application caches can reduce load on services and improve performance and scalability on the database tier by caching frequently used information Amazon ElastiCache is a web service that makes it easy to deploy operate and scale an in memory cache in the cloud You can configure the in memory cache you create to automatically scale with load and to automatically replace failed nodes ElastiCache is protocol complian t with Memcached and Redis which simplifies cloud migrations for customers running these services on premises Database configuration backup and failover Many web applications contain some form of persistence usually in the form of a relational or non relational database AWS offers both relational and non relational Amazon Web Services Web Application Hosting in the AWS Cloud Page 10 database services Alternatively you can deploy your own database software on an EC2 instance The following table summarizes these options which are discuss ed in greater detail in this section Table 1 — Relational and non relational database solutions Relational database solutions Nonrelational database solutions Managed database service Amazon RDS for MySQL Oracle SQL Server MariaDB PostgreSQL Amazon Aurora Amazon DynamoDB Amazon Keyspaces Amazon Neptune Amazon QLDB Amazon Timestream Selfmanaged Hosting a relational database management system ( DBMS ) on an Amazon EC2 instance Hosting a non relational database solution on an EC2 instance Amazon RDS Amazon RDS gives you access to the capabilities of a familiar MySQL PostgreSQL Oracle and Microsoft SQL Server database engine The code applications and tools that yo u already use can be used with Amazon RDS Amazon RDS automatically patches the database software and backs up your database and it stores backups for a userdefined retention period It also supports point intime recovery You can benefit from the flexi bility of being able to scale the compute resources or storage capacity associated with your relational database instance by making a single API call Amazon RDS Multi AZ deployments increase your database availability and protect your database against unp lanned outages Amazon RDS Read Replicas provide read only replicas of your database so you can scale out beyond the capacity of a single database deployment for read heavy database workloads As with all AWS services no upfront investments are required and you pay only for the resources you use Amazon Web Services Web Application Hosting in the AWS Cloud Page 11 Hosting a relational database management system (RDBMS) on an Amazon EC2 instance In addition to the managed Amazon RDS offering you can install your choice of RDBMS (such as MySQL Oracle SQL Server or DB2) on an EC2 instance and manage it yourself AWS customers hosting a database on Amazon EC2 successfully use a variety of primary/standby and replication models including mirroring for read only copies and log shipping for always ready passive standbys When managing your own database software directly on Amazon EC2 you should also consider the availability of fault tolerant and persistent storage For this purpose we recommend that databases running on Amazon EC2 use Amazon Elastic Block Store (Amazon EBS) volumes which are similar to network attached storage For 
EC2 instances running a database you should place all database data and logs on EBS volumes These will remain available even if the database h ost fails This configuration allows for a simple failover scenario in which a new EC2 instance can be launched if a host fails and the existing EBS volumes can be attached to the new instance The database can then pick up where it left off EBS volumes automatically provide redundancy within the Availability Zone If the performance of a single EBS volume is not sufficient for your databases needs volumes can be striped to increase input/output operations per second ( IOPS ) performance for your database For demanding workloads you can also use EBS Provisioned IOPS where you specify the IOPS required If you use Amazon RDS the service manages its own storage so you can focus on managing your data Nonrelational databases In addition to support for r elational databases AWS also offers a number of managed nonrelational databases : • Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scala bility Using the AWS Management Console or the DynamoDB API you can scale capacity up or down without dow ntime or performance degradation Because DynamoDB handles the administrative burdens of operating and scaling distributed databases to AWS Amazon Web Services Web Application Hosting in the AWS Cloud Page 12 you don’t have to worry about hardware provisioning setup and configuration replication software patching or cl uster scaling • Amazon DocumentDB (with MongoDB compatibility) is a database service that is purpose built for JSON data management at scale fully managed and runs on AWS and enterprise ready with high durability • Amazon Keyspaces (for Apache Cassandra ) is a scalable highly available and managed Apache Cassandra compatible database service With Amazon Keyspaces you can run your Cassandra workloads on AWS using the same Cassandra application code and developer tools that you use today • Amazon Neptune is a fast reliable fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets The core of Amazon Neptune is a purpose built high performance graph database engine optimized for storing billions of relationships and querying the graph with milliseconds latency • Amazon Quantum Le dger Database (QLDB) is a fully managed ledger database that provides a transparent immutable and cryptographically verifiable transaction log owned by a central trusted authority Amazon QLDB can be used to track each and every application data change and maintains a complete and verifiable history of changes over time • Amazon Timestream is a fast scalable and serverless time series database service for IoT and operational applications that makes it ea sy to store and analyze trillions of events per day up to 1000 times faster and at as little as 1/10th the cost of relational databases Additionally you can use Amazon EC2 to host other non relational database technologies you may be working with Storage and backup of data and assets There are numerous options within the AWS Cloud for storing accessing and backing up your web application data and assets Amazon S3 provides a highly available and redundant object store S3 is a great storage solution for static objects such as images videos and other static media S3 also supports edge caching and streaming of these assets by interacting with CloudFront Amazon Web Services Web Application Hosting in 
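To make the static-asset pattern above concrete, here is a minimal Python (boto3) sketch of publishing an object to Amazon S3 with a Cache-Control header that an edge cache such as CloudFront can honor. The bucket name and file paths are hypothetical placeholders, credentials are assumed to be configured in the environment, and the snippet is an illustration of the approach rather than a prescribed implementation.

```python
import boto3

# Minimal sketch: publish a static asset to S3 so an edge cache (CloudFront)
# can serve it. The bucket name and paths below are hypothetical placeholders.
s3 = boto3.client("s3")

s3.upload_file(
    Filename="build/img/hero.jpg",            # local asset from your build output
    Bucket="example-static-assets-bucket",    # assumed bucket used as a CloudFront origin
    Key="img/hero.jpg",
    ExtraArgs={
        "ContentType": "image/jpeg",
        "CacheControl": "max-age=86400",      # allow edge locations to cache for one day
    },
)
```

Setting Cache-Control at upload time is one simple way to let the CDN layer described above serve repeat requests from cache instead of returning to the origin.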
the AWS Cloud Page 13 For atta ched file system like storage EC2 instances can have EBS volumes attached These act like mountable disks for running EC2 instances Amazon EBS is great for data that needs to be accessed as block storage and that requires persistence beyond the life of t he running instance such as database partitions and application logs In addition to having a lifetime that is independent of the EC2 instance you can take snapshots of EBS volumes and store them in S3 Because EBS snapshots only back up changes since th e previous snapshot more frequent snapshots can reduce snapshot times You can also use an EBS snapshot as a baseline for replicating data across multiple EBS volumes and attaching those volumes to other running instances EBS volumes can be as large as 1 6TB and multiple EBS volumes can be striped for even larger volumes or for increased input/output ( I/O) performance To maximize the performance of your I/O intensive applications you can use Provisioned IOPS volumes Provisioned IOPS volumes are designe d to meet the needs of I/O intensive workloads particularly database workloads that are sensitive to storage performance and consistency in random access I/O throughput You specify an IOPS rate when you create the volume and Amazon EBS provisions that rate for the lifetime of the volume Amazon EBS currently supports IOPS per volume ranging from maximum of 16000 (for all instance types) up to 64000 ( for instances built on Nitro System ) You can stripe multiple volumes together to deliver thousands of IOPS per instance to your application Apart from this for higher throughput and mission critical workloads requiring sub millisecond latency y ou can use io2 block express volume type which can support up to 256000 IOPS with a maximum storage capacity of 64TB Automatically scaling the fleet One of the key differences between the AWS Cloud architecture and the traditional hosting model is that A WS can automatically scale the web application fleet on demand to handle changes in traffic In the traditional hosting model traffic forecasting models are generally used to provision hosts ahead of projected traffic In AWS instances can be provisioned on the fly according to a set of triggers for scaling the fleet out and back in The Auto Scaling service can create capacity groups of servers that can grow or shrink on demand Auto Scaling also works directly with Amazon CloudWatch for metrics data Amazon Web Services Web Application Hosting in the AWS Cloud Page 14 and with Elastic Load Balancing to add and remove hosts for load distribution For example if the web servers are reporting greater than 80 percent CPU utilization over a period of time an additional web server could be quickly deployed and then automatically added to the load balancer for immediate inclusion in the load balancing rotation As shown in the AWS web hosting architecture model you can create multiple Auto Scaling groups for different layers of the architecture so that each layer can scale independently For example the web server Auto Scaling group might trigger scaling in and out in response to changes in network I/O whereas the application server Auto Scaling group might scale out and in according to CPU utilization You can set minimums and maximums to help ensure 24/7 availability and to cap the usage within a group Auto Scaling triggers can be set both to grow and to shrink the total fleet at a given layer to match resource utilizatio n to actual demand In addition to the Auto Scaling service you can 
scale Amazon EC2 fleets directly through the Amazon EC2 API which allows for launching terminating and inspecting instances Additional security features The number and sophistication of Distributed Denial of Service (DDoS) attacks are rising Traditionally these attacks are difficult to fend off They often end up being costly in both mitigation time and power spent as well as the opportunity cost from lost visits to your website dur ing the attack There are a number of AWS factors and services that can help you defend against such attacks One of them is the scale of the AWS network The AWS infrastructure is quite large and enables you to leverage our scale to optimize your defense Several services including Elastic Load Balancing Amazon CloudFront and Amazon Route 53 are effective at scaling your web application in response to a large increase in traffic The infrastructure protection services in particular help with your defense strategy : • AWS Shield is a managed DDoS protection service that helps safeguard against various forms of DDoS attack vectors The standard offering of AWS Shield is free and automatically active throughout your account Th is standard offering helps to defend against the most common network and transportation layer attacks In addition to this level the advanced offering grants higher levels of Amazon Web Services Web Application Hosting in the AWS Cloud Page 15 protection against your web application by providing you with near real time visibility into an ongoing attack as well as integrating at higher levels with the services mentioned earlier Additionally you get access to the AWS DDoS Response Team (DRT) to help mitigate large scale and sophisticated attacks against your resources • AWS WAF (Web Application Firewall) is designed to protect your web applications from attacks that can compromise availability or security or otherwise consume excessive resources AWS WAF works in line with CloudFr ont or Application Load Balancer along with your custom rules to defend against attacks such as cross site scripting SQL injection and DDoS As with most AWS services AWS WAF comes with a fully featured API that can help automate the creation and edit ing of rules for your AWS WAF instance as your security needs change • AWS Firewall Manager is a security management service which allows you to centrally configure and manage firewall rules across y our accounts and applications in AWS Organizations As new applications are created Firewall Manager makes it easy to bring new applications and resources into compliance by enforcing a common set of s ecurity rules Failover with AWS Another key advantage of AWS over traditional web hosting is the Availability Zones that give you easy access to redundant deployment locations Availability Zones are physically distinct locations that are engineered to be insulated from failures in other Availability Zones They provide inexpensive low latency network connectivity to other Availability Zones in the same AWS Region As the AWS web hosting architecture diagram shows AWS recommend s that you depl oy EC2 hosts across multiple Availability Zones to make your web application more fault tolerant It’s important to ensure that there are provisions for migrating single points of access across Availability Zones in the case of failure For example you s hould set up a database standby in a second Availability Zone so that the persistence of data remains consistent and highly available even during an unlikely failure scenario You can do this on Amazon 
EC2 or Amazon RDS with the click of a button Amazon Web Services Web Application Hosting in the AWS Cloud Page 16 While s ome architectural changes are often required when moving an existing web application to the AWS Cloud there are significant improvements to scalability reliability and cost effectiveness that make using the AWS Cloud well worth the effort The next sect ion discuss es those improvements Key considerations when using AWS for web hosting There are some key differences between the AWS Cloud and a traditional web application hosting model The previous section highlighted many of the key areas that you should consider when deploying a web application to the cloud This section points out some of the key architectural shifts that you need to consider when you bring any application into the cloud No more physical network appliances You cannot deploy physi cal network appliances in AWS For example firewalls routers and load balancers for your AWS applications can no longer reside on physical devices but must be replaced with software solutions There is a wide variety of enterprise quality software solu tions whether for load balancing or establishing a VPN connection This is not a limitation of what can be run on the AWS Cloud but it is an architectural change to your application if you use these devices today Firewalls everywhere Where you once had a simple demilitarized zone (DMZ ) and then open communications among your hosts in a traditional hosting model AWS enforces a more secure model in which every host is locked down One of the s teps in planning an AWS deployment is the analysis of traffic between hosts This analysis will guide decisions on exactly what ports need to be opened You can create security groups for each type of host in your architecture You can also create a large variety of simple and tiered security models to enable the minimum access among hosts within your architecture The use of network access control lists within Amazon VPC can help lock down your network at the subnet level Amazon Web Services Web Appli cation Hosting in the AWS Cloud Page 17 Consider the availability of multiple data centers Think of Availability Zones within an AWS Region as multiple data centers EC2 instances in different Availability Zones are both logically and physically separated and they provide an easy touse model for deploying your application across data centers for both high availability and reliability Amazon VPC as a Regional service enables you to leverage Availability Zones while keepi ng all of your resources in the same logical network Treat hosts as ephemeral and dynamic Probably the most important shift in how you might architect your AWS application is that Amazon EC2 hosts should be considered ephemeral and dynamic Any applicatio n built for the AWS Cloud should not assume that a host will always be available and should be designed with the knowledge that any data in the EC2 instant stores will be lost if an EC2 instance fails When a new host is brought up you shouldn’t make ass umptions about the IP address or location within an Availability Zone of the host Your configuration model must be flexible and your approach to bootstrapping a host must take the dynamic nature of the cloud into account These techniques are critical fo r building and running a highly scalable and fault tolerant application Consider containers and serverless This whitepaper primarily focuses on a more traditional web architecture However consider modernizing your web applications by 
moving to Containers and Serverless technologies leveraging services like AWS Fargate and AWS Lambda to enable you to abstracts away the use of virtual machines to perform compute tasks With serverless computing infrastructure management tasks like capacity provisioning and patching are handled by AWS so you can build mor e agile applications that allow you to innovate and respond to change faster Amazon Web Services Web Application Hosting in the AWS Cloud Page 18 Consider automated deployment • Amazon Lightsail is an easy touse virtual private server (VPS) that offers you everything needed to build an application or website plus a cost effective monthly plan Light sail is ideal for simpler workloads quick deployments and getting started on AWS It’s designed to help you start small and then scale as you grow • AWS Elastic Beanstalk is an easy touse service for deploying and scaling web applications and services developed with Java NET PHP Nodejs Python Ruby Go and Docker on familiar servers such as Apache NGINX Passenge r and IIS You can simply upload your code and Elastic Beanstalk automatically handles the deployment capacity provisioning load balancing auto matic scaling and application health monitoring At the same time you retain full control over the AWS res ources powering your application and can access the underlying resources at any time • AWS App Runner is a fully managed service that makes it easy for developers to quickly deploy containerized web applicat ions and APIs at scale and with no prior infrastructure experience required Start with your source code or a container image App Runner automatically builds and deploys the web application and load balances traffic with encryption App Runner also scale s up or down automatically to meet your traffic needs • AWS Amplify is a set of tools and services that can be used together or on their own to help front end web and mobile developers build scalable full sta ck applications powered by AWS With Amplify you can configure app backends and connect your app in minutes deploy static web apps in a few clicks and easily manage app content outside the AWS Management C onsole Conclusion There are numerous architectural and conceptual considerations when you are contemplating migrating your web application to the AWS Cloud The benefits of having a cost effective highly scalable and fault tolerant infrastructure that grows with your business far outstrips the efforts of migrating to the AWS Cloud Amazon Web Services Web Application Hosting in the AWS Cloud Page 19 Contributors The following individuals and organizations contributed to this document: • Amir Khairalomoum Senior Solutions Architect AWS • Dinesh Subramani Senior Solutions Architect AWS • Jack Hemion Senior Solut ions Architect AWS • Jatin Joshi Cloud Support Engineer AWS • Jorge Fonseca Senior Solutions Architect AWS • Shinduri K S Solutions Architect AWS Further reading • Deploy Django based application onto Amazon LightSail • Deploying a hig h availability Drupal website to Elastic Beanstalk • Deploying a high availability PHP application to Elastic Beanstalk • Deploying a Nodejs application with DynamoDB to Elastic Beanstalk • Getting Started with Linux Web Applications in the AWS Clou d • Host a Static Website • Hosting a static website using Amazon S3 • Tutorial: Deploying an ASPNET core application with Elastic Beanstalk • Tutorial: How to deploy a NET sample application using Elastic Beanstalk Document version s Date Description August 20 2021 Multiple sections and 
diagrams updated with new services, features, and updated service limits.
September 2019 – Updated icon label for “Caching with ElastiCache”.
July 2017 – Multiple sections added and updated for new services. Updated diagrams for additional clarity and services. Addition of Amazon VPC as the standard networking method in AWS in “Network Management”. Added section on DDoS protection and mitigation in “Additional Security Features”. Added a small section on serverless architectures for web hosting.
September 2012 – Multiple sections updated to improve clarity. Updated diagrams to use AWS icons. Addition of “Managing Public DNS” section for detail on Amazon Route 53. “Finding Other Hosts and Services” section updated for clarity. “Database Configuration, Backup, and Failover” section updated for clarity and DynamoDB. “Storage and Backup of Data and Assets” section expanded to cover EBS Provisioned IOPS volumes.
May 2010 – First publication.
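As a concrete illustration of the “Automatically scaling the fleet” section above, the following minimal Python (boto3) sketch attaches a target-tracking scaling policy to a web-tier Auto Scaling group so that hosts are added before average CPU utilization approaches the 80 percent mark discussed there. The group name is a hypothetical placeholder and is assumed to already exist and be registered with a load balancer; a step-scaling policy driven by a CloudWatch alarm would be an equally valid way to express the same trigger.

```python
import boto3

# Minimal sketch of the fleet-scaling trigger described above: keep average
# CPU of the web tier near a target so capacity is added ahead of demand.
# "web-tier-asg" is a placeholder for an existing Auto Scaling group.
autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",
    PolicyName="web-tier-cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 70.0,   # scale out before utilization reaches 80 percent
    },
)
```

Separate groups (for example, one for the web tier and one for the application tier) can each carry their own policy, which is how the architecture described above scales each layer independently.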
|
General
|
consultant
|
Best Practices
|
WeDo_Telecom_RAID_Risk_Management_Solution_in_the_AWS_Cloud
|
WeDo Telecom RAID Risk Management Solution in the AWS Cloud June 2018 Archived This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessmen t of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitme nts conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS a nd its customers Archived Contents Introduction 1 Revenue Assurance Overview 1 WeDo Revenue Assurance Platform Overview 3 Solution Capabilities 3 Functional Solution Architecture 9 WeDo Revenue Deployment on AWS Cloud 11 AWS Services for Deploying WeDo Revenue Assurance Solution 11 AWS Architecture Principles for Deploying WeDo Revenue Assurance Solution 15 WeDo Revenue Assurance Solution Deployment Architecture on AWS 16 Benefits of Deploying WeDo Revenue Assurance solution in the AWS Cloud 19 Conclusion 21 Contributors 21 Archived Abstract This whitepaper provides an architectural overview of how the WeDo Revenue Assurance Solution operates on the AWS Cloud It is written for executive architect and development teams that need to make a decision to deploy a revenue assurance solution for their consumer or enterprise business on the AWS Cloud ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 1 Introduction In an increasingly competitive market Communications Service Providers (CSPs) are being forced to quickly adapt to meet customer expectations and market trends by differentiating themselves from competition with innovative and attractive services and best inclass customer experience At the same time declining prices are creating pressure to reduce costs forcing CSPs to become leaner while continuing to deliver high quality service and run their business profitably By implementing an agile data processing environment that includes operationalizing auditing and optimizing processes these challenges can be met at a lower cost WeDo Technologies’ RAID pl atform is an end toend Risk Management software that lets CSPs focus on growing their businesses by putting risks under control The platform is available both on premises and in the cloud delivering advanced solutions that successfully support and assur e traditional services such as voice and data and also the release of next generation services supported by core telecom networks as they evolve towards Virtualization Cloud and 5G Risk Management Overview Revenue Assurance: Between delivering a servic e and collecting its revenues there is a huge amount of hidden risks that can arise and influence a communications service provider’s bottom line Service Providers on average lose 1 to 3 percent of their revenue because of operational shortcomings These values can be influenced by factors such as networks and service type geography carrier type and Revenue Assurance 
maturity level To ensure that the services provided to customers are actually being billed and collected accurately Service Provide rs need to implement an effective control system capable of enhancing their end toend Revenue Assurance processes that responds to their current and future needs RAID Revenue Assurance is a software tool specifically designed to tackle the critical chall enges across the entire revenue chain while increasing a Service Provider’s maturity level ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 2 With the deployment of RAID Service Provider’s gain additional insights during the identification monitoring and correlation of the root cause analysis of their revenue leakages to accelerate the process of capturing earnings previously lost Fraud Management: Communications Service Providers (CSPs) already feeling the impact of fraud across dedicated networks for voice and data traffic as well as converged netwo rks will now be facing additional fraud challenges associated with Next Generation Networks (NGN) Since NGNs are responsible for the provisioning of ground breaking services operators have been placed in the difficult position of dealing with a whole ra ft of unanticipated fraud scenarios Operators cannot tackle these challenges using conventional fraud management systems (FMS) because they were not built for today’s increasingly complex networks More suitable tools are needed to improve NGN fraud detec tion without abandoning previous network environments WeDo Technologies’ solution RAID Fraud Management addresses a whole range of fraud types across a wide variety of environments It can either be integrated into RAID’s Risk Management framework or us ed as a standalone solution to support fraud detection teams When combined RAID is able to create direct linkages between fraud and revenue assurance cases accelerating the prevention of future risks and threats The technology is powered by cross funct ional data feeds that are capable of producing interdepartmental alarms and flagged behaviors that ultimately can be consolidated into actionable reporting and capable of showing a 360º view of a company’s business performance RAID Fraud Management not o nly tackles current known fraud patterns but also protects CSPs against upcoming threats RAID Fraud Management can be tightly integrated into a CSP environment and is capable of interacting with numerous systems Business Assurance: More than just lookin g at the revenue value chain there is a set of business support processes within the CSPs that are critical to audit in order to ensure that a company keeps its costs under control Along these processes we can include order provisioning sales incentives or collections among others RAID Business Assurance near real time data integration and auditing capabilities make it possible to closely keep track of the accuracy of the business ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 3 and internal process leading to a cost effective management Today’s mana gement require to keep track of leading indicators like outsourcing control margin assurance financial control and resource management With RAID Business Assurance it is possible to collect data from all management support systems to ensure that a compa ny’s internal processes are aligned with the defined targets In order to help CSPs keep costs under control RAID Business Assurance is continually reading aggregating and validating data over the audited 
processes analyzing transactions and triggering c all for action when deviations are detected WeDo RAID Risk Management Platform Overview Solution Capabilities RAID is an all inone software that collects data across business applications and platforms to provide detailed monitoring of business activity to help improve corporate performance By providing the foundation for all your data integration RAID removes the burden of maintaining silos of data sources so you can run your operations safer and make better business decisions to assure manage and o ptimize your business RAID offers a modular approach to Risk Management with the following areas: ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 4 Additionally to the modular approach the Risk Management solution can be configured to provide with customized validations tailored for specific needs of particulars CSP Operations Revenue Assurance Provisioning Assurance : As CSPs scale their service portfolio technology and subscriber base their biggest challenge becomes the growing volume and complexity of orders which need to be provisioned during the activation and the deactivation of services and customers Main benefits may include: • Detect errors arising from manual or automatic provisioning actions; • Guarantee timely customer provisioning synchronization; • Support the launch of new products and services; • Reduce internal fraud; • Maximize revenues with no delays in service delivery; • Create a superior customer experience Usage Assurance : RAID’s Usage Assurance modules collect session and signaling data from multiple measuring points along the revenu e chain and reconcile it using historical cross system and threshold validation rules generating alarms that can be analyzed at detailed level by using RAID’s Advanced Case Management features Main benefits may include: • Guarantee that usage is accurate ly reflected in customer billing through control mechanisms that verify the flow of network events from the switch to their inclusion in the bill; • Drill down into multiple view levels of the networks’ xDR flows giving access to high level reports KPIs a nd aggregated views by service or revenue stream as well as detailed CDR level views for individual network elements; ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 5 • Frictionless data capturing from CSP’s multiple OSS/BSS systems and any additional endpoints Rating Validation : RAID Rating Validation module enables CSPs to control revenue leakage by efficiently tracking and correcting any underlying errors in the rating process This module enables the deployment of a revenue assurance solution which validates rate plans configuration rating charges bundles discounts and fees Main benefits may include: • Automated and easy touse validation process that checks if rated records are correctly calculated according to the CSP rate plans; • Matching tolerances and filters definition allow for control over w hich events should be validated and for the drilling down into calculations per independent CDR Billing Validation : RAID Billing Validation module is engineered to validate the accuracy of the customer invoicing process This module ensures customer bills are fully validated for total expenditure as well as for the total components of the invoice by running independent external audits and verification procedures of the operators’ billing mechanisms It compares the itemized invoices against the customer’ s services and 
contracts verifying billing data and customer invoice evolution according to the defined rules Through the definition of thresholds and matching them against historical data it can increase the accuracy of the billing cycle by looking at trend analysis Fraud Management RAID provides with pre built capabilities to monitor and detect the traditional technical fraud scenarios such as: Roaming Fraud By pass IRSF High Usage Subscription Fraud and Prepaid Fraud Some examples of controls i mplemented in Fraud Management are: Roaming Fraud controls: ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 6 • Evaluate traffic volume to identify high usage consumption or no to low traffic patterns; • Identify traffic to known fraudsters through black list analysis; • Apply specific rules and scores to visit ed and/or risky destination countries; • Detect and be alerted to call collisions and roaming bypass Bypass Fraud controls: • Evaluate traffic volume to identify high usage consumption to a large number of different destinations; • Identify call collision and call velocity above limit thresholds; • Audit traffic to identify equipment serving multiple subscribers within short periods of time; • Identify behaviors with deviations from expected usage or similar to known fraudsters IRSF controls: • Evaluate traffic volu me to specific international numbers to identify abnormal usage consumption; • Identify behaviors with deviations from expected usage; • Audit traffic to identify subscribers with multiple equipment and SIM replacements within short periods of time; • Identify n umbers with high similarity to known premium rate numbers High Usage Fraud controls: • Audit all the traffic (on net national off net international or visitors) to identify abnormal usage related to large duration calls high costs calls or high usage of SMS data or voice services Subscription Fraud controls: • Audit subscriptions list to identify multiple similar activations which may indicate fraudulent behavior; ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 7 • Audit new subscriptions with a high degree of similarity to known fraudsters Prepaid Fraud controls: • Audit all recharges to identify a high number of occurrences repeated recharges unexpected recharge amounts and usage of scratch cards already used or not yet sold in the market • Identity account abuses related to unauthorized airtime transfers unexpected changes in account balance and abnormal account type migrations Additionally to that RAID Risk Management Solution can be configured with advanced analytics such as machine learning predictive analysis non supervised models etc to broad and expend the reach of detection of suspicious activities Business Assurance Partner Incentives Assurance: help to keep your sales force engaged and effective to ensure fraud and customer acquisition costs are kept to a minimum Main benefits may includ e: • Assures that all relevant business data is received by Incentives System; • Monitors to find human and system errors that generate increased costs; • Assures that partner payments are correctly generated; • Monitors partner disputes to detect abuse including recurrent or rising conflict scenarios that generate large numbers of disputes high dispute dollar amounts and resolution times; • Detects fraud behavior that can damage a CSP’s image and high costs to fix if not detected earlier; • Manages partners relations with a CSP and provides the tools to improve sales force 
engagement; • Provides the tools to monitor the performance of incentives programs; ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 8 • Promotes pro active and effective problem analysis; • Centralizes within a single tool the m anagement and resolution of issues detected by this process and shares information between different operational and business teams; • Tracks and measures Incentive issues; • Measures the performance of CSP incentive processes and provides a set of out ofthebox performance reports Customer Collections Assurance : enables CSPs to monitor your customer credit scoring process along with your distinct collections and dunning strategies It also measures the performance of internal and external debt recovery age ncies Main benefits may include: • Evaluates the credit risk for each subscriber including tracking credit scores purchases and buying behavior over time; • Monitors subscribers for credit fraud; • Allows the collections team to evaluate the success of their c ollections strategy; • Validates the eligibility rules used to create and assign debt packages to Data Collection Systems or to legal agencies; • Provides a set of standard out ofthebox scoring collections and debt recovery measures and KPIs allowing the CSP to quickly evaluate the performance of their collections team; • Measures the efficiency and accuracy of debt recovery and the associated commission plans Order Handling Assurance : focuses on achieving efficient order management improving customer sati sfaction and loyalty by reducing delays and back orders enhancing order accuracy and making communication easier It validates all steps to guarantee order accuracy Main benefits may include: • Efficient order management with a minimum of delays and back orders; ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 9 • Improved customer experience through on time delivery leading to greater customer satisfaction and loyalty; • Improved order accuracy and communication among support teams; • Identify provisioning flaws; • Reduce order unfeasibility and identify potentia l backlog issues; • Reducing costs by ensuring efficient allocation of resources Functional Solution Architecture The solution capabilities described previously fit into a Functional Solution Architecture that can be divided into 5 major continuous steps: The first step is Collect : To gain valuable insight into all the business applications data the solution uses WeDo ’s Smart Data Stream which is able to collect data stored in a great variety of file based format s relational database s and in Hadoop Thi s proven dat a integration solution has particularly developed for blending and enrichi ng vast volumes of telecom data The second step is to Monitor : While collecting data from multiple sources the solution monitors it supporting full alarm traceability f rom the instant an alarm is triggered to the validation and application of business rules The WeDo’s Unified Validation Engine keeps th ese business rules which are provided “out of ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 10 thebox” but can also be easily configured and managed through a visually intuitive rule designer that supports specific user profiles Since the ETL is fully integrated with this rules engine the solution is able to deliver accurate and auditable results The third step is to Notify : The solution provides dashboards and repor ts that can quickly provide understanding of 
what needs attention which alerts need to be tracked and which tasks require follow up This visual experience is able to combine data sources add filters and drill down into specific information with just a f ew clicks either users accessing it through a desktop PC tablet or smartphone The fourth step is to Discover : WeDo’s Data Model and analytics tools enable business analysts using self service tools to explore and visualize data and investigate deeper drilling down for root cause analysis and for gain ing real business insight Users are able to access the business logic used and have instant access to data from internal and external sources stored in relational databases Hadoop and NoSQL systems The fourth step is to Act: With WeDo ’s Adaptive Case Management it’s easy to allocate tasks across the business and quickly investigate and analyze cases for faster more accurate decisions It also enables teams to gather supporting evidence and compile ad hoc or standard reports The Adaptive Case Management also allows for easy t racking of all case activity and history through a case timeline It also simplifies defining SLAs an d escalation paths ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 11 WeDo RAID Deployment on AWS Cloud AWS Services for Deploying WeDo RAID Risk Management Solution This section describes the AWS infrastructure and services that you need to run the WeDo RAID Risk Management platform on AWS Regions and Availability Zones Each AWS Region is a separate geographic area that is isolated from the other Regions Regions provide you the ability to place resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances and data in multiple locations Resources aren't replicated across Regions unless you do so specifically An AWS account provides multiple Regions so that you can l aunch your applications in locations that meet your requirements For example you might want to launch your applications in Europe to be closer to your European customers or to meet regulatory requirements Each Region has multiple isolated locations kno wn as Availability Zones Each Availability Zone runs on its own physically distinct independent infrastructure and is engineered to be highly reliable Common points of failure such as generators and cooling equipment aren’t shared across Availability Zones Each Availability Zone is isolated but Availability Zones within a Region are connected through low latency links For more information about Regions and Availability Zones see Regions and Availability Zones in the Amazon EC2 User Guide for Linux Instances 1 Amazon Route 53 Amazon Route 53 provides highly available and scalable Domain Name System (DNS) domain name registration and health chec king web services It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to internet applications by translating names like examplecom into the numeric IP addresses such as 192021 that comput ers use to connect to each other You can combine your DNS with health checking services to route traffic to healthy endpoints or to independently monitor and/or alarm on endpoints You can also purchase and ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 12 manage domain names such as examplecom and auto matically configure DNS settings for your domains Route 53 effectively connects user requests to infrastructure running in AWS – such as Amazon EC2 instances Elastic Load Balancing load balancers or 
Amazon S3 buckets – and can also be used to route user s to infrastructure outside of AWS Amazon Elastic Compute Cloud Amazon Elastic Compute Cloud ( Amazon EC2 ) is a web service that provides resizable compute capacity in the cloud that is billed by the hour You can run virtual machines (EC2 instances) ranging in size from 1 vCPU and 1 GB memory to 128 vCPU and 2 TB memory You have a choice of operating systems including Windows Server 2008/2012 Oracle Linux Red Hat Enterprise Linux and SUSE Linux Elastic Load Balancing Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple EC2 instances in the cloud It enables y ou to achieve greater levels of fault tolerance in your applications seamlessly providing the required amount of load balancing capacity needed to distribute application traffic You can use Elastic Load Balancing for load balancing web server traffic Amazon Elastic Block Store Amazon Elastic Block Store (Amazon EBS) provides persistent block level storage volumes for use with EC2 instances in the AWS Cloud Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you fro m component failure thereby offering high availability and durability EBS volumes offer the consistent and low latency performance needed to run your workloads Amazon Machine Image An Amazon Machine Image (AMI) is a packaged up environment that provides the information required to launch your EC2 instance You specify an AMI when you launch an instance and you can launch as many instances from the AMI as you need For more information on AMIs see the Documentation 2 Amazon EC2 uses Amazon EBS and Amazon S3 to provide reliable scalable storage of AMIs so that we can boot them when you ask us to do so ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 13 Amazon Simple Storage Service Amazon Simple Storage Service (Amazon S3) prov ides developers and IT teams with secure durable highly scalable object storage It provides a simple web services interface that you can use to store and retrieve any amount of data at any time from anywhere on the web With Amazon S3 you pay only fo r the storage you actually use There is no minimum fee and no setup cost Amazon Virtual Private Cloud Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud in which you can launch AWS resources in a virtual network that you define You have complete control over your virtual networking environment including selection of your own private IP address range creation of subnets and configuration of route tables and network gateways You can leverage multi ple layers of security including security groups and network access control lists to help control access to EC2 instances in each subnet Additionally you can create a hardware virtual private network (VPN) connection between your corporate data center and your VPC and then you can leverage the AWS Cloud as an extension of your corporate data center Amazon Relational Database Services Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up operate and scale a r elational database in the cloud It provides cost efficient resizable capacity for an industry standard relational database and manages common databas e administration tasks AWS CloudTrail With AWS CloudTrail you can monitor your AWS deployments in the cloud3 by getting a history of AWS API calls for your account including API calls made via the AWS 
Management Console the AWS SDKs the command line tools and higher level AWS services You can also identify which users and accounts called AWS APIs for services that support CloudTrail the source IP address the calls were made from and when the calls occurred You can integrate CloudTrail into applications using the API automate trail creat ion for your organization check the status of your trails and control how administrators turn CloudTrail logging on and off ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 14 AWS CloudFormation AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resourc es in your cloud environment CloudFormation allows you to use a simple text file to model and provision in an automated and secure manner all the resources needed for your applications across all regions and accounts This file serves as the single sour ce of truth for your cloud environment AWS CloudFormation is available at no additional charge and you pay only for the AWS resources needed to run your applications AWS Direct Connect AWS Direct Connect makes it easy to establish a dedicated network co nnection from your premises to AWS Using AWS Direct Connect you can establish private connectivity between AWS and your data center office or colocation environment which in many cases can reduce your network costs increase bandwidth t hroughput and provide a more consistent network experience than internet based connections AWS Security and Compliance The AWS Cloud security infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today4 Security on AWS is similar to security in your on premises data center but without the costs and complexities involved in protecting facilities and hardware AWS provides a secure global infrastructure plus a range of features that you can use to help s ecure your systems and data in the cloud To learn more about AWS Security see the AWS Security Center 5 AWS Compliance enables you to understand the robust controls in place at AWS to maintain security and data protection in the cloud AWS engages with external certifying bodies and independent auditors to provide you with extensive information regarding the policies processes and controls established and operated by AWS To learn more about AWS Compliance see the AWS Compliance Center 6 ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 15 AWS Architecture Principles for Deploying WeDo Revenue Assurance Solution The WeDo Revenue Assu rance Solution (also designated as RAID) at the technical level is mainly composed of: a RAID Portal module one of more RAID Revenue Assurance Solution (RAS) modules a common database The RAID Portal module running on an Amazon EC2 compute instance provides all the web based interaction needed to configure visualize and manage all the RAID Revenue Assurance Solution functionality The RAID Revenue Assurance Solution modules running on Amazon EC2 compute instances specifically handle each of the Revenue Assurance functionalities Both the RAID Portal as the RAID Revenue Assurance Solution modules use AWS based database s (like the AWS RDS ) Using these components the bare minimum simplified architecture for a basic environment could be compose d of a single EC2 instance running the RAID Portal and a RAS module (Revenue Assurance Solution module): Since more than one RAS module and other WeDo RAID based solutions can be combined on 
the same environment the architecture can grow to have multiple RAS module s running on multiple EC2 instances for which access is centralized through the RAID Portal EC2 instance: ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 16 The RAID P ortal offers a single user transparent point of entry for a ll RAID modules that become seamless to all team users WeDo RAID Risk Management Solution Deployment Architecture on AWS To deploy RAID in AWS WeDo recommends having at least three environments: Development Where all the solution configuration will be developed through the web based GUI including incremental additions (eg Agile team) Testing Where the solution configuration and any change goes through quality checks Production Where the solution configuration actually monitors and controls real data and endusers interact with the sol ution through the web based GUI Depending on each case requirements a simple or more complex AWS architecture can be required WeDo recommends : ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 17 Each environment should have its own separate AWS VPCs for isolation Each environment should be only accessi ble through a VPN service ensuring private connectivity between the VPC and customer networks AWS Security Groups should enforce restricted access to the several architecture components (RAID Portal RAID RAS modules and Database) Keep in mind that end users only need access to the RAID Portal and pretty much all of the solution functionality is accessible through web based GUI The EC2 compute instances should be EBC backed The production architecture should use multi availability zones for fault toler ance Use AWS RDS instances for the database (using Multi AZ for production architectures) Use AWS S3 service for storing shared data files between RAID RAS modules log files and other operational related files (with VPC endpoints for private access) Use AWS CloudWatch for performance monitoring of each architecture component Deploy a private domain using Amazon Route 53 and configure the inter connection between architecture components by host names that should be equivalent between environments ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 18 Given some of the recommendations a production architecture might look like this: Sample RAID RAS Platform Deployment Dimensioning The sizing of a RAID RAS Platform Deployment depends on several variables driving both AWS EC2 compute instance types and volume storage requirements depending on your specific customer needs For example storage is largely dependent on data retention time period both for legal as well as business requirements We recommend that you contact your WeDo team for a more accurate envir onment architecture dimensioning The following is a simple sample dimensioning assuming: Generic sizing parameters Value Monitored Control Points 20 Number of concurrent users 10 Amount of xDRs per day 10 million ArchivedAmazon Web Services – WeDo Revenue Assurance Solution in AWS Cloud Page 19 Number of subscribers 2 million Retention period Up to 90 days Which would result on a sample dimensioning for AWS: Component AWS Instance Type OS or DB Volume Storage Type Volume Type Volume Storage (GB) RAID Portal + RAID RAS m4large RHEL EBS General Purpose (SSD) 55 Database dbm4large RDS DB (Oracle 12c EE) EBS General Purpose (SSD) 529 The numbers and sizing listed above were meant to demonstrate a general approach to dimension the 
Benefits of Deploying the WeDo RAID Risk Management Solution in the AWS Cloud

There are many benefits of deploying the WeDo RAID Risk Management solution on AWS:

• Lower total cost of ownership – In an on-premises environment, it is typically necessary to pay for hardware, hardware support, virtualization licensing and support, and data center costs, including floor space, electricity, and so on. These costs can be eliminated or dramatically reduced by moving to AWS. Benefits include the economies of scale and efficiencies provided by AWS, and you pay only for the compute, storage, and other resources that you use.

• Cost savings for non-production environments – WeDo Revenue Assurance on AWS enables you to shut down non-production environments when they are not being used in order to save costs (see the sketch after this list). For example, if a development environment is used for only 40 hours a week (8 hours a day, 5 days a week), you pay for only 40 hours of Amazon EC2 compute charges, as opposed to 168 hours based on 24/7 usage in an on-premises environment. This represents up to a 75% savings.

• Replace CapEx with OpEx – You can implement a RAID BSS solution or project on AWS without any upfront cost or commitment for compute, storage, or network infrastructure.

• Unlimited environments – An on-premises environment usually provides a limited set of environments to work with; provisioning additional environments can take a long time or might not be possible. With AWS, you can create virtually any number of new environments in minutes, as required. In addition, you can create a separate environment for each major project, enabling each of your teams to work independently with the resources they need. Teams can subsequently converge in a common integration environment when they are ready. At the conclusion of a project, you can terminate the environment and cease payment.

• Right size anytime – Customers often oversize on-premises environments for the initial phases of a project but are subsequently unable to cope with growth in later phases. With AWS, you can scale your compute usage up or down at any time, and you pay only for the individual services you need, for as long as you use them. In addition, you can change instance sizes in minutes through the AWS Management Console, the AWS Application Programming Interface (API), or the Command Line Interface (CLI).

• Low-cost disaster recovery – You can build low-cost standby disaster recovery environments for existing deployments; the full costs are incurred only for the duration of any outage that occurs.

• Ability to test application performance – Although performance testing is recommended prior to any major change to a RAID BSS solution environment, most customers only performance test their RAID BSS application during the initial launch, on the yet-to-be-deployed production hardware. Later releases are usually never performance tested, due to the expense and the lack of an environment suitable for performance testing. AWS minimizes the risk of discovering performance issues late, in production: you can create an AWS Cloud environment easily and quickly, just for the duration of the performance test, and use it only when needed. You are charged only for the hours the environment is used.

• Simple integration from RAID to the AWS Cloud for analytics and machine learning – The RAID platform offers rich product and service management capabilities, which can be integrated with AWS Cloud analytics for use cases such as subscriber, customer, and usage analytics. These can then be used for various loyalty and retention programs, leveraging machine learning models on the AWS Cloud with services such as Amazon SageMaker.

• No end of life for hardware or platform – All hardware platforms have end-of-life dates, at which point the hardware is no longer supported and you are forced to purchase new hardware again. On AWS, you only need to upgrade your platform instances to new AWS instance types (via a single click), without incurring any upfront hardware cost.
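As an illustration of the non-production cost-savings item above, the following sketch (not part of the RAID product) stops every EC2 instance tagged Environment=dev; it could be run at the end of the working day from a scheduled job, for example an Amazon EventBridge rule invoking AWS Lambda. The tag key and value and the Region are assumptions for the example, written with the AWS SDK for Python (boto3).

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Find running instances tagged as development resources.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]

# Stop them; a matching job can start them again before working hours.
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print("Stopping:", instance_ids)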
Conclusion

RAID is the platform developed by WeDo Technologies for its Risk Management Solution. It provides out-of-the-box capabilities to monitor the CSP's revenue and cost chains, as well as to detect fraud threats, supporting the organization's drive for operational efficiency. RAID also includes advanced analytics capabilities, implementing machine learning, predictive analysis, and unsupervised models to capture suspicious activities that would not otherwise be possible to detect using traditional supervised monitoring.

RAID is AWS Cloud ready and has many success cases in which CSPs leverage the benefits AWS offers for applications deployed in the cloud, including security and compliance, making it possible to handle the sensitive customer information required to operate the Risk Management practice.

Contributors

The following individuals and organizations contributed to this document:

• Nuno Miguel Aguiar, Team Lead – Professional Services, WeDo Technologies
• Andre Thomaz, Engagement Manager – Business Consulting, WeDo Technologies
• Robin Harwani, Strategic Partner Solutions Lead – Telecoms, Amazon Web Services

About WeDo Technologies

Founded in 2001, WeDo Technologies is the market leader in Revenue Assurance and Fraud Management software solutions for Telecom, Media, and Technology organizations worldwide. WeDo Technologies provides software and expert consultancy across more than 105 countries, through a network of more than 600 highly skilled professionals present in the US, Europe, Asia Pacific, the Middle East, Africa, and Central and South America. WeDo Technologies' software analyzes large quantities of data, allowing organizations to monitor, control, manage, and optimize processes, ensuring revenue protection and risk mitigation. With over 180 customers, including some of the world's leading blue-chip companies, WeDo Technologies has long been recognized as a constant innovator in assuring the success of its customers along a journey of continuous transformation. For more information, please visit http://www.wedotechnologies.com/

Notes

1 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
2 https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
3 https://aws.amazon.com/what-is-cloud-computing/
4 https://d0.awsstatic.com/whitepapers/Security/Intro_to_AWS_Security.pdf
5 https://aws.amazon.com/security/
6 https://aws.amazon.com/compliance/
|
General
|
consultant
|
Best Practices
|
WordPress_Best_Practices_on_AWS
|
Best Practices for WordPress on AWS Reference architecture for scalable WordPress powered websites First Published December 2014 Updated October 19 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Simple deployment 1 Considerations 1 Available approaches 1 Amazon Lightsail 2 Improving performance and cost efficiency 4 Accelerating content delivery 4 Database caching 7 Bytecode caching 7 Elastic deployment 8 Reference architecture 8 Architecture components 9 Scaling the web tier 9 Stateless web tier 11 WordPress high availability by Bitnami on AWS Quick Starts 14 Conclusion 16 Contributors 16 Document revisions 16 Appendix A: Cl oudFront configuration 17 Origins and behaviors 17 CloudFront distribution creation 17 Appendix B: Plugins installation and configuration 20 AWS for WordPress plugin 20 Static content configuration 26 Appendix C: Backup and recovery 29 Appendix D: Deploying new plugins and themes 31 Abstract This whitepaper provides system administrators with specific guidance on how to get started with WordPress on A mazon Web Services (AWS) and how to improve both the cost efficiency of the deployment and the end user experience It also outlines a reference architecture that addresses common scalability and high availability requirements Amazon Web Services Best Practices for WordPres s on AWS Page 1 Introduction WordPress is an open source blogging tool and content management system (CMS) based on PHP and MySQL that is used to power anything from personal blogs to high traffic websites When the first version of WordPress was released in 2003 it was not built with modern elastic and scalable cloud based infrastructures in mind Through the work of the WordPress community and the release of various WordPress modules the capabilities of this CMS solution are constantly expanding Today it is possible to build a WordPress architecture that takes advantage of many of the benefits of the AWS Cloud Simple deployment For low traffic blogs or websites without strict high availability requirements a simple deployment of a single serve r might be suitable This deployment isn’t the most resilient or scalable architecture but it is the quickest and most economical way to get your website up and running Considerations This discussion starts with a single web server deployment There may be occasions when you outgrow it for example: • The virtual machine that your WordPress website is deployed on is a single point of failure A problem with this instance cause s a loss of service for your website • Scaling resources to improve performance can only be achieved by “vertical scaling ;” that is by increasing the size of the virtual machine running your WordPress website Available approaches AWS has a number of different options for provisioning virtual machines There 
are three main ways to host your own WordPress website on AWS: • Amazon Lightsail • Amazon Elastic Compute Cloud (Amazon EC2) • AWS Marketplace Amazon Web Services Best Practices for WordPres s on AWS Page 2 Amazon Lightsail is a service that enable s you to quickly launch a virtual private server (a Ligh tsail instance) to host a WordPress website Lightsail is the easiest way to get started if you don’t need highly configurable instance types or access to advanced networking features Amazon EC2 is a web service that provides resizable compute capacity so you can launch a virtual server within minutes Amazon EC2 provides more configuration and management options than Lightsail which is desirable in more advanced architectures You have administrative access to your EC2 instances and can install any software packages you choose including WordPress AWS Marketplace is an online store where you can find bu y and quickly deploy software that runs on AWS You can use oneclick deployment to launch preconfigured WordPress images directly to Amazon EC2 in your own AWS account in just a few minutes There are a number of AWS Marketplace vendors offering ready torun WordPress instances This whitepaper cover s the Lightsail option as the recommended implementation for a single server WordPress website Amazon Lightsail Lightsail is the easiest way to get started on AWS for developers small businesses students and other users who need a simple virtual private server (VPS) solution The service abstracts many of the more complex elements of infrastructure management away from the user It is therefore an ideal starting point if you have less infrastructure experience or when you need to focus on running your website and a simplified product is sufficient for your needs With Amazon Lightsail you can choose Windows or Linux/Unix operating systems and popular web applications including WordPr ess and deploy these with a single click from preconfigured templates As your needs grow you have the ability to smoothly step outside of the initial boundaries and connect to additional AWS database object storage caching and content distribution se rvices Selecting an Amazon Lightsail pricing plan A Lightsail plan defines the monthly cost of the Lightsail resources you use to host your WordPress website There are a number of plans available to co ver a variety of use Amazon Web Services Best Practices for WordPres s on AWS Page 3 cases with varying levels of CPU resource memory solid state drive (SSD) storage and data transfer If your website is complex you may need a larger instance with more resources You can achieve this by migrating your server to a larger plan using the web console or as described in the Amazon Lightsail CLI documentation Installing WordPress Lightsail provides templates for commonly used applications such as WordPress This template is a great starting point for running your own WordPress website as it comes preinstalled with most of the software you need You can install additional software or customize the software configuration by using the in browser terminal or your own SSH client or via the WordPress administration web i nterface Amazon Lightsail has a partnership with GoDaddy Pro Sites product to help WordPress customers easily manage their instances for free Lightsail WordPress virtual servers are preconfigured and optimized for fast performance and security making it easy to get your WordPress site up and running in no time Customers running multiple WordPress instances find it challenging 
and time consuming to update maintain and manage all of their sites With this integration you can easily manage your multiple WordPress instances in minutes with only a few clicks For more information about managing WordPress on Lightsail refer to Gettin g started using WordPress from your Amazon Lightsail instance Once you are finished customizing your WordPress website AWS recommend s that you take a snapshot of your instance A snapshot is a way to create a backup image of your Lightsail instance It is a copy of the system disk and also stores the original machine configuration (that is memory CPU disk size and data transfe r rate) Snapshots can be used to revert to a known good configuration after a bad deployment or upgrade This snapshot enable s you to recover your server if needed but also to launch new instances with the same customizations Recovering from failure A single web server is a single point of failure so you must ensure that your website data is backed up The snapshot mechanism described earlier can also be used for this purpose To recover from failure you can restore a new instance from your m ost recent Amazon Web Services Best Practices for WordPres s on AWS Page 4 snapshot To reduce the amount of data that could be lost during a restore your snapshots must be as recent as possible To minimize the potential for data loss ensure that snapshots are taken on a regular basis You can schedule automatic sna pshots of your Lightsail Linux/Unix instances For instructions refer to Enabling or disabling automatic snapshots for instances or disks in Amazon Lightsail AWS recommend s that you use a static IP —a fixed public IP address that is dedicated to your Lightsail account If you need to replace your instance with another one you can reassign the static IP to the new instance In this way you don’t have to reconfigure any external systems (such as DNS records) to point to a new IP address every time you want to replace your instance Improving performance and cost efficiency You may eventually outgrow your single server deployment In this case you may need to consider options for improving your website’s performance Before migrating to a multi server scalable deployment (discuss ed later in this white paper ) there are a number of performance and cost efficiencies you can apply These are good practices that you should follow anyway even if you do move to a multi server architecture The following sections introduce a number of options that can improve aspects of your WordPress website’s performance and scalability Some can be applied to a single server deployment whereas others take advantage of the scalability of multiple servers Many of those modifications require the use of one or more WordPress plugins Although various options are available W3 Total Cache is a popular choice that combines many of those modifications in a single plugin Accelerating content delivery Any WordPress website needs to deliver a mix of static and dynamic content Static content includes images JavaScript files or style sheets Dynamic content includes anything generated on the server side using the WordPress PHP code ; for example elements of your site that are generated from the database or personalized to each viewer An important aspect of the end user experience is the network latency involved when delivering the previous content to users around the world Accelerating the delivery of the previous content improve s the end user experience especially users geographically Amazon Web Services Best 
Practices for WordPres s on AWS Page 5 spread across the globe This can be achieved with a Content Delivery Network (CDN) such as Amazon CloudFront Amazon CloudFront is a web service that provi des an easy and cost effective way to distribute content with low latency and high data transfer speeds through multiple edge locations across the globe Viewer requests are automatically routed to a suitable CloudFront edge location to lower the latency If the content can be cached (for a few seconds minutes or even days) and is already stored in a particular edge location CloudFront delivers it immediately If the content should not be cached has expired or isn’t currently in that edge location CloudFront retrieves content from one or more sources of truth referred to as the origin(s) (in this case the Lightsail instance) in the CloudFront configuration This retrieval takes place over optimized network connections which work to speed up the delivery of content on your website Apart from improving the end user experience the model discussed also reduces the load on your origin servers and has the potential to create s ignificant cost savings Static content offload This includes CSS JavaScript and image files —either those that are part of your WordPress themes or those media files uploaded by the content administrators All these files can be stored in Amazon Simple S torage Service (Amazon S3) using a plugin such as W3 Total Cache and served to users in a scalable and highly available manner Amazon S3 offers a highly scalable reliable and low latency data storage infrastruc ture at low cost which is accessible via REST APIs Amazon S3 redundantly stores your objects not only on multiple devices but also across multiple facilities in an AWS Region providing exceptionally high levels of durability This has the positive sid e effect of offloading this workload from your Lightsail instance and letting it focus on dynamic content generation This reduces the load on the server and is an important step towards creating a stateless architecture (a prerequisite before implementing automatic scaling ) You can subsequently configure Amazon S3 as an origin for CloudFront to improve delivery of those static assets to users around the world Although WordPress isn’t integrated with Amazon S3 and CloudFront out of the box a variety of plugins add support for these services (for example W3 Total Cache) Amazon Web Services Best Practices for WordPres s on AWS Page 6 Dynamic content Dynamic content includes the output of server side WordPress PHP scripts Dynamic content can also be served via CloudFront by configuring the WordPress websit e as an origin Since dynamic content include s personalized content you need to configure CloudFront to forward certain HTTP cookies and HTTP headers as part of a request to your custom origin server CloudFront uses the forwarded cookie values as part of the key that identifies a unique object in its cache To ensure that you maximize the caching efficiency configure CloudFront to forward only those HTTP cookies and HTTP headers that actually vary the content (not cookies that are only used on the client side or by thirdparty applications for example for web analytics) Whole website delivery via Amazon CloudFront The preceding figure includes two origins: one for static content and another for dynamic content For implementation details refer to Appendix A: CloudFront configuration and Appendix B: Plugins insta llation and configuration CloudFront uses standard cache control headers to 
identify if and for how long it should cache specific HTTP responses. The same cache control headers are also used by web browsers to decide when and for how long to cache content locally, for a more optimal end-user experience (for example, a CSS file that has already been downloaded will not be re-downloaded every time a returning visitor views a page). You can configure cache control headers at the web server level (for example, via .htaccess files or modifications of the httpd.conf file) or install a WordPress plugin (for example, W3 Total Cache) to dictate how those headers are set for both static and dynamic content.

Database caching

Database caching can significantly reduce latency and increase throughput for read-heavy application workloads like WordPress. Application performance is improved by storing frequently accessed pieces of data in memory for low-latency access (for example, the results of input/output (I/O) intensive database queries). When a large percentage of the queries is served from the cache, the number of queries that need to hit the database is reduced, resulting in a lower cost associated with running the database.

Although WordPress has limited caching capabilities out of the box, a variety of plugins support integration with Memcached, a widely adopted memory object caching system; the W3 Total Cache plugin is a good example. In the simplest scenarios, you install Memcached on your web server and capture the result as a new snapshot. In this case, you are responsible for the administrative tasks associated with running a cache. Another option is to take advantage of a managed service such as Amazon ElastiCache and avoid that operational burden. ElastiCache makes it easy to deploy, operate, and scale a distributed in-memory cache in the cloud. You can find information about how to connect to your ElastiCache cluster nodes in the Amazon ElastiCache documentation. If you are using Lightsail and wish to access an ElastiCache cluster in your AWS account privately, you can do so by using VPC peering. For instructions to enable VPC peering, refer to Set up Amazon VPC peering to work with AWS resources outside of Amazon Lightsail.

Bytecode caching

Each time a PHP script is run, it gets parsed and compiled. By using a PHP bytecode cache, the output of the PHP compilation is stored in RAM, so the same script doesn't have to be compiled again and again. This reduces the overhead related to running PHP scripts, resulting in better performance and lower CPU requirements. A bytecode cache can be installed on any Lightsail instance that hosts WordPress and can greatly reduce its load. For PHP 5.5 and later, AWS recommends the use of OPcache, an extension bundled with that PHP version. Note that OPcache is enabled by default in the Bitnami WordPress Lightsail template, so no further action is required.

Elastic deployment

There are many scenarios where a single-server deployment may not be sufficient for your website. In these situations, you need a multi-server, scalable architecture.

Reference architecture

The Hosting WordPress on AWS reference architecture, available on GitHub, outlines best practices for deploying WordPress on AWS and includes a set of AWS CloudFormation templates to get you up and running quickly. The following architecture is based on that reference architecture. The rest of this section reviews the reasons behind the architectural choices. Note that the base AMI in the GitHub repository was changed from
Amazon Linux1 to Amazon Linux2 in July 2021 However deployment templates at S3 were not changed yet It is recommended to use templates at GitHub if there is an issue to deploy the reference architecture with templates at S3 Reference architecture for hosting WordPress on AWS Amazon Web Services Best Practices for WordPres s on AWS Page 9 Architecture components The preceding reference architecture illustrates a complete best practice deployment for a WordPress website on AWS • It starts with edge caching in Amazon CloudFront (1) to cache content close to end users for faster delivery • CloudFront pulls static content from an S3 bucket (2) and dynamic content from an Application Load Balancer (4) in front of the web instances • The web instances run in an Auto Scaling group of Amazon EC2 instances (6) • An ElastiCache cluster (7) caches frequently queried data to speed up responses • An Amazon Aurora MySQL instance (8) hosts the WordPress database • The WordPress EC2 instances access s hared WordPress data on an Amazon EFS file system via an EFS Mount Target (9) in each Availability Zone • An Internet Gateway (3) enable s communication between resources in your VPC and the internet • NAT Gateways (5) in each Availability Zone enable EC2 ins tances in private subnets (App and Data) to access the internet Within the Amazon VPC there exist two types of subnets: public ( Public Subnet ) and private ( App Subnet and Data Subnet ) Resources deployed into the public subnets will receive a public IP address and will be publicly visible on the internet The Application Load Balancer (4) and a bastion host for administration are deployed here Resources deployed into the private subnets receive only a pri vate IP address and are not publicly visible on the internet improving the security of those resources The WordPress web server instances (6) ElastiCache cluster instances (7) Aurora MySQL database instances (8) and EFS Mount Targets (9) are all deplo yed in private subnets The remainder of this section covers each of these considerations in more detail Scaling the web tier To evolve your single server architecture into a multi server scalable architecture you must use five key components: Amazon Web Services Best Practices for WordPres s on AWS Page 10 • Amazon EC2 instances • Amazon Machine Images (AMIs) • Load balancers • Automatic scaling • Health checks AWS provides a wide variety of EC2 instance types so you can choose the best server configuration for both performance and cost Generally speaking the compute optimiz ed (for example C4) instance type may be a good choice for a WordPress web server You can deploy your instances across multiple Availability Zones within a n AWS Region to increase the reliability of the overall architecture Because you have complete con trol of your EC2 instance you can log in with root access to install and configure all of the software components required to run a WordPress website After you are done you can save that configuration as an AMI which you can use to launch new instances with all the customizations that you've made To distribute end user requests to multiple web server nodes you need a load balancing solution AWS provides this capability through Elastic Load Balancing a highly available service that distributes traffic to multiple EC2 instances Because your website is serving content to your users via HTTP or HTTPS we recommend that you make use of the Application Load Balancer an application layer load balancer with content routing and the ability to 
run multiple WordPress websites on different domains if required Elastic Load Balancing supports distribution of requests across multiple Availability Zones within an AWS Region You can also configure a health check so that the Application Load Balancer automatically stops sending traffic to individual instances that have failed (for example due to a hardware problem or software crash) AWS recommend s using the WordPress admin login page (/wploginphp ) for the health check because this page confirm s both that the web server is running and that the web server is confi gured to serve PHP files correctly You may choose to build a custom health check page that checks other dependent resources such as database and cache resources For more information refer to Health checks for your target groups in the Application Load Balancer Guide Amazon Web Services Best Practices for WordPres s on AWS Page 11 Elasticity is a key characteristic of the AWS Cloud You can launch more compute capacity (for example web servers) when yo u need it and run less when you don't AWS Auto Scaling is an AWS service that helps you automate this provisioning to scale your Amazon EC2 capacity up or down according to conditions you define with no need for manual intervention You can configure AWS Auto Scaling so that the number of EC2 instances you’re using increases seamlessly during demand spikes to maintain performance and decreases automatically when traffic diminishes so as to minimize costs Elastic Load Balancing also supports dynamic addition and removal of Amazon EC2 hosts from the load balancing rotation Elastic Load Balancing itself also dynamically increases and decreases the load balancing capacity to adjust to traffic demands with no manual intervention Stateless web tier To take advantage of multiple web servers in an automatic scaling configuration your web tier must be stateless A stateless application is one that needs no knowledge of previous interactions and stores no session information In the case of WordPress this means that all end users receive the same response regardless of which web server processed their request A stateless application can scale horizontally since any request can be serviced by any of the a vailable compute resources (web server instances) When that capacity is no longer required any individual resource can be safely terminated (after running tasks have been drained) Those resources do not need to be aware of the presence of their peers —all that is required is a way to distribute the workload to them When it comes to user session data storage the WordPress core is completely stateless because it relies on cookies that are stored in the client’s web browser Session storage isn’t a concern unless you have installed any custom code (for example a WordPress plugin) that instead relies on native PHP sessions However WordPress was originally designed to run on a single server As a result it stores some data on the server’s local file system When running WordPress in a multi server configuration this creates a problem because there is inconsistency across web servers For example if a user uploads a new image it is only stored on one of the servers This demonstrates why we need to improve the default WordPress running configuration to move important data to shared storage The best practice architecture Amazon Web Services Best Practices for WordPres s on AWS Page 12 has a database as a separate layer outside the web server and makes use of shared storage to store user uploads themes 
and plugin s Shared storage (Amazon S3 and Amazon EFS) By default WordPress stores user uploads on the local file system and so isn’t stateless Therefore you need to move the WordPress installation and all user customizations (such as configuration plugins them es and user generated uploads) into a shared data platform to help reduce load on the web servers and to make the web tier stateless Amazon Elastic File System (Amazon EFS) provides scalable network fil e systems for use with EC2 instances Amazon EFS file systems are distributed across an unconstrained number of storage servers enabling file systems to grow elastically and enabling massively parallel access from EC2 instances The distributed design of Amazon EFS avoids the bottlenecks and constraints inherent to traditional file servers By moving the entire WordPress installation directory onto an EFS file system and mounting it into each of your EC2 instances when they boot your WordPress site and all its data is automatically stored on a distributed file system that isn’t dependent on any one EC2 instance making your web tier completely stateless The benefit of this architecture is that you don’t need to install plugins and themes on each new insta nce launch and you can significantly speed up the installation and recovery of WordPress instances It is also easier to deploy changes to plugins and themes in WordPress as outlined in the Deployment considerations section of this document To ensure optimal performance of your website when running from an EFS file system check the recommended configuration settings for Amazon EFS and OPcache on the AWS Reference Architecture for WordPress You also have the option to offload all static assets such as image CSS and JavaScript files to an S3 bucket with CloudFront caching in front The mechanism for doing this in a multi server architecture is exactly the same as for a single server architecture as discussed in the Static content section of this whitepaper The benefits are the same as in the single server architecture —you can offload the work associated with serving your static assets to Amazon S3 and CloudFront enabling your web servers to focus on generating dynamic content onl y and serve more user requests per web server Amazon Web Services Best Practices for WordPres s on AWS Page 13 Data tier (Amazon Aurora and Amazon ElastiCache) With the WordPress installation stored on a distributed scalable shared network file system and static assets being served from Amazon S3 you can focus your attention on the remaining stateful component: the database As with the storage tier the database should not be reliant on any single server so it cannot be hosted on one of the web servers Instead host the WordPress database on Amazon Aurora Amazon Aurora is a MySQL and PostgreSQL compatible relational database built for the cloud that combines the performance and availability of high end commercial databases with the simplicity and cost effectivenes s of open source databases Aurora MySQL increases MySQL performance and availability by tightly integrating the database engine with a purpose built distributed storage system backed by SSD It is faulttolerant and self healing replicates six copies of your data across three Availability Zones is designed for greater than 9999% availability and nearly continuously backs up your data in Amazon S3 Amazon Aurora is designed to automatically detect database crashes and restart without the need for crash recovery or to rebuild the database cache Amazon Aurora 
provides a number of instance types to suit different application profiles including memory optimized and burstable instances To improve the performance of your database you can select a large instance type to provide more CPU and memory resources Amazon Aurora automatically handles failover between the primary instance and Aurora Replicas so that your applications can resume database operations as quickly as possible without manual administrative intervention Failover typically takes less than 30 seconds After you have created at least one Aurora Replica connect to your primary instance using the cluster endpoint to enable your application to automatically fail over in the event the primary instance fails You can create up to 15 low latency read replica s across three Availability Zones As your database scales your database cache will also need to scale As discussed previously in the Database caching section of this document ElastiCache has features to scale the cache across multiple nodes in an ElastiCache cluster and across multiple Availability Zones in a Region for improved availability As you scale your ElastiCache cluster ensure that you configure your caching plugin to connect using the configuration endpoint so that WordPress can use new cluster nodes as they are added and stop Amazon Web Services Best Practices for WordPres s on AWS Page 14 using old cluster nodes as they are removed You must also set up your web servers to use the ElastiCache Cluster Client for PHP and update your AMI to store this change WordPress high availability by Bitnami on AWS Quick Start s Quick Starts are built by AWS solutions architects and partners to help you deploy popular technologies on AWS based on AWS best practices for security and high availability These accelerators reduce hundreds of manual procedures into just a few steps so you can build your production environment quickly and start using it immediately Each Quick Start includes AWS CloudFormation templates that automate the deployment and a guide that discusses the architecture and provides step bystep deployment instructions WordPress High Availability by Bitnami on AWS Quick Starts sets up the following configurable environment on AWS: • A highly available architecture that spans two Availability Zones* • A virtual private cloud (VPC) configured with publ ic and private subnets according to AWS best practices This provides the network infrastructure for your deployment* • An internet gateway to provide access to the internet This gateway is used by the bastion hosts to send and receive traffic* • In the pub lic subnets managed NAT gateways to allow outbound internet access for resources in the private subnets* • In the public subnets Linux bastion hosts in an Auto Scaling group to allow inbound Secure Shell (SSH) access to EC2 instances in public and private subnets* • Elastic Load Balancing to distribute HTTP and HTTPS requests across multiple WordPress instances • In the private subnets EC2 instances that host the WordPress application on Apache These instances are provisioned in an Auto Scaling group to en sure high availability • In the private subnets Amazon Aurora DB instances administered by Amazon Relational Database Service (Amazon RDS) Amazon Web Services Best Practices for WordPres s on AWS Page 15 • In the private subnets Amazon Elastic File System (Amazon EFS) to share assets (such as plugins themes and images ) across WordPress instances • In the private subnets Amazon ElastiCache for Memcached nodes for caching database 
queries * The template that deploys the Quick Start into an existing VPC skips the tasks marked by asterisks and prompts you for your existing VPC configuration WordPress high availability architecture by Bitnami A detailed description of deploying WordPress High Availability by Bitnami on AWS is beyond the scope of this document For configuration and options refer to WordPress High Availability by Bitnami on AWS Amazon Web Services Best Practices for WordPres s on AWS Page 16 Conclusion AWS presents many architecture options for running WordPress The simplest option is a single server installatio n for low traffic websites For more advanced websites site administrators can add several other options each one representing an incremental improvement in terms of availability and scalability Administrators can select the features that most closely m atch their requirements and their budget Contributors Contributors to this document include : • Paul Lewis Solutions Architect Amazon Web Services • Ronan Guilfoyle Solutions Architect Amazon Web Services • Andreas Chatzakis Solutions Architect Manager Ama zon Web Services • Jibril Touzi Technical Account Manager Amazon Web Services • Hakmin Kim Migration Partner Solutions Architect Amazon Web Services Document revisions Date Description October 19 2021 Updated to modify Reference Architecture and AWS for WordPress plugin October 2019 Updated to include new deployment approaches and AWS for WordPress plugin February 2018 Updated to clarify Amazon Aurora product messaging December 2017 Updated to include AWS services launched since first publication December 2014 First publication Amazon Web Services Best Practices for WordPres s on AWS Page 17 Appendix A: CloudFront configuration To get optimal performance and efficiency when using Amazon CloudFront with your WordPress website it’s important to configure the website correctly for the different types of content being served Origins and behaviors An origin is a location where CloudFront sends requests for content that it distributes through the edge locations Depending on your implemen tation you can have one or two origins One for dynamic content (the Lightsail instance in the single server deployment option or the Application Load Balancer in the elastic dep loyment option ) using a custom origin You may have a second origin to direct CloudFront to for your static content In the preceding reference architecture this is an S3 bucket When you use Amazon S3 as an orig in for your distribution you need to use a bucket policy to make the content publicly accessible Behaviors enable you to set rules that govern how CloudFront caches your content and in turn determine how effective the cache is Behaviors enable you to control the protocol and HTTP methods your website is accessible by They also enable you to control whether to pass HTTP headers cookies or query strings to your backend (and if so which ones) Behaviors apply to specific URL path patte rns CloudFront distribution creation Create a CloudFront web distribution by following the Distribution the default Origin and Behavior automatically created will be used for dynamic content Create four additional behaviors to further customize the way both static and dynamic requests are treated The following table summarizes the configuration properties for the five behaviors You can also skip this manual configuration and use the AWS for WordPress plugin covered in Appendix B: Plugins Installation and Configuration which is the easiest way to configure 
CloudFront to accelerate your WordPress site Amazon Web Services Best Practices for WordPres s on AWS Page 18 Table 1: Summary of configuration propert ies for CloudFront behaviors Property Static Dynamic (admin) Dynamic (front end) Paths (Behaviors) wp content/* wp includes/* wpadmin/* wploginphp default (*) Protocols HTTP and HTTPS Redirect to HTTPS HTTP and HTTPS HTTP methods GET HEAD ALL ALL HTTP headers NONE ALL Host CloudFront Forwarded Proto CloudFront IsMobile Viewer CloudFront IsTablet Viewer CloudFront IsDesktop Viewer Cookies NONE ALL comment_* wordpress_* wpsettings* Query Strings YES (invalidation) YES YES For the default behavior AWS recommend s the following configuration: • Allow the Origin Protocol Policy to Match Viewer so that if viewers connect to CloudFront using HTTPS CloudFront connect s to your origin using HTTPS as well achieving end toend encryption Note that this requires you install a trusted SSL certificate on the load balancer For details refer to Requiring HTTPS for Communication Between CloudFront and Your Custom Origin • Allow all HTTP methods since the dynamic portions of the website require both GET and POST requests (for example to support POST for the comment submission forms) Amazon Web Services Best Practices for WordPres s on AWS Page 19 • Forward only the cookies that vary the WordPress output for example wordpress_* wpsettings* and comment_* You must extend that list if you have installed any plugins that depend on other cookies not in the list • Forward only the HTTP headers that affect the output of WordPress for example Host CloudFront Forwarded Proto CloudFront isDesktop Viewer CloudFront isMobileViewer and CloudFront isTablet Viewer : o Host allows multiple WordPress websites to be hosted on the same origin o CloudFront Forwarded Proto allows different versions of pages to be cached depending on whether they are accessed via HTTP or HTTPS o CloudFront isDesktopViewer CloudFront isMobileViewer and CloudFront isTabletViewer allow you to customize the output of your themes based on the end user’s device type • Forward all the query strings to cache based on their values because WordPress relies on these they can also be used to invalidate cached objects If you w ant to serve your website under a custom domain name (not *cloudfrontnet ) enter the appropriate URIs under Alternate Domain Names in the Distribution Settings In this case you also need an SSL certificate for your custom domain name You can request SSL certificates via the AWS Certificate Manager and configure them against a CloudFront distribution Now cr eate two more cache behaviors for dynamic content: one for the login page (path pattern: wploginphp ) and one for the admin dashboard (path pattern: wp admin/* ) These two behaviors have the exact same settings as follows: • Enforce a Viewer Protocol Policy of HTTPS Only • Allow all HTTP methods • Cache based on all HTTP headers • Forward all cookies • Forward and cache based on all query strings The reason behind this configuration is that this section of the website is highly personalized and typically has just a few users so caching efficiency isn’t a primary concern The focus is to keep the configuration simple to ensure maximum compatibility with any installed plugins by passing all cookies and headers to the origin Amazon Web Services Best Practices for WordPres s on AWS Page 20 The AWS for WordPress plugin covered in Appendix B automatically creates a CloudFront distribution that meets the preceding configuration By default 
WordPress stores everything locally on the web server which is block storage ( Amazon EBS) for single server deployment and file sto rage ( Amazon EFS) for elastic deployment In addition to reducing storage and data transfer costs moving static asset s to Amazon S3 offers scalability data availability security and performance There are several plugins that make it easy to move static content to Amazon S3; one of them is W3 Total Cache also covered in Appendix B Appendix B: Plugins installation and configuration AWS for WordPress plugin The AWS for WordPress plugin is the only WordPress plugin written and actively maintained by AWS It enable s customers to easily configure Amazon CloudFront and AWS Certificate Manager (ACM) to WordP ress websites for enhanced performance and security The plugin uses Amazon Machine Learning (ML) services to translate content into one or more languages produce s audio versions of each translation and read s WordPress websites through Amazon Alexa devices The plugin is installed already in WordPress High Availabili ty by Bitnami on AWS Quick Start Plugin installation and configuration To install the plugin : 1 To use the AWS for WordPress plugin you must create an IAM user for the plugin An IAM user is a person or application under an AWS account that has permission to make API calls to AWS services Amazon Web Services Best Practices for WordPres s on AWS Page 21 2 You need an AWS Identity and Access Management (IAM) role or an IAM user to control authentication and authorization for your AWS account To prevent unauthorized users from g aining these permissions protect the IAM user's credentials Treat the secret access key like a password; store it in a safe place and don't share it with anyone Like a password rotate the access key periodically If the secret access key is accidentally leaked delete it immediately Then you can create a new access key to use with the AWS for WordPress plugin 3 In the Plugins menu of the WordPress admin panel search AWS for WordPress and choose Install Now 4 If the plugin installation is not working there may be a user permission problem Connect to WordPress web server and complete the following instructions to solve the issue a Open The wpconfigphp file in the WordPress install directory and write the following code a t the end of the wpconfigphp file: define('FS_METHOD''direct'); b Launch the following command to give writing permission: Amazon Web Services Best Practices for WordPres s on AWS Page 22 chmod 777 <WordPress install directory>/wp content Warning : Keeping the writing permission as 777 is risky If the permission is kept as 777 anyone can edit or delete this folder Change the writing permission into 755 or below after completing the plugin work c If the reference architecture is used the WordPress install directory is `/var/www/wordpress/<site directory> ` A detailed description of all AWS for WordPress settings is beyond the scope of this document For configuration and options refer to Getting started with the AWS for WordPress plugin Amazon CloudFront and AWS Certificate Manager To set up CloudFront and AWS Certificate Manager : 1 On the plugin menu choose CloudFront and enter the following parameters : o Origin domain name: DNS domain of the HTTP origin server where CloudFront get s your website's content (such as examplecom ) o Alternate domain name (CNAME): domain name that your visitors use for the accelerated website experience AWS recommend s using 'www' in front of the domain (such as wwwexamplecom ) 2 
Choose Initiate Setup to start the configuration The plugin automatically request s an SSL certificate for the CNAME via ACM once you valida te the ACM token by updating the DNS records with the CNAME entries the plugin will create a CloudFront distribution that meets the best practices defined in Appendix A Note: AWS for WordPress plugin requires HTTPS for communication between CloudFront and your custom origin Make sure your origin has an SSL certificate valid for the Origin domain name For more informat ion refer to Using HTTPS with CloudFront Amazon Web Services Best Practices for WordPres s on AWS Page 23 Translate and vocalize your content The AWS for WordPress plugin enable s you to autom atically translate text in different languages and convert the written content into multilingual audio formats These features are powered by Amazon Machine Learning services Amazon Polly is a service that tu rns text into lifelike speech With dozens of voices across a variety of languages you can select the ideal voice and build engaging speech enabled applications that work in many different countries Use the plugin to create audio files in any of the voic es and languages supported by Amazon Polly Your visitors can stream the audio at their convenience using inline audio players and mobile applications By default the plugin stores new audio files on your web server You can choose to store the files on A mazon S3 or on Amazon CloudFront Users have the same listening experience regardless of where you store your audio files Only the broadcast location changes: • For audio files stored on the WordPress server files are broadcast directly from the server • For files stored in an S3 bucket files are broadcast from the bucket • If you use CloudFront the files are stored on Amazon S3 and are broadcast with CloudFront Broadcast location Amazon Web Services Best Practices for WordPres s on AWS Page 24 Amazon Transla te is a machine translation service that delivers fast high quality and affordable language translation Providing multilingual content represents a great opportunity for site owners Although English is the dominant language of the web native English s peakers are a mere 26% of the total online audience By offering written and audio versions of your WordPress content in multiple languages you can meet the needs of a larger international audience You can configure the plugin to do the following: • Automa tically translate into different languages and create audio recordings of each translation for new content upon publication or choose to translate and create recordings for individual posts • Translate into different languages and create audio recordings fo r each translation of your archived content • Use the Amazon Pollycast RSS feed to podcast audio content Overview of content translation and text to speech Amazon Web Services Best Practices for WordPres s on AWS Page 25 Podcasting with Amazon Pollycast With Amazon Pollycast feeds your visitors can listen to your audio content using standard podcast applications RSS 20 compliant Pollycast feeds provide the XML data needed to aggregate podcasts by popular mobile podcast applications such as iTunes and podcast directories When you install the AWS for WordPress plugin you will find option to enable generation of XML feed in the Podcast configuration tab There you will also find option to configure multiple optional properties After enabling the functionality you will r eceive a link do the feed Reading your content through Amazon Alexa devices 
You can extend WordPress websites and blogs through Alexa devices This opens new possibilities for the creators and authors of websites to reach an even broader audience It also makes it easier for people to listen to their favorite blogs by just asking Alexa to read them To expose the WordPress website to Alexa you must enable : • AWS for WordPress plugin • The text tospeech and Amazon Pollycast functionalities Th ese functionali ties generate an RSS feed on your WordPress site which is consumed by Amazon Alexa • Amazon S3 as the default storage for your files in text tospeech it’s important that your website uses a secure HTTPS connection to expose its feed to Alexa The followin g diagram presents the flow of interactions and components that are required to expose your website through Alexa Amazon Web Services Best Practices for WordPres s on AWS Page 26 Flow of interactions required to expose WordPress websites through Alexa 1 The user invokes a new Alexa skill for example by saying: “Alexa ask Demo Blog for the latest update ” The skill itself is created using one of the Alexa Skill Blueprints This enable s you to expose your skill through Alexa devices even if you don’t have deep technical knowledge 2 The Alexa skill analyzes the call and RSS feed that was generated by the AWS for WordPress plugin and then returns the link to the audio version of the latest article 3 Based on the link provided by the feed Alexa reads the article by playing the audio file saved on Amazon S3 Refer to the plugin page on WordPress marketplace for a detailed step bystep guide for installing and configuring the plugin and its functiona lities Static content configuration By default WordPress stores everything locally on the web server which is block storage ( Amazon EBS) for single server deployment and file storage ( Amazon EFS) for elastic deployment In addition to reducing storage and data transfer costs moving static asset to Amazon S3 offers scalability data availability security and performance In this example the W3 Total Cache (W3TC) plugin is used to store static assets on Amazon S3 However there are other plugins avail able with similar capabilities If you want to use an alternative you can adjust the following steps accordingly The steps only refer to features or settings relevant to this example A detailed description of all settings is beyond the scope of this docu ment Refer to the W3 Total Cache plugin page at wordpressorg for more information Amazon Web Services Best Practices for WordPres s on AWS Page 27 IAM user creation You need to create an IAM user for the WordPress plugin to store static assets in Amazon S3 For instructions refer to Creating an IAM User in Your AWS Account Note: IAM roles provide a better way of managing access to AWS resources but at the time of writing the W3 Total Cache plugin does not support IAM roles Take a n ote of the user security credentials and store them in a secure manner – you need these credentials later Amazon S3 bucket creation 1 First create an Amazon S3 bucket in the AWS Region of your choice For instructions refer to Creating a bucket Enable static website hosting for the bucket by following the guide for Configu ring a static website on Amazon S3 2 Create an IAM policy to provide the IAM user created previously access to the specified S3 bucket and attach the policy to the IAM user For instructions to create the following policy refer to Managing IAM Policies { "Version": "2012 1017 " "Statement": [ { "Sid": "Stmt1389783689000" "Effect": "Allow" 
"Principal": "*" "Action": [ "s3:DeleteObject" "s3:GetObject" "s3:GetObjectAcl" "s3:ListBucket" "s3:PutObject" "s3:PutObjectAcl" ] "Resource": [ "arn:aws:s3:::wp demo" "arn:aws:s3:::wp demo/*" ] } Amazon Web Services Best Practices for WordPres s on AWS Page 28 ] } 3 Install and activate the W3TC plugin from the WordPress a dmin panel 4 Browse to the General Settings section of the plugin’s configuration and ensure that both Browser Cache and CDN are enabled 5 From the dropdown list in the CDN configuration choose Origin Push: Amazon CloudFront (this option has Amazon S3 as its origin) 6 Browse to the Browser Cache section of the plugin’s configuration and enable the expires cache control and entity tag (ETag) headers 7 Also activate the Prevent caching of objects after settings change option so that a new query string is generated and appended to objects whenever any settings are changed 8 Browse to the CDN section of the plugin’s configuration and enter the security credentials of the IAM user you created earlier as well as the name of the S3 bucket 9 If you are serving your website via the CloudFront URL enter the distribution domain name in the relevant box Otherwise enter one or more CNAMEs for your custom domain name(s) 10 Finally export the media library and upload the wp includes theme files and custo m files to Amazon S3 using the W3TC plugin These upload functions are available in the General section of the CDN configuration page Static origin creation Now that the static files are stored on Amazon S3 go back to the CloudFront configuration in the CloudFront console and configure Amazon S3 as the origin for static content To do that add a second origin pointing to the S3 bucket you created for that purpose Then create two more cache behaviors one for each of the two folders (wpcontent and wpincludes ) that should use the S3 origin rather than the default origin for dynamic content Configure both in the same manner: • Serve HTTP GET requests only • Amazon S3 does not vary its output based on cookies or HTTP headers so you can improve caching efficiency by not forwarding them to the origin via CloudFront Amazon Web Services Best Practices for WordPres s on AWS Page 29 • Despite the fact that these behaviors serve only static content (which accepts no parameters) you will forward query strings to the origin This is so that you can use query strings a s version identifiers to instantly invalidate for example older CSS files when deploying new versions For more information refer to the Amazon Clou dFront Developer Guide Note: After adding the static origin behaviors to your CloudFront distribution check the order to ensure the behaviors for wpadmin/* and wploginphp have higher precedence than the behaviors for static content Otherwise you may see strange behavior when accessing your admin panel Appendix C: Backup and recovery Recovering from failure in AWS is faster and easier to do compared to traditional hosting environments For example you can launch a replacement instance in minutes in response to a hardware failure or you can make use of automated failover in many of our managed services to negate the impact of a reboot due to routine maintenance However you still need to ensure you are backing up the right data in order to successfu lly recover it To reestablish the availability of a WordPress website you must be able to recover the following components: • Operating system (OS) and services installation and configuration (Apache MySQL and so on ) • WordPress application code and 
Appendix C: Backup and recovery

Recovering from failure in AWS is faster and easier than in traditional hosting environments. For example, you can launch a replacement instance in minutes in response to a hardware failure, or you can make use of automated failover in many of our managed services to negate the impact of a reboot due to routine maintenance. However, you still need to ensure you are backing up the right data in order to successfully recover it. To reestablish the availability of a WordPress website, you must be able to recover the following components:

• Operating system (OS) and services installation and configuration (Apache, MySQL, and so on)

• WordPress application code and configuration

• WordPress themes and plugins

• Uploads (for example, media files for posts)

• Database content (posts, comments, and so on)

AWS provides a variety of methods for backing up and restoring your web application data and assets. This whitepaper previously discussed making use of Lightsail snapshots to protect all data stored on the instance's local storage. If your WordPress website runs only on the Lightsail instance, regular Lightsail snapshots should be sufficient for you to recover your WordPress website in its entirety. However, if you do restore from a snapshot, you will still lose any changes applied to your website since the last snapshot was taken.

In a multi-server deployment, you need to back up each of the components discussed earlier using different mechanisms. Each component may have a different requirement for backup frequency. For example, the OS and WordPress installation and configuration change much less frequently than user-generated content, and therefore can be backed up less frequently without losing data in the event of a recovery.

To back up the OS and services installation and configuration, as well as the WordPress application code and configuration, you can create an AMI of a properly configured EC2 instance. AMIs serve two purposes: they act as a backup of instance state and as a template when launching new instances. To back up the WordPress application code and configuration, you need to make use of AMIs and also Aurora backups.

To back up the WordPress themes and plugins installed on your website, back up the Amazon S3 bucket or the Amazon EFS file system they are stored on.

• For themes and plugins stored in an S3 bucket, you can enable Cross-Region Replication so that all objects uploaded to your primary bucket are automatically replicated to your backup bucket in another AWS Region. Cross-Region Replication requires that Versioning is enabled on both your source and destination buckets, which provides you with an additional layer of protection and enables you to revert to a previous version of any given object in your bucket.

• For themes and plugins stored on an EFS file system, you can use AWS Data Pipeline to copy data from your production EFS file system to another EFS file system, or protect the file system as outlined in the documentation page Using AWS Backup with Amazon EFS. You can also back up an EFS file system using any backup application you are already familiar with.

• To back up user uploads, follow the steps outlined earlier for backing up the WordPress themes and plugins.

• To back up database content, you need to make use of Aurora backup. Aurora backs up your cluster volume automatically and retains restore data for the length of the backup retention period. Aurora backups are nearly continuous and incremental, so you can quickly restore to any point within the backup retention period. No performance impact or interruption of database service occurs as backup data is being written. You can specify a backup retention period from 1 to 35 days. You can also create manual database snapshots, which persist until you delete them. Manual database snapshots are useful for long-term backups and archiving.
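As an illustration only, the following sketch (Python with the boto3 SDK; the cluster identifier and snapshot naming scheme are assumptions, not values from this whitepaper) shows how you might script both parts of this: extending the automated backup retention period to the 35-day maximum and taking a manual cluster snapshot for long-term archiving.

import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

CLUSTER_ID = "wordpress-aurora-cluster"  # assumption: your Aurora cluster identifier

# Extend automated backups to the 35-day maximum retention period.
rds.modify_db_cluster(
    DBClusterIdentifier=CLUSTER_ID,
    BackupRetentionPeriod=35,
    ApplyImmediately=True,
)

# Take a manual snapshot for long-term archiving; manual snapshots persist
# until you explicitly delete them.
stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H%M")
rds.create_db_cluster_snapshot(
    DBClusterSnapshotIdentifier=f"wordpress-archive-{stamp}",
    DBClusterIdentifier=CLUSTER_ID,
)

Automated backups cover point-in-time recovery within the retention window, while the manual snapshot gives you an archive that outlives it.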
Appendix D: Deploying new plugins and themes

Few websites remain static. In most cases, you will periodically add publicly available WordPress themes and plugins or upgrade to a newer WordPress version. In other cases, you will develop your own custom themes and plugins from scratch. Any time you are making a structural change to your WordPress installation, there is a certain risk of introducing unforeseen problems. At the very least, take a backup of your application code, configuration, and database before applying any significant change (such as installing a new plugin). For websites of business or other value, test those changes in a separate staging environment first. With AWS, it's easy to replicate the configuration of your production environment and run the whole deployment process in a safe manner. After you are done with your tests, you can simply tear down your test environment and stop paying for those resources.

There are some WordPress-specific considerations. Some plugins write configuration information to the wp_options database table (or introduce database schema changes), whereas others create configuration files in the WordPress installation directory. Because we have moved the database and storage to shared platforms, these changes are immediately available to all of your running instances without any further effort on your part.

When deploying new themes in WordPress, a little more effort may be required. If you are only making use of Amazon EFS to store all your WordPress installation files, then new themes will be immediately available to all running instances. However, if you are offloading static content to Amazon S3, you must copy those files to the right bucket location. Plugins like W3 Total Cache provide a way for you to manually initiate that task. Alternatively, you could automate this step as part of a build process.

Because theme assets can be cached on CloudFront and at the browser, you need a way to invalidate older versions when you deploy changes. The best way to achieve this is by including some sort of version identifier in your object URLs. This identifier can be a query string with a date-time stamp or a random string. If you use the W3 Total Cache plugin, you can update a media query string that is appended to the URLs of media files.
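If you do automate this step as part of a build process, the following sketch (Python with the boto3 SDK; the bucket name, theme path, and CloudFront domain are placeholders) shows one possible approach: upload updated theme stylesheets to S3 with a long Cache-Control lifetime and derive a short content hash to use as the version query string in the URLs your templates emit.

import hashlib
from pathlib import Path

import boto3

s3 = boto3.client("s3")

BUCKET = "wp-demo"                              # placeholder: the static-content bucket
THEME_DIR = Path("wp-content/themes/mytheme")   # placeholder: local theme path

for path in THEME_DIR.rglob("*.css"):
    key = path.as_posix()
    # Short content hash to use as a version identifier in query strings.
    digest = hashlib.md5(path.read_bytes()).hexdigest()[:8]

    # Upload with a far-future Cache-Control header; the query string, not the
    # object key, changes whenever the file content changes.
    s3.upload_file(
        str(path),
        BUCKET,
        key,
        ExtraArgs={"CacheControl": "public, max-age=31536000", "ContentType": "text/css"},
    )
    print(f"https://example.cloudfront.net/{key}?v={digest}")  # placeholder domain

Using a content hash rather than a timestamp means the query string only changes when a file actually changes, so unchanged assets remain cached at CloudFront and in browsers.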